Saturday, October 29, 2011

Gaviota Endgame Tablebases

One thing I learned in the past about coding endgame knowledge is that when you think you have done it correctly, you only think that. The chances that you coded some big mistakes are really high.

One example is the endgame king and two bishops vs. king and bishop. In theory this is seen as an easy-to-defend draw. So you might think that when you encounter it on the board you can just return a draw score instead of searching and evaluating further. This is a big mistake, because it makes the engine blind. There are a lot of positions in a KBBKB endgame that are not drawn.

Trivial ones are those where the defender's bishop can be captured right away, but there are also non-trivial ones where the attacker is able to separate the defending king and bishop, pin the bishop to the king and capture it later. To detect those not-so-few exceptions you need a rule set that tells you whether a position is a safe draw or not.

Missing a draw is not so bad; the engine just keeps searching until it finds a known draw. Announcing a draw in a position that is a forced win is fatal, because the engine might play a stupid move (giving away its bishop) as a result.

So the rule set must be verified against a database to make sure it only announces correct draws. In my former attempts with 4-man positions I built the required database myself by retro analysis from mate positions. But I have no algorithm that deals with 5-man positions yet, and I thought that before reinventing this wheel I would have a look at the available data.

As I only need the information whether a position is a draw, and not the actual distance-to-mate information, I thought that bitbases would be enough. Here only a Win/Draw/Loss flag is returned. I found a bitbase set from Daniel Shawul, called the Scorpio bitbases. Those come with a DLL that lets you probe the bitbase. Usage is pretty easy, but unfortunately it did not work for me. After a few probes the program crashed with an exception caused by a function in the DLL. So I left those alone.

Next to try were the Gaviota Tablebases from Miguel Ballicora. These are a bit more work to set up, because here the complete source is provided, and because 4 different compression schemes are supported, it's quite a heap of files. I was not able to compile it with my MinGW edition of gcc; it complained about a missing "resource.h" file. But in MSVC it worked. I was even able to compile a static library from it, so I don't have to include all those single files in my engine project.

I must say the design is pretty good, flexible and clear. Once you understand how they work, it is easy to probe them. Well done, Miguel!

So I will now verify my rules against the Gaviota Tablebases, and if some day I decide to also use tablebases in the engine's actual search, I'm sure I will use the Gaviota ones. Most of the interface code to use them in my framework is, of course, already written.
 

Tuesday, October 25, 2011

Still lacking basic endgame knowledge

iCE 0.2 is currently participating in Olivier Deville's ChessWar Division G, and it is not doing that well. It drew the last 3 games, in most cases from a stronger position, not recognizing that it was approaching a drawish endgame.

The last endgame it entered was a KBBKB endgame (king and two bishops against king and bishop), which is of course a draw, but iCE scored it at better than 4 pawns. This is of course fatal and makes the program (and its author) look rather stupid.

So this is the next thing on the list. But first I must decide how to implement it.

Friday, October 21, 2011

Setting a new goal

I just realized that this blog still lists as the goal of this project an engine that does not play stupid moves and is strong enough to beat me.

I think this is accomplished. I don't remember the last time I was able to beat my engine; even if I give the engine a short time control and take infinite time for myself, I have no chance.

So it's time to set a new goal.

I will try to make the engine strong enough to enter the Top 100 of the CCRL (Computer Chess Rating List). This is really a long-term goal and will keep me motivated for quite some time.

Futile caching attempts

When trying to improve the engine I focus on only one of the different areas where an improvement is possible at a time. The main areas I see are
  • better evaluation (estimate the real value of a position more accurately)
  • better search (try to search promising lines deeper and silly lines shallower)
  • execution speed (process more nodes per second)
Any improvement in those areas should improve the strength of the engine.

After adding the book, which was more of a technical must-have, I am now trying to improve the execution speed of the engine, squeezing out some more nodes per second.

One idea I tried was caching the attack patterns for rook and bishop movers. The idea is simple: if the attack pattern is calculated for a piece on a square, store the resulting pattern in a small array, together with the relevant obstructions currently present on the board that formed that pattern.

If you later ask for the attack pattern of that piece on that square, first check whether the board still contains the same relevant obstructions; if so, just return the pattern from the small array. If not, compute it from scratch and update the array.
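A minimal sketch of this caching scheme for rooks. The plain ray-walk below stands in for the magic bitboard lookup the cache tries to avoid, and in a real engine the relevant-blocker mask would come from a precomputed table rather than a second ray walk; all names here are illustrative:

```cpp
#include <cstdint>

typedef uint64_t Bitboard;

// Ray-walk rook attacks; stand-in for the magic bitboard lookup.
Bitboard rookAttacksSlow(int sq, Bitboard occ) {
    static const int dr[4] = {1, -1, 0, 0}, df[4] = {0, 0, 1, -1};
    Bitboard att = 0;
    for (int d = 0; d < 4; ++d)
        for (int r = sq / 8 + dr[d], f = sq % 8 + df[d];
             r >= 0 && r < 8 && f >= 0 && f < 8; r += dr[d], f += df[d]) {
            att |= 1ULL << (r * 8 + f);
            if (occ & (1ULL << (r * 8 + f)))
                break;                       // the ray stops at a blocker
        }
    return att;
}

// One cache slot per square: the relevant obstructions that formed the
// pattern, plus the pattern itself.
struct CacheEntry { Bitboard relevantOcc, attacks; bool valid; };
static CacheEntry rookCache[64];

Bitboard rookAttacksCached(int sq, Bitboard occ) {
    // Only blockers on the rook's rays can change the pattern.
    Bitboard rel = occ & rookAttacksSlow(sq, 0);
    CacheEntry& e = rookCache[sq];
    if (e.valid && e.relevantOcc == rel)
        return e.attacks;                    // hit: same relevant blockers
    e.relevantOcc = rel;                     // miss: recompute and store
    e.attacks = rookAttacksSlow(sq, occ);
    e.valid = true;
    return e.attacks;
}
```

The cache hit only pays off if the comparison plus array access is cheaper than the lookup it replaces, which is exactly where this idea failed in practice.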

Simple caching, cool idea and not hard to implement.

... and does not speed up the program at all.

In fact it slowed it down. The reason probably is that the magic bitboards I use are so fast that the caching overhead is greater than the actual magic bitboard look-up it tries to avoid. As access to main memory is usually the bottleneck, I had hoped for better results. But probably those accesses that produced a cache hit in my implementation would have produced a cache hit anyway, because the data was still in one of the CPU caches. So I discarded those changes for now and went back to my old code base.

One idea is still left to try. If I need a queen attack pattern I currently use

queenAttacks(sq) = rookAttacks(sq) | bishopAttacks(sq)

so caching the queen attacks might save more work and produce a gain in the end. But this requires more code changes throughout the whole program (I have no queenAttacks function today), so I will postpone that for the future.
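For illustration, a hedged sketch of what such a queenAttacks function would combine; the generic ray-walker again stands in for the two magic lookups, and a queen cache entry would key on the union of the rook and bishop relevant-occupancy masks (none of this exists in the engine yet):

```cpp
#include <cstdint>

typedef uint64_t Bitboard;

// Generic ray-walk for any slider; stand-in for the magic lookups.
Bitboard sliderAttacks(int sq, Bitboard occ, const int dirs[][2], int n) {
    Bitboard att = 0;
    for (int d = 0; d < n; ++d)
        for (int r = sq / 8 + dirs[d][0], f = sq % 8 + dirs[d][1];
             r >= 0 && r < 8 && f >= 0 && f < 8;
             r += dirs[d][0], f += dirs[d][1]) {
            att |= 1ULL << (r * 8 + f);
            if (occ & (1ULL << (r * 8 + f)))
                break;                       // ray stops at a blocker
        }
    return att;
}

static const int rookDirs[4][2]   = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
static const int bishopDirs[4][2] = {{1, 1}, {1, -1}, {-1, 1}, {-1, -1}};

// The combination a cached queenAttacks would avoid recomputing:
// two lookups plus an OR, replaced by one cache probe on a hit.
Bitboard queenAttacks(int sq, Bitboard occ) {
    return sliderAttacks(sq, occ, rookDirs, 4)
         | sliderAttacks(sq, occ, bishopDirs, 4);
}
```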

Wednesday, October 19, 2011

Fixing a severe time control bug

In a recent ChessWar Division G game that iCE 0.2 played against cheng, my engine lost on time. The position was lost anyway, with cheng being two pawns up, but as a fellow programmer once stated, "time losses are a bad thing", so this was somewhat embarrassing.

I checked the log, and it showed that the problem hits when the game is close to the next time control and the search score suddenly drops quite a lot. If the search score drops, iCE desperately allocates more time for the move, hoping to find a way out of the mess by searching a bit longer. And in some cases it allocated more additional time than it had left until the next time control (therefore losing on time).
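The shape of the fix is a simple clamp: however much extra "panic" time the score drop requests, the total allocation must never exceed the time actually left until the next time control, minus a small safety margin. A sketch with assumed names and units (not the engine's actual code):

```cpp
#include <algorithm>

// baseAllocMs:          normal time budget for this move
// panicExtraMs:         extra time requested because the score dropped
// msLeftToTimeControl:  clock time remaining until the next time control
// safetyMarginMs:       buffer for communication/GUI overhead (assumed)
int allocateMoveTime(int baseAllocMs, int panicExtraMs,
                     int msLeftToTimeControl, int safetyMarginMs) {
    int wanted    = baseAllocMs + panicExtraMs;
    int hardLimit = msLeftToTimeControl - safetyMarginMs;
    // Never allocate more than is actually on the clock.
    return std::min(wanted, std::max(hardLimit, 0));
}
```

Without the clamp, `wanted` alone was used, which is exactly the bug: a big panic extension near the time control exceeded the remaining clock time.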

This is rather severe and also a bit embarrassing, so although I had already closed the iCE 0.2 development branch, I decided to reopen it and fix the bug there rather than wait for the release of iCE 0.3 sometime next year.

So a new version of iCE 0.2 (with build number 1092), with the bug fixed, is available on my homepage. Whether it has a big impact on the playing strength I don't know, but I assume not, because the bug mainly caused losses in games that were lost anyway.

One interesting observation, though

In order to test the new release I let the fixed iCE 0.2 play 250 games against the most recent iCE 0.3 beta (also with the fixed TC), using a 40-moves-in-1-minute time control, and it shows that the new book feature of iCE 0.3 helps the engine more than I thought (at least at short TCs).

   Engine       Score   
1: iCE 0.3    146,5/250 
2: iCE 0.2    103,5/250

Sunday, October 16, 2011

Building and using a chess opening book

After the release of iCE 0.2 I turned to something that had been on my list for some time. iCE has no opening theory knowledge so far; it calculates every move from the very start of the game. This wasn't a priority until now, because iCE knows the basic chess opening principles and calculated the theory moves most of the time anyway.

But the lack of a book may hurt in games with short time controls, and a book also adds some diversity, because otherwise the games iCE plays are very deterministic.

In my first engine, mACE, I assembled a small book by hand using a chess book I had. After some weeks I had maybe 7,000 positions and 9,000 moves in a small book. I could have used that book for iCE as well, but I wanted to try a different approach.

I created a book-building application for iCE. This application reads in information from actual games and calculates a statistic of how often a move was played by a player who later won the game. From this database I was then able to extract positions and moves. Depending on the settings, I could easily create books of different sizes.
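The core of that statistic can be sketched as a counter over (position, move) pairs, incremented only for moves played by the eventual winner. Positions and moves are plain strings here purely for illustration; the real tool uses the engine's internal encodings:

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// Count of how often each (position, move) pair was played by a winner.
typedef std::map<std::pair<std::string, std::string>, long> BookStats;

// Record one game: moveList holds (position, move) pairs in game order,
// winnerMoved flags the entries played by the side that won the game.
void recordGame(BookStats& stats,
                const std::vector<std::pair<std::string, std::string> >& moveList,
                const std::vector<bool>& winnerMoved) {
    for (size_t i = 0; i < moveList.size(); ++i)
        if (winnerMoved[i])              // only the winner's moves count
            ++stats[moveList[i]];
}
```

From such counts, the per-position percentages in the table below fall out by dividing each move's count by the total for its position.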

At the moment my database contains 7.4 million position-move records. It looks like this:

Start Position:

No    Move      Prob    Played
------------------------------
 1    e2e4      46      93.106
 2    d2d4      35      72.529
 3    g1f3       9      18.729
 4    c2c4       9      17.756
 5    g2g3       1       1.343
 6    b2b3       0         585
 7    f2f4       0         471
 8    b1c3       0         416
 9    b2b4       0         229
10    a2a3       0          54
11    h2h3       0          45
12    d2d3       0          44
13    e2e3       0          36
14    c2c3       0          33
15    g2g4       0          15
16    f2f3       0           3
17    a2a4       0           1
18    g1h3       0           1
19    b1a3       0           1

When a book is created, the first 5 moves shown here are copied to the book; moves with a probability of 0% are skipped. So the book for the start position for White contains the following records:

No    Move      Prob
--------------------
 1    e2e4      46
 2    d2d4      35
 3    c2c4       9
 4    g1f3       9
 5    g2g3       1

When the engine is to move, it creates a random number between 1 and 100 and plays the first move where the accumulated probabilities of the moves reach or exceed the random number.

E.g. if the random number 50 is created, then the move d2d4 is played, because 46 is still less than 50, but 46 + 35 exceeds 50.
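This selection rule is a few lines of code; a sketch with assumed names, using the start-position book records from above:

```cpp
#include <string>
#include <vector>

struct BookMove { std::string move; int prob; };   // probability in percent

// Draw r in 1..100 and play the first move whose accumulated probability
// reaches or exceeds r, exactly as described in the post.
std::string pickBookMove(const std::vector<BookMove>& book, int r) {
    int sum = 0;
    for (size_t i = 0; i < book.size(); ++i) {
        sum += book[i].prob;
        if (sum >= r)
            return book[i].move;
    }
    // Rounding can leave the stored percentages summing to less than 100;
    // fall back to the last move in that case.
    return book.empty() ? std::string() : book.back().move;
}
```

With the table above, r = 46 still lands on e2e4, r = 47 through 81 on d2d4, and only r = 100 reaches g2g3, so the relative frequencies of the source games are preserved.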

This seems to work very well, even though I use only a very small subset of the 7.4 million positions in the actual book.

How this opening theory knowledge impacts the overall strength of iCE I don't know; I assume not much. To be really helpful the book probably needs a lot of tuning, which I don't intend to do at the moment. It's a new item on the todo list, however.