Saturday, November 12, 2011

You can't win with 2 knights, so it's a draw, isn't it ?

I just encountered an awkward bug in iCE. I had some simple rules in my evaluation that check whether there is still enough mating material on the board. One of those rules was


If the only material left on the board is 2 knights it is a draw.

This is correct in general, but unfortunately not always, and according to Murphy's law, things that can go wrong will go wrong.

I was still coding 5 man endgame knowledge and was just working on a very simple one: king and 2 knights vs king and 1 knight (KNNKN). This is also almost always a draw, but as the defending side still has a knight, it can now be mated where without the knight the move sequence would only lead to a stalemate. So I added some easy checks to recognize those positions and otherwise declare a draw.

I then checked whether it works correctly in the engine with that position

 White moves and mates in 4
A possible mate sequence is

1. Kc2  Nc5    2. Nb4+  Ka1    3. Nd4  Nb3     4. Nxb3#

Leading to this position
Mate !
So now Murphy's law hits and iCE was not recognizing this mate, because it obeyed the 2 knights rule and announced a draw. iCE was therefore not able to solve this simple mate in 4. Even if this will probably never appear as a problem in a real game, it bothers me to know it's there. I modified the 2 knights draw rule slightly so that it does not announce a draw if the defender is in check and has nowhere to go. Knowing only 2 knights are on the board, this is a simple mate check (much simpler than in a normal position). And now it solves this simple position correctly in no time.
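With only knights and kings left, the mate test reduces to "king in check and no uncovered flight square". Here is a minimal self-contained sketch of that idea (my own illustration, not iCE's actual code; for brevity it ignores the defending king capturing an undefended knight):

```cpp
#include <cassert>
#include <cstdint>

using U64 = uint64_t;

U64 bit(int sq) { return 1ULL << sq; }

// knight attack set from a square (0..63, a1 = 0)
U64 knightAttacks(int sq) {
    int r = sq / 8, f = sq % 8;
    static const int dr[] = {1, 2, 2, 1, -1, -2, -2, -1};
    static const int df[] = {2, 1, -1, -2, -2, -1, 1, 2};
    U64 att = 0;
    for (int i = 0; i < 8; i++) {
        int nr = r + dr[i], nf = f + df[i];
        if (nr >= 0 && nr < 8 && nf >= 0 && nf < 8) att |= bit(nr * 8 + nf);
    }
    return att;
}

U64 kingAttacks(int sq) {
    int r = sq / 8, f = sq % 8;
    U64 att = 0;
    for (int dr = -1; dr <= 1; dr++)
        for (int df = -1; df <= 1; df++) {
            if (!dr && !df) continue;
            int nr = r + dr, nf = f + df;
            if (nr >= 0 && nr < 8 && nf >= 0 && nf < 8) att |= bit(nr * 8 + nf);
        }
    return att;
}

// true if the defending king is in check by the two knights and every
// flight square is covered by a knight or the attacking king
bool knightsMate(int defKing, int atkKing, int n1, int n2) {
    U64 attacked = knightAttacks(n1) | knightAttacks(n2) | kingAttacks(atkKing);
    if (!(attacked & bit(defKing))) return false;   // not even in check
    U64 escapes = kingAttacks(defKing) & ~attacked; // uncovered flight squares
    return escapes == 0;
}
```

In the final position of the mate sequence above (black king a1, white king c2, knights b4 and b3) this check fires, while any position without a check on the cornered king falls through to the normal draw rule.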

iCE 0.3 build 491 [2011.11.12]
position fen 8/8/8/1N6/n7/3N4/k7/2K5 w - - 0 1
go depth 8
info depth 1 time 0 nodes 34 pv b5d6  nps 33999 score cp 22
info depth 2 time 0 nodes 116 pv c1c2 a4b2  nps 115999 score cp 22 
info depth 3 time 16 nodes 333 pv c1c2 a4b2 d3b4 a2a1 b5c7  nps 20812 score cp 23
info depth 4 time 0 nodes 606 pv c1c2 a4b2 d3b4 a2a1 b5c7 b2d1  nps 605999 score cp 23
info depth 5 time 15 nodes 1012 pv c1c2 a4b2 d3b4 a2a1 b5d4 b2d1 d4b3 nps 67466 score mate 4
info depth 6 time 16 nodes 1490 pv c1c2 a4b2 d3b4 a2a1 b5d4 b2d1 d4b3 nps 93124 score mate 4
info depth 7 time 15 nodes 1614 pv c1c2 a4b2 d3b4 a2a1 b5d4 b2d1 d4b3 nps 107600 score mate 4
bestmove c1c2 ponder a4b2

Thursday, November 10, 2011

More trivial endgame knowledge

I'm moving forward in enabling iCE to distinguish between drawn and winnable endgames. And sometimes the stuff that chess theory considers trivial is nevertheless an important case for a chess engine to know.

Pawnless endgames with 2 minors vs a rook, like white playing with both bishops and black playing with a rook, are such a case. Theory says "trivial" and "unimportant" because the rook sacrifices itself for a minor and it's a draw then. For humans this rule is fine; for a chess engine it is not, because in the above example about 10% of positions are winnable. If the engine does not know which positions might be winnable, it will carelessly enter such a position as the defending side and lose. In all variations of KmmKR (even in the king and two knights vs king and rook endgame) positions exist that can be won by either side.

 KmmKR - whoever moves first wins

So to make the evaluation smart enough to handle those games correctly you do two things:
  1. Try to detect that the position is dead draw
  2. If it is not a draw try to give it a score that allows the engine to make progress (like a king at the edge is bad, so the winning side will make moves that force the defending side to the edge)
Step 1 is pretty tough, because recognizing a position as a draw just by statically looking at the location of the pieces and the side to move is not trivial. Without a bitbase it is nearly impossible to define a perfect draw recognizer, or it would be so complex that it would also be very slow. So the draw recognizer uses heuristics that detect patterns that indicate winning potential and then plays it safe and outputs "No draw".

A simple example is the pattern "king in the corner". There are a lot of winning positions when one side has its king in the corner, so it is safer to say "No draw". Then you don't make an error, but of course you miss all draws in positions with a cornered king.

So you refine the pattern, e.g. only if the king is in the corner and the opponent king is also near the same corner do we say "No draw". This misses fewer draws but the pattern is more complex. So in the end it is a tradeoff between the complexity of the patterns and the draws that you miss.
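The shape of such a pattern check might look like this (a hypothetical sketch, not the actual iCE recognizer; `looksLikeDraw` and the distance threshold are made up for illustration):

```cpp
#include <cassert>
#include <cstdlib>

int fileOf(int sq) { return sq % 8; }
int rankOf(int sq) { return sq / 8; }

bool inCorner(int sq) {
    int f = fileOf(sq), r = rankOf(sq);
    return (f <= 1 || f >= 6) && (r <= 1 || r >= 6);
}

// Chebyshev distance between two kings
int kingDistance(int a, int b) {
    int df = abs(fileOf(a) - fileOf(b));
    int dr = abs(rankOf(a) - rankOf(b));
    return df > dr ? df : dr;
}

// refined pattern from the text: a cornered king only signals winning
// potential when the enemy king is close enough to support a mating attack;
// any pattern hit means "No draw" and the engine keeps searching
bool looksLikeDraw(int kingA, int kingB) {
    if (inCorner(kingA) && kingDistance(kingA, kingB) <= 3) return false;
    if (inCorner(kingB) && kingDistance(kingA, kingB) <= 3) return false;
    return true; // no winning pattern detected: safe to call it a draw
}
```

The tradeoff is visible directly in the code: widening the distance threshold misses more draws but catches more wins, which is exactly the "Incorrect Draws: 0 at the cost of Missed Draws" balance shown in the statistics below.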

Step 2 is then pretty simple, because the things that define the quality of a position are pretty easy: basically the distance between the kings, the distance of the defending king from the edge, or the corner the bishop controls.

So for now I have this type of endgame knowledge added. The recognizers are not perfect (as explained above) but hopefully good enough.

This is the performance of the recognizers. Especially the wins (mostly for the side with the rook) with 2 knights are difficult to detect.

KbbKR
Incorrect Draws : 0
Spotted   Draws : 416.158.800

Missed    Draws : 730.608.536
Spotted   Wins  : 119.989.080

KbnKR
Incorrect Draws :           0
Spotted   Draws : 425.127.870

Missed    Draws : 675.820.802
Spotted   Wins  : 206.160.824

KnnKR
Incorrect Draws :           0
Spotted   Draws : 326.444.616

Missed    Draws : 972.419.472
Spotted   Wins  :  43.430.424

Tuesday, November 1, 2011

The KBBKB ruleset is done

So as announced in the previous post I was working on a rule set to recognize a draw in a king and 2 bishops vs bishop endgame. I verified the rule set against the Gaviota tablebase for that endgame and it is now correct. It was unfortunately much more complicated than I first thought.
For simplicity I always assume White is the side with the 2 bishops, so I only have to check the cases where the white king is on one of the 10 core squares in a pawnless endgame (A1, B1, C1, D1, B2, C2, D2, C3, D3, D4). Due to symmetry this is enough. For those 10 squares I now recognize
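The symmetry argument can be sketched as follows (my own illustration; iCE's actual code may differ): any pawnless position can be mirrored horizontally, vertically and along the a1-h8 diagonal, so the white king always lands on one of the 10 core squares.

```cpp
#include <cassert>

// map any square (0..63, a1 = 0) onto the a1-d1-d4 triangle of 10 core
// squares: A1, B1, C1, D1, B2, C2, D2, C3, D3, D4
int normalizeKingSquare(int sq) {
    int f = sq % 8, r = sq / 8;
    if (f > 3) f = 7 - f;                    // mirror files e..h onto d..a
    if (r > 3) r = 7 - r;                    // mirror ranks 5..8 onto 4..1
    if (f < r) { int t = f; f = r; r = t; }  // mirror along the a1-d4 diagonal
    return r * 8 + f;
}
```

All three mirrors preserve the game-theoretic value of a pawnless position, so the rule set only ever needs to be defined and verified for these 10 king squares.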

Incorrect Draws :           0
Spotted   Draws : 146.068.302
Missed    Draws :  44.341.638
Spotted   Wins  :  16.484.560

which is pretty good, as the missed draws also contain a lot of positions where the attacker's bishops travel on the same color, which I did not bother to recognize.

So another little piece of endgame knowledge added. If iCE 0.2 had already possessed it, it would have won at least 1 more game :-)

Still a lot is missing, like king, rook and minor vs king and rook, which has also already cost half a point.

Saturday, October 29, 2011

Gaviota Endgame Tablebases

One thing I learned in the past about coding endgame knowledge is that when you think you have done it correctly - you only think that. The chances that you coded some big mistakes are really high.

One example is the endgame king and 2 bishops vs bishop. Theory sees this as an easy-to-defend draw. So you might think that when you encounter it on the board you can just return a draw score instead of searching and evaluating further. This is a big mistake because it will make the engine blind. There are a lot of positions in a KBBKB endgame that are not drawn.

Trivial ones are those where the defender's bishop can be captured right away, but there are also non-trivial ones where the attacker is able to separate the defending king and bishop, pin the bishop to the king and capture it later. To detect those not so few exceptions you need a rule set that tells you whether a position is a safe draw or not.

Missing a draw is not so bad; the engine just keeps searching until it finds a known draw. Announcing a draw in a position that is a forced win is fatal, because the engine might play a stupid move (giving away its bishop) because of that.

So the rule set must be verified against a database to ensure it only announces correct draws. In my former attempts with 4 man positions I built the required database myself by retro analysis from mate positions. But I have no algorithm that deals with 5 man positions yet, and before reinventing this wheel I thought I'd have a look at the available data.

As I only need the information whether a position is a draw, and not the actual distance-to-mate information, I thought that bitbases would be enough. Here only a Win - Draw - Loss flag is returned. I found a bitbase set from Daniel Shawul, called the Scorpio bitbases. Those come with a DLL that lets you probe the bitbase. Usage is pretty easy, but unfortunately it did not work for me. After a few probes the program crashed with an exception caused by a function in the DLL. So I left those alone.

The next try were the Gaviota Tablebases from Miguel Ballicora. These are a bit more work to set up, because here the complete source is provided, and because 4 different compression schemes are supported it's quite a heap of files. I was not able to compile it with my mingw edition of gcc; it complained about a missing "resource.h" file. But in MSVC it worked. I was even able to compile a static library from it, so I don't have to include all those single files in my engine project.

I must say the design is pretty good, flexible and clear. Once you understand how they work it is easy to probe them. Well done Miguel!

So I will now verify my rules against the Gaviota Tablebases, and if maybe in the future I decide to use tablebases in the actual search of the engine too, I'm sure I will use the Gaviota ones. Of course most of the interface code to use them in my framework is now already written as well.
 

Tuesday, October 25, 2011

Still lacking basic endgame knowledge

iCE 0.2 is currently participating in Oliver Deville's Chess War Division G and it is not doing that well. It drew the last 3 games, in most cases from a stronger position, not recognizing that it was approaching a drawish endgame.

The last endgame it entered was KBBKB (king and 2 bishops against king and bishop), which is of course a draw, but iCE scored it better than 4 pawns. This is fatal and makes the program (and its author) look rather stupid.

So this is the next thing on the list. But first I must decide how to implement it.

Friday, October 21, 2011

Setting a new goal

I just realized that this blog still lists as goal of this project an engine that does not play stupid moves and is strong enough to beat myself.

I think this is accomplished. I don't remember the last time I was able to beat my engine; even if I give the engine a short time control and use infinite time for myself I have no chance.

So it's time to set a new goal.

I will try to make the engine strong enough that it enters the Top 100 of the CCRL (Computer Chess Rating List). This is really a long term goal and will keep me motivated for some time to come.

Futile caching attempts

When trying to improve the engine I focus on only one of the different areas where an improvement is possible at a time. The main areas I see are
  • better evaluation (estimate the real value of a position more accurately)
  • better search (try to search promising lines deeper and silly lines shallow)
  • execution speed (process more nodes per second)
Any improvement in those areas should improve the strength of the engine.

After adding the book, which is more of a technical must-have, I am now trying to improve the execution speed of the engine, squeezing out some more nodes per second.

One idea I tried was caching the attack patterns for rook and bishop movers. The idea is simple: when the attack pattern is calculated for a piece on a square, store the resulting pattern in a small array together with the relevant obstructions currently present on the board that formed that pattern.

If you later ask for the attack pattern of that piece on that square, first check whether the board still contains the same relevant obstructions, and if so just return the pattern from the small array. If not, compute it from scratch and update the array.
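A sketch of what such a cache could look like (hypothetical names, not iCE's code; a slow ray scan stands in here for the magic bitboard lookup it would wrap):

```cpp
#include <cassert>
#include <cstdint>

using U64 = uint64_t;

// stand-in for the fast magic-bitboard lookup: a plain ray scan for rooks
U64 rookAttacksSlow(int sq, U64 occ) {
    U64 att = 0;
    static const int dirs[4] = {8, -8, 1, -1};
    for (int d = 0; d < 4; d++) {
        int s = sq;
        while (true) {
            int f = s % 8;
            if (dirs[d] == 1 && f == 7) break;   // can't go past the h-file
            if (dirs[d] == -1 && f == 0) break;  // can't go past the a-file
            s += dirs[d];
            if (s < 0 || s > 63) break;
            att |= 1ULL << s;
            if (occ & (1ULL << s)) break;        // blocker: ray stops here
        }
    }
    return att;
}

// the obstructions relevant to a rook on sq: its rank and file
U64 rookMask(int sq) {
    U64 rank = 0xFFULL << (8 * (sq / 8));
    U64 file = 0x0101010101010101ULL << (sq % 8);
    return (rank | file) & ~(1ULL << sq);
}

struct CacheEntry { U64 relevantOcc; U64 attacks; bool valid; };
CacheEntry rookCache[64] = {};

U64 rookAttacksCached(int sq, U64 occ) {
    U64 rel = occ & rookMask(sq);
    CacheEntry &e = rookCache[sq];
    if (e.valid && e.relevantOcc == rel) return e.attacks; // cache hit
    e = { rel, rookAttacksSlow(sq, occ), true };           // miss: recompute
    return e.attacks;
}
```

The comparison and store on every probe are exactly the overhead that, as described below, turned out to cost more than the magic lookup it replaced.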

Simple caching, cool idea and not hard to implement.

... and does not speed up the program at all.

In fact it slowed it down. The reason is probably that the magic bitboards I use are so fast that the caching overhead exceeds the actual magic bitboard lookup it tries to avoid. As access to main memory is usually the bottleneck I had hoped for better results. But probably those accesses that produced a cache hit in my implementation would have hit one of the CPU caches anyway, because the data was still there. So I discarded those changes for now and went back to my old code base.

One idea is still left to try. If I need a queen attack pattern I currently use

queenAttacks(sq) = rookAttacks(sq) | bishopAttacks(sq)

so caching the queen attacks might save more work and produce a gain in the end. But this requires more code changes throughout the whole program (I have no queenAttacks function today), so I will postpone that to the future.

Wednesday, October 19, 2011

Fixing a severe time control bug

In a recent game that iCE 0.2 played against cheng in Chess Wars Division G, my engine lost on time. The position was lost anyway, with cheng being 2 pawns up, but as a fellow programmer once stated, "time losses are a bad thing", so this was somehow embarrassing.

I checked the log and it showed that the problem hits when the game is close to the next time control and the search score suddenly drops quite a lot. If the search score drops, iCE desperately allocates more time for the move, to maybe find a way out of the mess by searching a bit longer. And in some cases it allocated more additional time than it had left until the next time control (therefore losing on time).

This is rather severe and also a bit embarrassing, so although I had already closed the iCE 0.2 development branch I decided to reopen it and fix the bug there, not waiting for the release of iCE 0.3 sometime next year.

So a new version of iCE 0.2 (with build number 1092) is available at my homepage where the bug is fixed. Whether it has a big impact on the playing strength I don't know, but I assume not, because it mainly caused losses in games that were lost anyway.

One interesting observation, though

In order to test the new release I let the fixed iCE 0.2 play 250 games against the most recent iCE 0.3 beta (also with fixed TC) using a 40 moves in 1 minute time control, and it shows that the new book feature of iCE 0.3 helps the engine more than I thought (at least at short TCs).

   Engine       Score   
1: iCE 0.3    146,5/250 
2: iCE 0.2    103,5/250

Sunday, October 16, 2011

Building and using a chess opening book

After the release of iCE 0.2 I turned to something that had been on my list for some time: iCE has no opening theory knowledge so far. It calculates every move from the very start of the game. It wasn't a priority until now because iCE knows the basic chess opening principles and calculated the theory moves most of the time anyway.

But it may hurt in games with short time controls, and it will also lead to some diversity in the games, because otherwise the games iCE plays are very deterministic.

In my first engine mACE I assembled a small book by hand using a chess book I had. After some weeks I had maybe 7000 positions and 9000 moves in a small book. I could have used that book for iCE also - but I wanted to try a different approach.

I created a book building application for iCE. This application reads in information from actual games and calculates a statistic of how often a move was played by a player that later won the game. From this database I was then able to extract positions and moves. Depending on the settings I could easily create books of different sizes.

At the moment my database contains 7.4 million position-move records. It looks like this

Start Position:

No    Move      Prob    Played
------------------------------
 1    e2e4      46      93.106
 2    d2d4      35      72.529
 3    g1f3       9      18.729
 4    c2c4       9      17.756
 5    g2g3       1       1.343
 6    b2b3       0         585
 7    f2f4       0         471
 8    b1c3       0         416
 9    b2b4       0         229
10    a2a3       0          54
11    h2h3       0          45
12    d2d3       0          44
13    e2e3       0          36
14    c2c3       0          33
15    g2g4       0          15
16    f2f3       0           3
17    a2a4       0           1
18    g1h3       0           1
19    b1a3       0           1

When a book is created, the first 5 moves shown here are copied to the book; moves with a probability of 0% are skipped. So the book entry for the start position for White contains the following records

No    Move      Prob
--------------------
 1    e2e4      46
 2    d2d4      35
 3    c2c4       9
 4    g1f3       9
 5    g2g3       1

When the engine probes the book it creates a random number between 1 and 100 and plays the first move at which the cumulative probabilities of the moves hit or exceed the random number.

e.g. if the random number 50 is created, then the move d2d4 is played, because 46 is still less than 50 but 46 + 35 exceeds 50.
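The selection rule can be sketched like this (`pickBookMove` and `BookMove` are names I made up for illustration, not iCE's actual identifiers; the random number is passed in so the rule itself stays testable):

```cpp
#include <cassert>
#include <string>
#include <vector>

struct BookMove {
    std::string move;
    int prob; // probability in percent, as stored in the book
};

// r is a random number in 1..100; take the first move whose cumulative
// probability reaches r
std::string pickBookMove(const std::vector<BookMove> &book, int r) {
    int cum = 0;
    for (const BookMove &bm : book) {
        cum += bm.prob;
        if (cum >= r) return bm.move; // cumulative total hits r first here
    }
    // rounding slack: percentages may not sum to exactly 100
    return book.empty() ? "" : book.back().move;
}
```

With the start-position book above, a move is thus played roughly in proportion to how often winners played it, which also gives the desired diversity between games.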

This seems to work very well, even though I use only a very small subset of the 7.4 million positions in the actual book.

How this opening theory knowledge impacts the overall strength of iCE I don't know; I assume not much. To be really helpful the book probably must be tuned a lot, which I don't actually intend to do at the moment. It's a new point on the todo list however.

Wednesday, September 7, 2011

ice v0.2 Build 1085 - maintenance release

Soon after the release of iCE 0.2 I received feedback that the engine has problems on certain systems, mainly 64 bit operating systems. While it is a Win32 compile and only tested on Win32 systems I still want it to run on 64 bit systems too.

So I compiled a maintenance release of it that addresses some potential problems that let iCE fail on certain systems. If you experience problems with the old one, give the new one a try. Strength of both 0.2 versions is almost identical as no change to eval or search was made.

If the new one still fails for you I welcome feedback (with the setup you are using meaning CPU type, OS and Chess GUI and what settings you set for iCE to play). I will have a look at it.

Monday, August 29, 2011

iCE 0.2 is about to be released

I think it is now time to release the 2nd version of my chess engine. All the changes I did in the last 6 months should add up to a nice improvement compared with iCE 0.1.

The last thing I activated is the permanent brain feature. I designed the engine for it from the beginning but did not enable it, because of the more difficult protocol handshake and because it makes testing more difficult. But as most other engines use a permanent brain (meaning they also think when it is the opponent's turn) I'd like to level the ground.

A final private tournament should give me the confidence that the engine is stable and really an improvement. I paired it against the same set of opponents that its first version played, and it did pretty well. Only Ruffian is still giving it a hard time.

Tournament
RankEngineCountryScoreRuIce 0.2
1Ruffian 1.0.5Sweden44,0/48· ·· ·· ··=11101=1
2Ice 0.2 Germany39,5/48=00010=0· ·· ·· ··
3Resp019 Germany27,5/480000000000000000
4GERBIL USA16,5/48001000000000=000
5Bestia_090 Ukraine14,5/48000000000000=010
6Ice 0.1 Germany 14,5/480=00000=00000000
7BigLion Cameroon11,5/4800000000000000=0











 168 games played / Tournament is finished

Tournament start: 2011.08.27, 14:42:18
Latest update: 2011.08.29, 09:00:06
Site/ Country: Germany
Level: Tournament 40/5, Ponder ON
Hardware: Intel(R) Core(TM)2 Duo CPU P8600 @ 2.40 GHz (dual core, 2394 MHz) with 1.992 MB Memory
Operating system: Microsoft Windows XP Professional Service Pack 3 (Build 2600)



Friday, August 12, 2011

Chess Engine Move Ordering Statistics

As stated in my previous post, move ordering in alpha beta searches is very important to cut down the size of the search tree. I lately spent a lot of time and effort trying to improve the move ordering and wasn't very successful in shrinking the search tree, no matter what I tried.

I wondered whether the reason I did not improve was that the remaining room for improvement was so small. So I started to measure the quality of the move ordering in my engine (I should have done that earlier, I know).

I ran a 10 second search over 250 random test positions to get an idea at which position in the list a move causes a cutoff - the earlier the better. Here are my results from those 250 positions.

Total recorded cut offs:         251,607,401
CutOff with 1st move:                 95.368%
CutOff with 2nd move:                  2.519%
Avg. move no that causes cut off:      1.174
CutOff in remaining moves*:        8,643,777   (3.43%)
CutOff move no in remaining moves:     3.022
Cutoff with a losing capture:      1,558,615   (0.62%)


*after hash, captures and killers

This analysis shows that in more than 95% of cases the cutoff happens with the first move that is searched, which is an excellent number. Killer moves and captures also lead to early cutoffs, so in only a bit more than 3% of all nodes where a cutoff is possible do the quiet moves have to be generated and searched at all.
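The numbers in the table above could be collected with a small statistics helper like this (a sketch, not iCE's actual instrumentation; the names are made up):

```cpp
#include <cassert>
#include <cstdint>

// aggregate at which 1-based index in the move list the beta cutoff occurred
struct CutoffStats {
    uint64_t total = 0, first = 0, second = 0, sumIndex = 0;

    void record(int moveIndex) { // called at every node that fails high
        total++;
        sumIndex += moveIndex;
        if (moveIndex == 1) first++;
        else if (moveIndex == 2) second++;
    }
    double firstMoveRate() const { // "CutOff with 1st move" in percent
        return total ? 100.0 * first / total : 0.0;
    }
    double avgCutoffIndex() const { // "Avg. move no that causes cut off"
        return total ? (double)sumIndex / total : 0.0;
    }
};
```

One call to `record` at every fail-high node is enough; dumping the aggregates after the test run yields a table like the one above.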

And here, without doing history heuristics or piece square stuff, on average the 3rd move in the list gets us a cutoff. I do however look at the killer moves from 2 plies above and 2 plies below the current node and order those moves first in the list if they are legal in this position. This seems to help a little bit to improve the order, and I have this data available anyway.

After seeing those numbers I don't think I will put more effort into this area in the near future, because the ordering quality seems quite good already and the slow down in execution needed to get a better ordering in 3% of cut nodes will probably not pay off.

Thursday, August 11, 2011

Improve Move Ordering for Alpha Beta

The iCE engine uses an alpha beta based search. That means if we search possible moves and find one that gives us a position so good that our opponent will not play the moves that lead to it (because he can reach one that is better for him), we can stop searching further. This is called a cutoff (sometimes also called beta cutoff or fail high).

This reduces the search tree. The effective reduction depends on the move order: the earlier I find a move that leads to a "too good to be true" position, the more moves I can skip searching. So a simple rule can be formulated that says


Always search the best move first.

This rule is very simple but not 100% accurate, because we don't actually want to search the best move first; we want to search a move that gives us a cutoff with the least required effort. If the 2nd best move, or even the worst move, also gives us a cutoff but we only have to search half the nodes compared to the best move, then obviously we should search that move first.

Always search the move that gives you a cutoff with the least required effort first.

Unfortunately it is not trivial to know in advance which moves give you a cutoff and what their related effort is without searching them. But it is not that bad. In a chess engine various mechanisms exist to help the engine decide which moves should be searched first.

  1. There is the hash move. When a position was searched earlier (with a lower depth) the 1st move that gives you a cutoff was recorded. When this position is searched again with a higher depth, this move is tried first, often it still gives you a cutoff.
  2. If you don't have a hash move (you see the position for the first time or last time no move produced a cutoff) or the hash move fails this time, then you look for winning captures. Often they are good.
  3. Next you look for equal captures, maybe you get a cutoff here.
  4. Then you look for moves that caused a cutoff in similar positions (so called killers). First you have to look whether those killers are legal on the current board and if they are you search them. Good chance for a cutoff with them.
  5. Then I try losing captures. Most engines search losing captures last and claim that is best for them. I search them before the quiet moves for three reasons. Sometimes a losing capture is not really losing but gives you enough counterplay. If it is really bad, the effort of searching it is small. And last, I already have the losing captures generated while generating all captures; when they produce a cutoff I save the generation of all the quiet moves. On average it pays off for me.
Up to this point most engines behave similarly (except where they order the losing captures). In 90% of the positions where you get a cutoff, you got it with one of the moves above. So when you get here, chances are none of the remaining moves will get you a cutoff; nevertheless you can't rely on that. You have to search them, and sometimes they surprise you.
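The stages in points 1-5 can be sketched as a simple state machine (the stage names are illustrative, not iCE's identifiers; a real picker would generate and yield moves per stage):

```cpp
#include <cassert>

// ordering stages, tried in this sequence until one of them
// yields a move that produces a cutoff
enum Stage { HASH_MOVE, WINNING_CAPTURES, EQUAL_CAPTURES,
             KILLERS, LOSING_CAPTURES, QUIET_MOVES, DONE };

Stage nextStage(Stage s) {
    switch (s) {
        case HASH_MOVE:        return WINNING_CAPTURES;
        case WINNING_CAPTURES: return EQUAL_CAPTURES;
        case EQUAL_CAPTURES:   return KILLERS;
        case KILLERS:          return LOSING_CAPTURES; // before quiets, per point 5
        case LOSING_CAPTURES:  return QUIET_MOVES;     // quiets generated only now
        default:               return DONE;
    }
}
```

The payoff of the staged design is lazy generation: the expensive quiet-move generation only happens in the roughly 3% of cut nodes (per the statistics in the previous post) where no earlier stage already cut.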

So you look for a mechanism to order all the remaining quiet (boring) moves, or you decide to just search them in the order they were generated. A lot has been tried here in different engines, but it is a bit of a black art.

Commonly used is
  • Generate the moves in a meaningful order (like castling moves first, then knight moves, bishop moves, rook moves, queen moves, king moves, pawn moves), maybe also moves that advance before moves that retreat, and just search them in that order
  • Use a table for every piece and square (called a piece - square - table) to mark good and bad squares for each piece (like knights get a bad score at the edge and a good score in the center). Search moves first where pieces go from bad squares to good squares. This sounds logical and really helps to order the moves in positions you expect to see on a chess board. Unfortunately, in a search tree most of the positions are very strange, resulting from a strange sequence of moves, and applying a common sense move order does not necessarily help (some state it does, others say it does not)
  • Use a technique called history heuristics. Every time a move produces a cutoff in search, increment a counter for that move (moves searched with higher depth increase the counter more than moves from lower depth searches). Then order your moves according to the value of their counter. This certainly helped when hardware was slow and engines did not search so deeply, because the positions in the tree still had some properties in common. Today engines search very deeply, and positions after 8 moves by White and 8 moves by Black don't have much in common anymore. So using information from one position in a totally different position is not necessarily a good thing to do.
History heuristics is still very commonly used, and programmers state it makes their engines stronger. To come to such a conclusion you must run a lot of games (several hundred at least) against other engines with and without that feature. Often those games use very short time controls so the testing does not take weeks. Short time controls lead to shallower searches, and here history heuristics might help (as it did on slow hardware). So maybe it is still around because the testing frameworks (with short time controls) of the engine programmers favor it. I tried it at longer time controls and it did not help my engine, so I don't use it.

I recently tried the piece square approach. To generate the tables I ran longer searches over a lot of positions (more than 1000) and recorded which moves were generated how often, which produced a cutoff, which did not, and what effort was required to search them. I also collected information about the pieces that were attacked after that move (if the king was attacked it was a checking move) and about the pieces that were defended by that move. So I generated a database of move heuristics.

The idea was to use the data in that database to generate a rule set or tables to help in the decision process of what move should be tried next.

The data allowed me to say something like

A white queen that moves to e7 and attacks a black knight produces a cutoff in 10% of the cases and requires in average 500 nodes to be searched.
A white rook that moves to b5 and gives check produces a cutoff in 90% of the cases and requires in average 5000 nodes to be searched.

So if you encounter a move list that contains only those two moves, the queen move should be searched first (queen move first gives you 5000 expected nodes in total, rook move first gives you 5050).
But usually the move lists are longer than 2 moves, and it is really difficult to determine a least-cost order for the moves even if you have all the data available.
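The arithmetic behind those totals generalizes: each move's cost is always paid if it is reached, but later moves are only reached with the probability that no earlier move already produced the cutoff. A sketch with made-up names:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct MoveEstimate {
    double cutoffProb; // probability this move produces a cutoff
    double nodes;      // average nodes needed to search it
};

// expected total effort of searching moves in the given order
double expectedNodes(const std::vector<MoveEstimate> &order) {
    double total = 0.0;
    double pReach = 1.0; // probability we still need to search the next move
    for (const MoveEstimate &m : order) {
        total += pReach * m.nodes;
        pReach *= (1.0 - m.cutoffProb);
    }
    return total;
}
```

Plugging in the two moves from the text reproduces the 5000 vs 5050 comparison; the hard part, as noted, is that minimizing this quantity over all orderings of a long move list is not tractable during search.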

I spent a lot of time on it and did not find a scheme that finally made the engine stronger. It solved some test positions faster, but in real play I did not notice a difference.

So for the moment I give up on that idea and go back to a simple ordering scheme for the remaining moves. This is maybe not better, but as long as it is not worse it helps, because it is definitely faster than anything else.

Saturday, July 30, 2011

Quiescence Search Trouble

While cleaning up my code base I realized that the quiescence search (qSearch) did not behave the way I intended. In qSearch I don't want to search captures that lose material. I generate all captures for a given position and perform a static exchange evaluation (SEE) on them. If the played-out exchange reveals that the capture is losing, I delete that capture from the list.

But my implementation had an error.

for (i = 0; i < ml->moveCnt; i++)
{
   if (see(ml->moves[i]) < 0) ml->deleteElementAt(i);
}
This code is problematic because the deleteElementAt(idx) function performs the deletion by copying the last element onto the position idx and decrementing the moveCnt property.

So the last element from the list is now at the position of the losing capture, but the move counter i is incremented after that operation, and so this element is never checked whether it is maybe also losing. This way the resulting list might include losing captures, which makes qSearch search more moves and therefore more nodes than intended.

I fixed that error, but to my surprise the fixed version played weaker than the buggy one. Obviously the inclusion of some losing captures (which are selected more or less at random) let the engine see some tactics that it otherwise missed, and this effect more than countered the effect of the additional nodes.

I ran tests where the fixed and the buggy version reported different results and tried to figure out which losing captures might be worth searching, but I did not come up with a good rule for that. Searching all losing captures also gave a weaker engine.

But I was not willing to keep the buggy code in the engine. So I did a complete rewrite of qSearch and also improved the static exchange evaluation to report more accurate results (it is now aware of pieces pinned to the king). This version is now as strong as the buggy one, but gives me a better feeling.


So this was obviously a rare exception to the "bugs hurt playing strength" rule.

Tuesday, July 12, 2011

Probabilities and bad estimates

I'm somehow not good at estimating probabilities. I recently implemented some more caching in the engine, and for space reasons I only saved part of the 64 bit board identifier.

I thought that using 2^20 hash slots, which already verify 20 bits of the key just by the slot number, plus 20 more bits stored from the key, would be enough to safely detect collisions when 2 different boards map to the same hash slot. But I was wrong. In my first verification run, a 12 ply search against 25 positions, an undetected collision crashed the engine, as it returned a move from another board that was not legal on the actual board. It tried to capture its own pawn with a rook.


I knew there was a distant probability that this could happen (like once in a thousand years), but as it happened so fast I must have greatly misjudged that probability.
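A quick back-of-the-envelope calculation (my own, not from the original post) shows why the estimate was so far off: the slot index verifies 20 bits and the stored key part 20 more, so only 40 bits are checked in total, and by the birthday approximation a collision among a few million probed positions is nearly certain.

```cpp
#include <cassert>
#include <cmath>

// birthday approximation: probability of at least one undetected collision
// among n distinct positions when only verifiedBits bits distinguish them,
// P ≈ 1 - exp(-n^2 / (2 * 2^verifiedBits))
double collisionProbability(double n, int verifiedBits) {
    double space = std::pow(2.0, verifiedBits);
    return 1.0 - std::exp(-n * n / (2.0 * space));
}
```

A 12 ply search over 25 positions can easily probe a couple of million distinct positions, and with 40 verified bits that already puts the collision probability above 80% - not once in a thousand years at all.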

Looks like I need additional collision detection code for that.

Monday, July 4, 2011

New CCC Tournament

As iCE is public now it is sometimes chosen by engine testers for their tournament setups. It usually plays in the lower or lowest league, but it's interesting for me anyway to see how it performs.

In the last special tournament it played against a lot of engines rated higher than itself (iCE has about 2200 ELO), but it played pretty well. It finished in 2nd place out of 16 engines and gained 50 ELO in that tournament. This makes me a bit proud and encourages me to enhance it further.

Final Standings

22.0 - Jazz 444
21.0 - Ice 0.1
19.5 - BikJump 2.01
19.0 - FireFly 2.5.10
18.0 - KMTChess 1.2.1 32-bit
17.5 - ECE 11.01
16.5 - Protej 0.5.7
15.0 - Prophet 2.0
14.5 - Carballo 0.5 32-bit
14.0 - Chesley r323
13.5 - Kurt 0.9.2 32-bit
13.0 - Parrot 07.07.22
13.0 - Clarabit 1.00
8.0 - PLP 1661571
8.0 - Bubble 1.5
7.5 - Jabba 1.0


http://talkchess.com/forum/viewtopic.php?topic_view=threads&p=410401&t=39210

Wednesday, June 8, 2011

ICC compilation revealed serious bug

Usually I write the source and debug the code with MSVC and compile my release code with MinGW and the GNU compiler, currently version 4.5.2. And this has worked pretty well so far.

To find out whether the Intel compiler ICC is able to produce better code I installed a demo version and ran it. It compiled without errors, but the resulting executable behaved very strangely. It reached a high search depth very fast but examined only a small fraction of the nodes the gcc compiled executable did.

Wow, what an optimizer ! Unfortunately not. The resulting engine played much worse despite its high search depths. So the engine contained a bug.

This is nasty, because I debug with MSVC, where the bug does not show up; gcc also compiled a working engine, and to my surprise ICC produced a correct but slow engine when using the /Od (no optimization) compiler flag.

I then introduced debug code to see where the different versions make different decisions, and it showed that the ICC optimized code messed up the generated move lists when sorting them (good moves first ...). Most of the moves just vanished, so the search had a low branching factor and reached high depths, but as it skipped a lot of moves it played terrible chess.

The root cause of this error was a single line in the sort mechanism. I use a variation of insertion sort to order the moves, and this algorithm requires making room when it finds the right spot for an element. To do that quickly I used memcpy instead of a dedicated loop that moves every element one slot to the right.


   memcpy (&moves[j + 1], &moves[j], (i - j) * sizeof (TMove));


This worked great until the Intel compiler optimized that line. I looked the function up in the C++ documentation and was surprised to read that the behavior of memcpy is undefined when source and destination overlap (which they do in this case).

So I just replaced memcpy with memmove, the error was gone, and I learned an interesting lesson.
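For illustration, here is a minimal insertion-sort sketch in the spirit of the description above (TMove is reduced to a plain int score here; iCE's real move type and ordering criteria differ). The shifted ranges overlap, which is exactly why memmove is required instead of memcpy.

```cpp
#include <cassert>
#include <cstring>

typedef int TMove; // stand-in: assume a higher value means a better move

// Sorts moves in descending order (best move first). The shift of
// moves[j..i-1] one slot to the right copies between overlapping
// ranges, so memmove (defined for overlap) must be used, not memcpy.
static void sortMoves(TMove* moves, int count)
{
    for (int i = 1; i < count; i++) {
        TMove m = moves[i];
        int j = i;
        while (j > 0 && moves[j - 1] < m) j--;   // find the insert spot
        memmove(&moves[j + 1], &moves[j], (i - j) * sizeof(TMove));
        moves[j] = m;
    }
}
```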

Monday, May 23, 2011

Draw recognizer with pawns involved

I'm trying to build an endgame draw recognizer for all 4 man endgame positions that are not easily evaluated. So I don't build one for king and 2 rooks against king, as that makes no sense. The engine will mate the enemy king and avoid stalemate traps without any additional help.

King and bishop vs king and pawn is more interesting. Usually it is a draw but there are quite a few winning positions for the side with the pawn and even a few for the side with the bishop.

I verify the recognizer by running all possible positions through it and checking its decision (draw / not draw) against a table base. This verification process takes much longer when pawns are involved, as I have to test many more positions. Without pawns I only test 10 squares for the white king; all other positions can be mapped by mirroring or flipping to one of the 10 tested squares. With pawns that does not work, so I have to test 32 squares for the white king, and it takes about three times as long.
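The left-right mirroring that is still legal with pawns on the board can be sketched like this (assuming a 0..63 square encoding with a1 = 0, h1 = 7, which need not match iCE's internal layout). It maps the white king onto files a-d, giving the 32 canonical squares; without pawns, the additional diagonal flip would reduce them further to the 10-square a1-d1-d4 triangle.

```cpp
#include <cassert>

// Mirror a square horizontally onto files a-d (file e->d, f->c, g->b,
// h->a). This symmetry is valid even with pawns on the board, while
// vertical mirroring and diagonal flipping are not.
static int mirrorFile(int sq)
{
    int file = sq & 7;       // 0..7 = a..h
    int rank = sq >> 3;      // 0..7 = rank 1..8
    if (file > 3) file = 7 - file;
    return rank * 8 + file;
}
```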


So the whole recognizer implementation takes quite some time, and I'm looking forward to when it is done so I can move on to more interesting stuff.

Sunday, May 8, 2011

More on drawish endgames

After implementing a draw recognizer for the KRKR endgame I decided to implement recognizers for the endgames of rook vs. knight and rook vs. bishop. I thought it would be easier because only the side with the rook has winning chances, but in fact it turned out to be much more difficult.

Even in the rook vs. knight endgame there are a few positions where the side with the knight wins.
 Black to move - Mate in 1

And in both endgames there are positions that look like a draw (you cannot easily see a forced win) but are in fact forced wins. In the knight vs. rook endgame some wins require up to 40 moves.

So in the end I implemented the draw recognizers, but in order not to let them report false draws they rule out a lot of positions and miss a lot of real draws. The worst is knight vs. rook; bishop vs. rook works a bit better.

Anyway, it helps the engine to play those endgames a lot.

White moves and Mates in 29! (Ke1 !)
In this position it finds the winning move at ply 4 instantly. It sees the mate at ply 31 after 55 seconds.


position fen 8/8/8/8/8/2R5/8/3K1bk1 w - - 0 1
go depth 31
info depth 1 seldepth 2 time 0 nodes 37 pv d1d2  nps 36999 score cp 125 hashfull 0 tbhits 0
info depth 2 seldepth 3 time 0 nodes 95 pv d1d2 g1f2  nps 94999 score cp 110 hashfull 0 tbhits 1
info depth 3 seldepth 5 time 0 nodes 470 pv d1d2 g1f2 c3c7  nps 469999 score cp 110 hashfull 0 tbhits 15
info depth 4 seldepth 6 time 0 nodes 1141 pv d1e1 f1g2 c3g3 g1h2  nps 1140999 score cp 120 hashfull 0 tbhits 65

..
info depth 31 seldepth 47 time 11797 nodes 9152530 pv d1e1 g1g2 c3c2 g2g1 c2c6 f1b5 c6g6 g1h2 e1f2 h2h3 f2f3 h3h4 f3f4 h4h3 g6g3 h3h2 f4f3  nps 775835 score mate 29 hashfull 57 tbhits 77627

Wednesday, April 27, 2011

Improving the chess endgame knowledge

While monitoring games the engine plays I realized that often basically won games end in a draw because the engine does not understand when an endgame is a draw. E.g. exchanging the last pawn to reach a king and rook vs. king and knight endgame is usually bad for the side ahead, as this endgame is likely a draw although one side is clearly ahead in material.

I was aware of that lack of knowledge but underestimated its impact in actual games. The issue disappears when table bases are available and the engine is able to access them, because then it recognizes those positions as draws from the table base score, but I don't want to create a table base dependency that has a significant impact on playing strength.

So I decided to implement some more endgame knowledge in the engine to recognize some non trivial positions as draws too. I plan to use that knowledge not only in eval but also in search. This means that when somewhere in the tree a position is recognized as a DRAW, search will stop evaluating this node even when not at the horizon.

For the moment I focus on positions with 4 pieces. 3 piece positions are covered by internal table bases already. The easy stuff is when neither side has enough mating material, e.g. king and bishop vs. king and knight. Here we can just return a DRAW score.
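A minimal sketch of such a material check, using a material signature string rather than iCE's real board representation (the name and interface are illustrative only):

```cpp
#include <cassert>
#include <string>

// Returns true when neither side can force mate: bare king, or king
// plus a single minor piece, on both sides (e.g. "KB" vs "KN").
// A search or eval can then just return the DRAW score immediately.
static bool isInsufficientMaterial(const std::string& white,
                                   const std::string& black)
{
    auto minorOnly = [](const std::string& s) {
        return s == "K" || s == "KB" || s == "KN";
    };
    return minorOnly(white) && minorOnly(black);
}
```

Note that the check deliberately returns false for anything it is unsure about (e.g. "KNN"), which is safe: not claiming a draw just means the search continues normally.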

The next thing is the king and rook vs. king and rook endgame. This endgame is perceived as a DRAW, but it is not possible to just return a DRAW score. There are positions that are forced wins or losses, and if they are not handled properly this will introduce severe errors.

To take care of that I thought of implementing patterns that, when they appear on the board, indicate that the position is likely not a DRAW. One example of an easy pattern is when the side to move is able to capture the undefended enemy piece next turn. You can also think of more advanced patterns where the side to move is able to check the king such that the king is pinned against its piece. So I assembled the patterns I could think of and thought I had a good DRAW detection for the KRKR endgame.

I was wrong.

Not because I suspected a problem with my patterns; more out of curiosity I implemented a validation routine that runs through all possible KRKR positions and calls my new isDraw() method for each of them. If isDraw() returns true, the score for that position is looked up in my KRKR table base to verify that it really is a DRAW. It was quite a disaster. I had errors over errors there.

There are non trivial forced wins in a KRKR game I just did not think of. Consider the next diagram; patterns like this are likely to be missed.

Black moves and Mates in 7!
So I reworked my patterns with the help of the table bases until I did not falsely classify positions as draws anymore. In some cases the patterns indicate a possible win/loss where the position is in fact a DRAW, but this is not so severe, as search will just continue in that case and not introduce errors.

In the end I came up with 11 patterns that together are able to recognize about 75% of all DRAW positions. For the moment this is good enough.

Incorrect Draws : 0
Spotted   Draws : 11.547.632
Missed    Draws : 3.590.432
Spotted   Wins/Loss  : 6.423.392

I was also surprised by the high number of positions with forced wins and losses. Quite high for an endgame that is perceived as a DRAW.

Monday, April 11, 2011

Passed the tournament entry test

iCE has successfully passed the tournament entry test, consisting of a test tournament between new engines. It completed all games without any bugs, errors or warnings (e.g. loss on time).

Its playing strength still leaves much room for improvement.

It finished 14th out of 20 participants.

Test at Dual Opteron 244 at 40/5, ponder=on
OPTERON-A, 2011.04.02 - 2011.04.05

                              Score     Ha Di Cu Fr Re Ph Al Ja If Be Fi Sj Do iC Ze Ca Ch Ja Ch Ch
----------------------------------------------------------------------------------------------------
1: Hannibal 1.0a           37.5 / 38   XX 1= 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11
2: Dirty 23032011-x64      34.0 / 38   0= XX == 01 1= 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11
3: CuckooChess 1.09-JA     31.5 / 38   00 == XX == 1= 01 =1 11 11 1= 11 11 11 11 11 11 11 11 11 11
4: FranMAD 0.18            30.0 / 38   00 10 == XX 01 01 10 01 11 11 11 11 11 11 11 11 11 11 11 11
5: RedQueen 0.9.5-JA       29.5 / 38   00 0= 0= 10 XX 10 01 =1 11 11 11 11 11 11 11 11 11 11 11 11
6: Phalanx Reborn          28.5 / 38   00 00 10 10 01 XX 11 1= 11 =1 11 1= 01 11 11 11 11 11 11 11
7: AliChess 4.25           26.0 / 38   00 00 =0 01 10 00 XX 11 11 11 10 =1 11 =1 11 1= 11 11 11 11
8: Jazz 4.4.4wb-x64-JA     21.5 / 38   00 00 00 10 =0 0= 00 XX 0= =1 01 0= 11 11 11 11 11 11 11 11
9: Ifrit j4.3-x64-JA       19.5 / 38   00 00 00 00 00 00 00 1= XX =1 1= 10 01 =1 11 11 1= 11 11 11
10: BetsabeII 1.0-ja        18.5 / 38   00 00 0= 00 00 =0 00 =0 =0 XX 10 11 11 0= 10 11 11 11 11 11
11: FireFly 2.59b1-x64      18.0 / 38   00 00 00 00 00 00 01 10 0= 01 XX 11 10 11 10 11 =1 10 11 11
12: Sjakk 1.1.7             15.5 / 38   00 00 00 00 00 0= =0 1= 01 00 00 XX 00 11 11 1= =1 01 11 11
13: Dolphin 1.04            15.0 / 38   00 00 00 00 00 10 00 00 10 00 01 11 XX 10 11 00 11 10 11 11
14: iCE 0.1-b1120           14.0 / 38   00 00 00 00 00 00 =0 00 =0 1= 00 00 01 XX 01 1= 11 11 11 11
15: ZetaDva 0.1.5.3-x64-JA  11.0 / 38   00 00 00 00 00 00 00 00 00 01 01 00 00 10 XX 1= 01 =1 11 11
16: Capivara LK 0.07s02     10.5 / 38   00 00 00 00 00 00 0= 00 00 00 00 0= 11 0= 0= XX 01 =1 11 11
17: ChessKISS 1.0            9.0 / 38   00 00 00 00 00 00 00 00 0= 00 =0 =0 00 00 10 10 XX 1= 11 11
18: JaksaH 1.11-x64          8.5 / 38   00 00 00 00 00 00 00 00 00 00 01 10 01 00 =0 =0 0= XX 11 11
19: ChessBunny 1.0_disk      1.0 / 38   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 XX ==
20: Chad'sChess 0.15         1.0 / 38   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 == XX
----------------------------------------------------------------------------------------------------
380 games: +175 =34 -171



Friday, April 1, 2011

Public tournament participation

The first release of the iCE engine went public. It took me about two years to develop it, or better said, to evolve it. I had a chess playing program (not an engine) after a few weeks, but that first version and the now released one don't have much in common anymore. OK, both were somehow Shannon Type A based (not that I knew what Shannon Type A meant when I started). But that's about it.

So now it's time to let go, and I requested to participate with iCE in its first public tournament. I think it is still pretty weak and I hope it does not finish last, or I'll be really embarrassed. But at least I'll then have a baseline to develop from.

Saturday, March 26, 2011

The 1st release of both engines is done

Today I published both engines (mACE and iCE) on my homepage, so whoever wants to give them a try can now do so.

I also started a little tutorial about chess programming there, so whoever is interested might have a look at
http://www.fam-petzke.de/chess_home_en.shtml.

Thursday, March 17, 2011

About to release the 1st public version

After the last tournament, where the ice engine did not do so badly, I decided to take a little rest from the core engine programming. I plan to make the engine available for download via my private web site and have to do some preparation for that first. I will post the first public release of my engine as soon as this is finished.

Stay tuned !

Sunday, March 6, 2011

Small private tournament

I felt it was time to match the engine against some other engines to see how it performs and also to have a baseline to measure progress over time. I ran a small tournament of 6 engines. I took some of the free engines available on the net and tried to avoid ones known to be very strong. There is no point in proving that ice is still a weak engine.

The engines I selected were BB Chess, Delphil2_6, Kanguruh93, Mediocre and Vicki.

Ice finished in 3rd place, scoring 9,5 points in 15 games. It seems that ice plays somewhere in the range of those engines and is not a total loser anymore. I'm a little proud and feel the day coming closer where I'm able to make it available for download.

Here are the tournament details

Arena tournament
Rank  Engine        Score     De   Me   Ic   BB   Vi   Ka    S-B
1     Delphil2_6a   12,0/15   ···  110  100  111  111  111   69,50
2     Mediocre      10,5/15   001  ···  11=  001  111  111   57,75
3     Ice Latest     9,5/15   011  00=  ···  101  ==1  111   55,25
4     BBChess0_99b   8,5/15   000  110  010  ···  1=1  111   41,75
5     Vicki          4,5/15   000  000  ==0  0=0  ···  111   13,75
6     Kanguruh93     0,0/15   000  000  000  000  000  ···    0,00

45 games played / Tournament is finished

 Tournament start: 2011.03.05, 23:56:04
Latest update: 2011.03.06, 10:20:49
Site/ Country: Germany
Level: Tournament 40/5
Hardware: Intel(R) Core(TM)2 Duo CPU P8600 @ 2.40GHz (Dual, 2394 MHz) with 1.992 MB Memory
Operating system: Microsoft Windows XP Professional Service Pack 3 (Build 2600)
Table created with: Arena 2.0.1

Saturday, February 26, 2011

Another bug hunting session

As mentioned in an earlier post, I develop the ice engine with MSVC and compile the release versions with the GNU C++ compiler, as it creates the faster executable. At some point I realized that the two executables compiled from the same source produced different results.

When searching for a move, suddenly the number of processed nodes was different, different pvs were calculated, and sometimes also different best moves were returned. So I knew there was a bug somewhere, but it was very hard to spot, as each program on its own appeared to run fine.

I reverted recent code changes to spot the source of the error, but it did not help; both versions still behaved differently, so I started the bug hunting session. Errors produced by the search algorithm are very hard to spot because the algorithm is recursive and calls itself several thousand times a second. I finally was able to spot the last position where both programs agreed, so I knew this position was somehow the cause of the problem. It turned out that eval returned different scores for that position in the two executables. From then on it was easy, as eval is not recursive.

The root cause was an array with pawn bonuses per rank. The array was sized too small. In some positions the access to that array just exceeded its bounds and retrieved a random number that happened to be at that memory position. This random number was not the same in the MSVC and the gnu versions, and so the execution started to diverge.
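The bug class looks roughly like this (an illustrative reconstruction; the real array name, size and bonus values are unknown). Reading past the end of an array is undefined behavior: it returns whatever bytes happen to follow in memory, which is exactly why the MSVC and gcc builds could disagree while each seemed to run fine on its own.

```cpp
#include <cassert>

// A per-rank bonus table needs 8 entries (ranks 0..7). Had it been
// declared with size 7, PAWN_RANK_BONUS[7] would read past the end
// and return an arbitrary value that differs between compilers.
static const int PAWN_RANK_BONUS[8] = { 0, 2, 4, 8, 16, 32, 64, 0 };

static int pawnBonus(int rank)
{
    assert(rank >= 0 && rank < 8); // a bounds check catches the bug early
    return PAWN_RANK_BONUS[rank];
}
```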

This was then a quick fix and now both versions are consistent again.

I ran a quick tournament (50 games) of the latest ice version against a former, not too old release to measure whether recent changes improved the strength of the engine. And the latest release scored
38 : 12
So although I consider the engine rather weak at least there seems to be some progress.

Tuesday, February 15, 2011

Chess engine input handling

When designing the ice engine I decided to make it a single threaded program to save the thread handling overhead.

In the Pascal mace engine I use threads for the input handling. I read from stdin, and when no input is available the execution blocks. This is unfortunate if it happens in your main thread; it is no problem in an extra thread created just for that purpose. In ice I wanted to use PEEK_NAMED_PIPE to check whether input is available and only start reading when it signals that input is there, so that execution is not blocked by reading from an empty input device.

It worked most of the time, but it failed when I gave focus to the console window the engine was running in, or removed focus from it. Although I specified that I don't want to receive window events, PEEK_NAMED_PIPE returned true in those cases; the engine then tried to read from stdin, and execution stopped, as no real input was available.

This only happens when I run the engine in a console window, not when the engine runs as a process started by the GUI (try to give focus to a process ...), but it is annoying anyway.

I returned to the thread method for handling stdin input. I use the standard methods from Windows to do that, and the performance hit I'm taking seems minimal.
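A portable sketch of such an input-thread handover (iCE uses the Windows API directly; the class and method names here are illustrative): one dedicated thread may block reading stdin while the search thread polls a small mutex-protected queue and is never blocked itself.

```cpp
#include <cassert>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Finished input lines are handed from the reader thread to the
// search thread through this queue; tryPop never blocks.
class InputQueue {
    std::queue<std::string> lines;
    std::mutex m;
public:
    void push(const std::string& s) {
        std::lock_guard<std::mutex> g(m);
        lines.push(s);
    }
    bool tryPop(std::string& out) {   // non-blocking: the search polls this
        std::lock_guard<std::mutex> g(m);
        if (lines.empty()) return false;
        out = lines.front();
        lines.pop();
        return true;
    }
};

// The reader thread would then run something like:
//   std::string line;
//   while (std::getline(std::cin, line)) queue.push(line);
```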

Wednesday, February 2, 2011

Next to last mistake

Victory goes to the player who makes the next-to-last mistake.
Savielly Grigorievitch Tartakower, Chess Master

While working on the engine evaluation I thought it might be helpful for me to play some games against other real players to get some fresh ideas. It's been a while since I did that.

I went to Yahoo chess and played a few games. As a noob I was given a low provisional score and engaged in games with players that had a comparable score. To my surprise there were players with a score as low as mine that had already played thousands, sometimes more than ten thousand, games. I don't think I have played ten thousand games in my whole life, and if I had I would probably have a better score.

Anyway, the last game was a real challenge for me. My opponent played very aggressively and I was somehow intimidated by that. I thought I was screwed, but I just wasn't able to find the best moves. I analyzed the game afterwards with the help of my engine, and it showed that we both played terribly. His gambit offers were incorrect, so was my declining, and I overlooked easy tactical combinations.

Here is the game (I'm playing White)

1.  e2-e4 e7-e5
2.  g1-f3 b8-c6
3.  f1-c4 g8-f6
4.  d2-d3 f8-c5
5.  o-o   f6-g4
6.  d1-e2 d7-d6
7.  h2-h3 h7-h5

I thought it was too dangerous to accept the gambit and take the knight because of the open h-file with black's rook on it. But after 9. Ng5 it would probably have been OK to take it. Instead
 
8.  b1-c3 c8-e6
9.  c3-d5 c6-d4
10. e2-d2 d4xf3+
11. g2xf3 d8-h4
12. d2-g5 

Black is too greedy
12. ... h4xh3
13. g5-e7++

At the end he was so occupied with his assault that he overlooked the mate in 1 threat, and this one time I was aware of it, so I won luckily. But I take this as a warning: I need more discipline. Calculate better and consider all alternatives. Play a bit like my engine plays.
 
Monday, January 31, 2011

Evaluation evaluation

Lately I thought about the evaluation part of the engine. This part is responsible for returning an absolute score for a certain position; all the chess specific knowledge is contained here. The engine must be able to see whether a certain position favors White or Black, so it knows which positions it should avoid and which it should play towards.

Easy evaluation terms are material based. The side with more material has an advantage; the bigger the material difference, the higher the advantage. But even material becomes difficult when you try to fine tune it.

Example question: What is better, a queen or two rooks ?

The answer depends on the other material on the board. If there are a lot of pawns on the board, the queen might be superior because the rook paths are blocked. If there are only a few pawns on the board, the rooks gain a lot of strength.

A queen that is not opposed by a bishop pair is stronger than one where the opponent still possesses the bishop pair.

So tuning the evaluation is probably the most difficult part and also the part the engine gets its personality from. How much weight I give to certain terms will determine the engine's play, so it is quite an interesting subject.

For the programming I maintain a header file with all the evaluation terms and their associated scores. This is very time consuming to maintain, especially if you want to have nice code formatting.

I started to maintain the evaluation terms in an Excel sheet and wrote a Visual Basic macro that exports the terms into C++ code. This is really helpful.


Monday, January 24, 2011

Including binary chess endgame tables into ice executable

In order to include basic functionality into the new C++ based ice chess engine I moved on to basic endgame play. Basic endgame means that the engine should understand when a 3 man endgame is a draw, when it is a win, and how to play it correctly. The most accurate way is to make endgame tables for KPK, KRK and KQK available to the engine.

From my former work with endgame tables for the old mace engine I had calculated all those tables, and it was time to use them for ice too.

Because this knowledge is so essential and the engine relies on having it available I decided to include the tables into the binary executable and not to distribute them as separate files along with the engine.

To do that I added a function to my table base generator that generates C++ header files containing the contents of the table. The generated file looks like this.

#ifndef KPK
#define KPK

/**************************************************************
| Chess Endgame Table Base Data for Endgame Type : KPK        |
|-------------------------------------------------------------|
| created by table base creator 1.0                           |
| (c) by Thomas Petzke                                        |
| 23.01.2011 18:18                                            |
|**************************************************************/

const int KPK_TBL_SIZE = 79865;
const short KPK_TBL[KPK_TBL_SIZE] = {
-4080,
-4080,
-3968,
-3968,
-3825,
-4080,
-4080,
-4080,
...

I estimated that the executable would grow by a few 100k in size when these tables are included. To my surprise the GNU compiler added almost 2MB to the size of the executable, much more than necessary for the raw data. Obviously it aligns the array content for faster access, so each element takes 8 bytes (instead of the 2 bytes of a short).

As speed is not really a problem here (the array is read only once when the program initializes, so if this takes a few ms longer no one cares), I tried to shrink the size of the executable.

I packed 4 shorts into a 64 bit integer, but the size remained the same. Using preprocessor directives did not help either. So I minimized the array size by not including illegal positions and easy to recognize draws in it. This helped a bit, but the executable is still larger than it should be.
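The packing attempt can be sketched like this (illustrative only; whether it shrinks the binary depends on how the compiler emits the initialized data). Four 16-bit table scores go into one 64-bit word and are unpacked on demand.

```cpp
#include <cassert>
#include <cstdint>

// Pack four signed 16-bit scores into one 64-bit word, low to high.
static uint64_t pack4(int16_t a, int16_t b, int16_t c, int16_t d)
{
    return  (uint64_t)(uint16_t)a
         | ((uint64_t)(uint16_t)b << 16)
         | ((uint64_t)(uint16_t)c << 32)
         | ((uint64_t)(uint16_t)d << 48);
}

// Extract entry idx (0..3); the uint16_t->int16_t cast restores the sign.
static int16_t unpack(uint64_t packed, int idx)
{
    return (int16_t)(uint16_t)(packed >> (16 * idx));
}
```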

As it is more a cosmetic problem I decided to live with it for the moment. At least it works correctly :-)

White moves and mates in 23 

iCE 0.1 build 511 [2011.1.24]
position fen 8/8/8/8/8/6k1/4P3/2K5 w - - 0 1
go depth 1
info depth 1 seldepth 0 time 0 nodes 7 pv c1c2  nps 6999 score mate 23 hashfull 0 tbhits 7
bestmove c1c2 

Thursday, January 6, 2011

Solving the FINE70 position

So after fixing the last bug (see last post) the engine solves the FINE70 position correctly again. Here is the engine protocol for reference.

iCE 0.1 build 333 [2011.1.6]
fine70
go 26 ply at FINE70 = 8/k7/3p4/p2P1p2/P2P1P2/8/8/K7 w - -
info depth 1 seldepth 1 time 0 nodes 6 pv a1b1  nps 5999 score cp 97
info depth 2 seldepth 2 time 0 nodes 17 pv a1b1 a7b6  nps 16999 score cp 97
info depth 3 seldepth 3 time 0 nodes 68 pv a1b2 a7b6 b2c3  nps 67999 score cp 99
info depth 4 seldepth 4 time 0 nodes 95 pv a1b2 a7b6 b2c3 b6c7  nps 94999 score cp 99
info depth 5 seldepth 5 time 0 nodes 178 pv a1b2 a7b6 b2c3 b6c7 c3c4  nps 177999 score cp 99
info depth 6 seldepth 6 time 0 nodes 216 pv a1b2 a7b6 b2c3 b6c7 c3c4 c7d7  nps 215999 score cp 99
info depth 7 seldepth 7 time 0 nodes 310 pv a1b2 a7b6 b2c3 b6c7 c3c4 c7d7 c4d3 nps 309999 score cp 99
info depth 8 seldepth 9 time 0 nodes 413 pv a1b2 a7b6 b2c3 b6c7 c3c4 c7b6 c4d3 b6c7 nps 412999 score cp 99
info depth 9 seldepth 9 time 0 nodes 477 pv a1b2 a7b6 b2c3 b6c7 c3c4 c7b6 c4d3 b6c7 d3c4  nps 476999 score cp 99
info depth 10 seldepth 11 time 0 nodes 907 pv a1b2 a7b6 b2c3 b6c7 c3c4 c7b6 c4d3 nps 906999 score cp 99
info depth 11 seldepth 11 time 0 nodes 764 pv a1b2 a7b6 b2c3 b6c7 c3c4 c7b6 c4d3 nps 763999 score cp 99
info depth 12 seldepth 13 time 0 nodes 1252 pv a1b2 a7b6 b2c3 b6c7 c3c4 c7b6 c4d3 nps 1251999 score cp 99
info depth 13 seldepth 13 time 0 nodes 1337 pv a1b2 a7b6 b2c3 b6c7 c3c4 c7b6 c4d3 nps 1336999 score cp 99
info depth 14 seldepth 15 time 16 nodes 1790 pv a1b2 a7b6 b2c3 b6c7 c3c4 c7b6 c4d3 nps 111874 score cp 99
info depth 15 seldepth 16 time 0 nodes 1771 pv a1b2 a7b6 b2c3 b6c7 c3c4 c7b6 c4d3 nps 1770999 score cp 99
info depth 16 seldepth 18 time 15 nodes 2284 pv a1b2 a7b6 b2c3 b6c7 c3c4 c7b6 c4d3 nps 152266 score cp 99
info depth 17 seldepth 18 time 16 nodes 2582 pv a1b2 a7b6 b2c3 b6c7 c3c4 c7b6 c4d3 nps 161374 score cp 99
info depth 18 seldepth 21 time 62 nodes 20832 pv a1b2 a7a8 b2c3 a8b7 c3c4 b7b6 c4d3 nps 336000 score cp 99
info depth 19 seldepth 20 time 0 nodes 2508 pv a1b2 a7a8 b2c3 a8b7 c3c4 b7b6 c4d3 nps 2507999 score cp 99
info depth 20 seldepth 22 time 0 nodes 3554 pv a1b2 a7a8 b2c3 a8b7 c3c4 b7b6 c4d3 nps 3553999 score cp 99
info depth 21 seldepth 24 time 16 nodes 4393 pv a1b2 a7a8 b2c3 a8b7 c3c4 b7b6 c4d3 nps 274562 score cp 99
info depth 22 seldepth 25 time 31 nodes 5572 pv a1b2 a7a8 b2c3 a8b7 c3c4 b7b6 c4d3 nps 179741 score cp 99
info depth 23 seldepth 26 time 0 nodes 6317 pv a1b2 a7a8 b2c3 a8b7 c3c4 b7b6 c4d3 b6c7 nps 6316999 score cp 99
info depth 24 seldepth 28 time 16 nodes 7638 pv a1b2 a7a8 b2c3 a8b7 c3c4 b7b6 c4d3 b6c7 nps 477374 score cp 99
info depth 25 seldepth 30 time 78 nodes 72402 pv a1b1 a7b7 b1c1 b7c8 c1d2 c8d7 d2c3 d7c7 nps 928230 score cp 199
info depth 26 seldepth 31 time 63 nodes 45329 pv a1b1 a7b7 b1c1 b7c8 c1d2 c8d7 d2c3 d7c7 nps 719507 score cp 199
bestmove a1b1

There is always one more bug

Today I ran into an extremely annoying bug. I found it when I was doing a stress test against my transposition table. I do that once in a while to see that recent changes did not break anything.

Usually I run a ply 26 search against the position known as FINE70 (or the Lasker-Reichhelm Position)
8/k7/3p4/p2P1p2/P2P1P2/8/8/K7 w - -
The only winning move is Kb1!. All other moves lead to a draw. With a correctly implemented transposition table the position is solved in about 1 second or less; without a transposition table, or with errors in it, it takes much longer or is not solved at all.

The last test revealed that the ice engine was not able to solve it anymore. It ran quite fast through the plies, but at ply 26 it showed Kb2 as the best move.

So the fun began: trying to find an error in the transposition table. I found several bugs where I used & in a logical expression where it should read &&, but this did not solve the problem.

The last thing I had added was piece square values in the evaluation, and indeed, when I removed that code, the position was solved. But the code is really simple; I could not imagine I did anything totally wrong here. I debugged it anyway, but to no result. It worked as it should, yet when it added a small value to the score, less than 1/10 of a pawn, the position was not solved anymore.
At the end I turned the hash table off and let the engine run without it (it calculated over an hour), and to my surprise the position wasn't solved either. So obviously it was not a hash table problem. So I went to the PVS search and turned off extensions, quiescence search, LMR, zero window search in PVS, everything I could think of. Trying to debug a 26 ply search in a recursive algorithm is really mind-boggling. I wasn't able to spot any error; with every feature turned off, alone or all together, the position was still not solved.

I finally started to collect the principal variations at ply 25 to see what happens to the winning move line, and to my surprise at ply 25 no principal variation with an exact score containing the winning move Kb1 was encountered. It seemed that this move, and thus every line starting with it, did not exist in the search tree, which was quite odd. But this revealed the most likely source of the error to me. It had to be in the code where the root position is handled. There, for every legal move in the root position, a PVS search is performed. (The root node is outside the normal search, because at the root I need more than the score of the position; I need the best move too.)

It showed that I did not run the search for all moves at the root; I skipped the last one. The code was
for (i=0; i < ml->moveCnt -1 ; i++)
It did not show up as a problem before, because the move list is usually well ordered and the last move will very unlikely turn out to be the best. Adding the piece square evaluation to the score changed the order of the moves in the list: Kb2 was considered better than Kb1 at lower plies (king closer to the center), this changed the order of the moves in the root position, and Kb1 was moved to the end for searches from ply 4 onwards. There it was not considered anymore because of the -1 bug.
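Reduced to its core, the off-by-one behaves like this (a minimal reconstruction: the score list stands in for the root moves and their PVS results; TMoveList and the names are not iCE's actual code). With `i < moveCnt - 1` the last move is never searched, so whenever move ordering happens to put the best move last, it is silently lost.

```cpp
#include <cassert>
#include <vector>

// Pick the best "root move" by score. The buggy variant reproduces the
// loop condition `i < moveCnt - 1` from the post, which drops the last
// move; the fixed variant iterates over the whole list.
static int bestIndex(const std::vector<int>& scores, bool buggy)
{
    int limit = (int)scores.size() - (buggy ? 1 : 0);
    int best = 0;
    for (int i = 0; i < limit; i++)   // buggy: last move never examined
        if (scores[i] > scores[best]) best = i;
    return best;
}
```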

So my head hurts from the recent head banging, but I'm glad I found that nasty little bug. It really bothers me to know there is a bug somewhere in the code and not be able to find it.

So this one is fixed now, but I already encountered the next one. There seems to be a problem with the zero window search in PVS. There is always one more bug ...

Monday, January 3, 2011

Pros and Cons of MS Visual C++ 2010 Express

Since I was feeling that my Free Pascal based mACE engine executes too slowly, I started the iCE engine, which uses C++ as its programming language. When it came to choosing the actual compiler I decided to use MS Visual C++ 2010 Express Edition.

After a few weeks of coding with it here is my very personal opinion about its Pros and Cons

Pros:
  1. It's free. (I don't want to spend money for a little spare time project)
  2. It has a pleasing, ergonomic look and feel. (MS always knew how to make things look nice)
  3. It allows a fast start into programming. Just type a little "hello world" program into the editor and click the run button.
  4. It has extremely good on the fly syntax checking, called IntelliSense. While you type your code it checks it for syntax correctness. Things like type mismatches, missing brackets, misspelled type names, missing ; or missing includes are spotted and marked (like in MS Word) while you type. This is really impressive and gives you a huge boost in coding performance. When you build your project your code is already pretty good and you are not flooded with countless error messages because of a missing closing bracket somewhere.
  5. It has a powerful integrated debugger. It gives you an Autos pane that automatically displays the variables and values of the current scope. It is also able to interpret complex statements like (ESquare)((ml->moves[i] & MASK_FROM_SQ) >> SHIFT_FROM_SQ), and it can show you the contents of complex types behind pointers. Free Pascal just showed the memory address of the object, which is quite useless, and you had to write code to map your objects into local variables because the debugger was not able to display the object directly.
  6. It supports the Intel inline assembler style; the syntax is a little different from the one in Free Pascal but basically comparable. You can start an assembler block with the _asm directive and just write assembler code between { }. When you have seen the GNU compiler mess with assembler statements being C++ strings in an awkward AT&T style, you will know what I mean. I would love to see the architectural decision that made the GNU guys adopt an assembler syntax from a telco provider rather than from a processor manufacturer. (I know about the -masm=intel option in GNU, but the assembler syntax is still not the same as in Free Pascal or Visual C++.)
  7. At warning level 4 (/W4) it gives you good hints about where you might have coded something dirty that will cause you problems later on. It will spot things like if (value = 0), which is in almost all cases an error, or the use of uninitialized variables (useful when you have a lot of branches in your code and in one of them you access a variable without initializing it first).
This is a long list of Pros, longer than I expected. Now to the Cons:

  1. The Express Edition does not include a code profiler that would let you find unoptimized parts of your code, or simply tell you which code the most time is spent in, so you know where to optimize first. The Professional Edition is said to include one (not verified by me), but that edition is not free, so for my little non-commercial project it is not affordable.
  2. The generated executable depends on some DLLs, so it doesn't run on other computers unless a redistributable runtime framework is installed on them. There might be a code generation option for the runtime library (something other than the "Multithreaded-Debug-DLL" default) where the dependency does not exist (not tested so far), but the mere existence of a default that generates an executable that does not run on another computer is annoying.
  3. The generated code does not seem to be optimal. I say "seem" because I might not have found all the tweaks to make it faster. VC++ comes with two code generation defaults, a Debug and a Release configuration. The Release option generates code that is about twice as fast as the Debug option, so I assume a lot of optimization is turned on there. But the resulting code was still much slower than the code generated from my Free Pascal engine. Usually this is not a problem, as computers today are so fast that they run even unoptimized code fast enough, but when you program a chess engine, execution speed is one of your main concerns (and it was the reason to start a C++ engine in the first place).
The last Con is a real concern for me and for this little chess engine project. If the final code is slow, it doesn't matter how nice and painless it was to write. I decided to test another C++ compiler on my source code, just to see whether I had made mistakes in porting it from Pascal to C++.
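For what it's worth, both compilers let you push optimization beyond the IDE defaults from the command line. A sketch (the exact flag sets are assumptions; check your compiler version):

```shell
# MSVC (cl.exe): maximize speed, enable intrinsics and whole-program optimization
cl /O2 /Oi /GL /Fe:ice.exe ice.cpp

# g++: aggressive optimization, tune for the build machine
g++ -O3 -march=native -o ice ice.cpp
```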

I used g++ (the GNU C++ compiler) on my source. I had to make a few source code adjustments (for instance, I had defined my own INFINITY constant; VC++ was fine with that, g++ was not, even inside its own namespace). Most of the hassle went into the inline assembler functions for bitboard operations (finding the least significant bit), which just did not compile in g++, even with -masm=intel. I replaced the assembler code with C++ code like

#ifdef _MSC_VER
    __asm
    {
        ; use bsf to get the least significant bit
    }
#else
    if (B == 0) return 64; else return __builtin_ctzll(B);
#endif

So it finally compiled. To my surprise, the almost identical source code produced a much faster executable with g++. I did not expect such a huge difference.

Just to give an impression of the difference

VC++:
go depth 11

info depth 10 seldepth 29 time 42125 nodes 7924921 pv b4b7 f7b7 h5g6 h7g6 d8g8 g6f5 g8g4 f5e5 g4h5 f3f5 f2f4 h2f4 h5e2 d1e2 a4e4 d5e4 d3d4  nps 188128 score mate 9 hashfull 190 tbhits 0
 

info depth 11 seldepth 24 time 135391 nodes 38553528 pv b4b7 f7b7 h5g6 h7g6 d8g8 g6f5 g8g4 f5e5 g4h5 f3f5 f2f4 h2f4 h5e2 d1e2 a4e4 d5e4 d3d4  nps 284756 score mate 9 hashfull 132 tbhits 0
bestmove b4b7


g++:
go depth 11

info depth 10 seldepth 29 time 15203 nodes 7924921 pv b4b7 f7b7 h5g6 h7g6 d8g8 g6f5 g8g4 f5e5 g4h5 f3f5 f2f4 h2f4 h5e2 d1e2 a4e4 d5e4 d3d4  nps 521273 score mate 9 hashfull 190 tbhits 0
 

info depth 11 seldepth 24 time 54906 nodes 38553528 pv b4b7 f7b7 h5g6 h7g6 d8g8 g6f5 g8g4 f5e5 g4h5 f3f5 f2f4 h2f4 h5e2 d1e2 a4e4 d5e4 d3d4  nps 702173 score mate 9 hashfull 132 tbhits 0
bestmove b4b7



So the g++ build processed 702,173 nodes per second, where the VC++ build only processed 284,756.


My personal conclusion:
I will continue to write and debug the code with Visual C++, but for the release build I will use the GNU compiler instead.