- better evaluation (estimate the real value of a position more accurately)
- better search (search promising lines deeper and silly lines shallower)
- execution speed (process more nodes per second)
After adding the book, which was more of a technical must-have, I am now trying to improve the execution speed of the engine, squeezing out some more nodes per second.
One idea I tried was caching the attack patterns of the rook and bishop movers. The idea is simple: whenever the attack pattern for a piece on a square is calculated, store the resulting pattern in a small array, together with the relevant obstructions currently present on the board that formed that pattern.
If you later ask for the attack pattern of that piece on that square, first check whether the board still contains the same relevant obstructions; if so, just return the pattern from the small array. If not, recompute it from scratch and update the array.
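In code the scheme looks roughly like the sketch below. This is my own illustration, not the engine's actual source: the names (`rookAttacksCached`, `rookAttacksSlow`, `rookRelevantMask`) are invented, and a plain ray scan stands in for whatever the engine really uses to compute a pattern from scratch.

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t Bitboard;

/* Classical ray scan, standing in for the engine's real attack generator. */
static Bitboard rookAttacksSlow(int sq, Bitboard occ) {
    static const int dr[4] = { 1, -1, 0, 0 };
    static const int df[4] = { 0, 0, 1, -1 };
    Bitboard attacks = 0;
    for (int d = 0; d < 4; d++) {
        int r = sq / 8 + dr[d], f = sq % 8 + df[d];
        while (r >= 0 && r < 8 && f >= 0 && f < 8) {
            int s = r * 8 + f;
            attacks |= 1ULL << s;
            if (occ & (1ULL << s)) break;   /* ray is blocked here */
            r += dr[d]; f += df[d];
        }
    }
    return attacks;
}

/* Squares whose occupancy can change the rook pattern on sq: the rays
   excluding the board edge (a blocker on the edge changes nothing). */
static Bitboard rookRelevantMask(int sq) {
    Bitboard mask = 0;
    int r = sq / 8, f = sq % 8;
    for (int i = r + 1; i < 7; i++) mask |= 1ULL << (i * 8 + f);
    for (int i = r - 1; i > 0; i--) mask |= 1ULL << (i * 8 + f);
    for (int i = f + 1; i < 7; i++) mask |= 1ULL << (r * 8 + i);
    for (int i = f - 1; i > 0; i--) mask |= 1ULL << (r * 8 + i);
    return mask;
}

/* One cache slot per square: the relevant obstructions that formed the
   stored pattern, plus the pattern itself. */
static struct { Bitboard relevant, attacks; int valid; } rookCache[64];

static Bitboard rookAttacksCached(int sq, Bitboard occ) {
    Bitboard rel = occ & rookRelevantMask(sq);
    if (rookCache[sq].valid && rookCache[sq].relevant == rel)
        return rookCache[sq].attacks;            /* cache hit */
    rookCache[sq].relevant = rel;                /* miss: recompute, update */
    rookCache[sq].attacks = rookAttacksSlow(sq, occ);
    rookCache[sq].valid = 1;
    return rookCache[sq].attacks;
}
```

Note that the tag compared on a probe is only the *relevant* occupancy, so irrelevant board changes (a pawn move on the far side, a blocker appearing on an edge square) still count as hits.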
Simple caching, cool idea and not hard to implement.
... and does not speed up the program at all.
In fact it slowed it down. The reason is probably that the magic bitboards I use are so fast that the caching overhead costs more than the actual magic bitboard look-up it tries to avoid. Since access to main memory is usually the bottleneck, I had hoped for better results. But the accesses that produced a hit in my implementation would probably have been cheap anyway, because the data was still sitting in one of the CPU caches. So I am discarding those changes for now and going back to my old code base.
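To see what the cache was competing against: a magic look-up on the hot path is just a mask, a multiply, a shift, and one table read. The sketch below is not my engine's code but a simplified version of the well-known trial-and-error magic generator (in the style popularized by Tord Romstad): it finds a working magic for a single square and then performs exactly that four-step look-up.

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t Bitboard;

/* Ray-scan rook attacks, used only to fill the reference table. */
static Bitboard rookAttacksSlow(int sq, Bitboard occ) {
    static const int dr[4] = { 1, -1, 0, 0 };
    static const int df[4] = { 0, 0, 1, -1 };
    Bitboard attacks = 0;
    for (int d = 0; d < 4; d++) {
        int r = sq / 8 + dr[d], f = sq % 8 + df[d];
        while (r >= 0 && r < 8 && f >= 0 && f < 8) {
            int s = r * 8 + f;
            attacks |= 1ULL << s;
            if (occ & (1ULL << s)) break;
            r += dr[d]; f += df[d];
        }
    }
    return attacks;
}

/* Relevant occupancy: the rook rays minus the board edge. */
static Bitboard rookRelevantMask(int sq) {
    Bitboard mask = 0;
    int r = sq / 8, f = sq % 8;
    for (int i = r + 1; i < 7; i++) mask |= 1ULL << (i * 8 + f);
    for (int i = r - 1; i > 0; i--) mask |= 1ULL << (i * 8 + f);
    for (int i = f + 1; i < 7; i++) mask |= 1ULL << (r * 8 + i);
    for (int i = f - 1; i > 0; i--) mask |= 1ULL << (r * 8 + i);
    return mask;
}

static int popcount(Bitboard b) { int n = 0; while (b) { b &= b - 1; n++; } return n; }

/* xorshift64 PRNG for magic candidates (fixed seed, deterministic) */
static uint64_t rngState = 0x9E3779B97F4A7C15ULL;
static uint64_t rand64(void) {
    rngState ^= rngState << 13;
    rngState ^= rngState >> 7;
    rngState ^= rngState << 17;
    return rngState;
}

static Bitboard table[4096];   /* attack table for the one demo square */

/* Find a magic for sq by random trials; fill the table on success. */
static Bitboard findMagic(int sq, int *shiftOut) {
    Bitboard mask = rookRelevantMask(sq);
    int bits = popcount(mask), shift = 64 - bits;
    Bitboard occs[4096], refs[4096];
    int n = 0;
    Bitboard sub = 0;                 /* Carry-Rippler subset enumeration */
    do {
        occs[n] = sub;
        refs[n] = rookAttacksSlow(sq, sub);
        n++;
        sub = (sub - mask) & mask;
    } while (sub);
    for (;;) {
        Bitboard magic = rand64() & rand64() & rand64();   /* sparse candidate */
        if (popcount((mask * magic) >> 56) < 6) continue;  /* quick quality filter */
        Bitboard used[4096] = { 0 };
        char seen[4096] = { 0 };
        int ok = 1;
        for (int i = 0; i < n && ok; i++) {
            unsigned idx = (unsigned)((occs[i] * magic) >> shift);
            if (!seen[idx]) { seen[idx] = 1; used[idx] = refs[i]; }
            else if (used[idx] != refs[i]) ok = 0;  /* harmful collision */
        }
        if (ok) {
            for (int i = 0; i < n; i++)
                table[(unsigned)((occs[i] * magic) >> shift)] = refs[i];
            *shiftOut = shift;
            return magic;
        }
    }
}

/* The hot path the cache tried to beat: mask, multiply, shift, one load. */
static Bitboard rookAttacksMagic(int sq, Bitboard occ, Bitboard magic, int shift) {
    return table[(unsigned)(((occ & rookRelevantMask(sq)) * magic) >> shift)];
}
```

A cache probe needs at least a tag load, a compare, and a branch before it can even start saving work, so it is plausible that it loses against these four instructions whenever the magic table line is already in a CPU cache.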
One idea is still left to try. If I need a queen attack pattern I currently use
queenAttacks(sq) = rookAttacks(sq) | bishopAttacks(sq)
so caching the queen attacks might save more work and produce a gain in the end. But this requires more code changes throughout the whole program (I have no queenAttacks function today), so I will postpone that for the future.
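A cached queenAttacks could look like the sketch below. Again this is only my illustration of the idea, with ray scans standing in for the two magic look-ups and with invented names; a real cache would also trim the edge squares from the relevant mask, as noted in the comments.

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t Bitboard;

/* Generic ray scan over a direction set. */
static Bitboard slideAttacks(int sq, Bitboard occ,
                             const int *dr, const int *df, int ndirs) {
    Bitboard attacks = 0;
    for (int d = 0; d < ndirs; d++) {
        int r = sq / 8 + dr[d], f = sq % 8 + df[d];
        while (r >= 0 && r < 8 && f >= 0 && f < 8) {
            int s = r * 8 + f;
            attacks |= 1ULL << s;
            if (occ & (1ULL << s)) break;   /* ray is blocked here */
            r += dr[d]; f += df[d];
        }
    }
    return attacks;
}

static const int rookDr[] = { 1, -1, 0, 0 }, rookDf[] = { 0, 0, 1, -1 };
static const int bishDr[] = { 1, 1, -1, -1 }, bishDf[] = { 1, -1, 1, -1 };

static Bitboard rookAttacks(int sq, Bitboard occ) {
    return slideAttacks(sq, occ, rookDr, rookDf, 4);
}
static Bitboard bishopAttacks(int sq, Bitboard occ) {
    return slideAttacks(sq, occ, bishDr, bishDf, 4);
}

static int popcount(Bitboard b) { int n = 0; while (b) { b &= b - 1; n++; } return n; }

/* Relevant obstructions for a queen: anything on its eight rays.
   (Full rays for simplicity; edge squares could be trimmed, since a
   blocker on the edge never changes the pattern.) */
static Bitboard queenRays(int sq) {
    return rookAttacks(sq, 0) | bishopAttacks(sq, 0);
}

static struct { Bitboard relevant, attacks; int valid; } queenCache[64];

static Bitboard queenAttacks(int sq, Bitboard occ) {
    Bitboard rel = occ & queenRays(sq);
    if (queenCache[sq].valid && queenCache[sq].relevant == rel)
        return queenCache[sq].attacks;   /* one probe replaces two look-ups */
    queenCache[sq].relevant = rel;       /* miss: do both look-ups, update */
    queenCache[sq].attacks = rookAttacks(sq, occ) | bishopAttacks(sq, occ);
    queenCache[sq].valid = 1;
    return queenCache[sq].attacks;
}
```

The hoped-for gain is that a single probe here replaces two magic look-ups plus an OR, which doubles what a hit saves compared to the rook/bishop cache that failed.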