Harpists spend 90 percent of their lives tuning their harps and 10 percent playing out of tune.
Igor Stravinsky
My computer has just finished a second tuning run trying to optimize the search control parameters in my engine. This time it ran for another 364 hours. I reworked the parameter set a bit: I removed dead parameters (parameters belonging to a feature that the first run decided to disable) and added a few new ones. I also lowered the learning rate and modified the fitness function slightly.
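The tuning framework itself isn't shown in this post. Just to illustrate the kind of step a lower learning rate affects, here is a minimal sketch, assuming real-valued search parameters mutated with Gaussian noise scaled by a global learning rate; the names and structure are my own illustration, not the engine's actual code:

```cpp
// Hypothetical sketch of one mutation step in an evolutionary tuning run.
// The parameter layout and the learning-rate scheme are illustrative only.
#include <random>
#include <vector>

struct ParamSet {
    std::vector<double> values;   // e.g. pruning margins, reduction limits, ...
};

ParamSet mutate(const ParamSet& parent, double learningRate, std::mt19937& rng) {
    std::normal_distribution<double> noise(0.0, 1.0);
    ParamSet child = parent;
    for (double& v : child.values) {
        // A lower learning rate means smaller steps per generation, so the
        // population converges (and loses diversity) more slowly.
        v += learningRate * noise(rng);
    }
    return child;
}
```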
Due to the lower learning rate the second run converged more slowly, but as fewer mutations occurred as well, the final entropy after 1,100 generations was about the same.
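As a rough illustration of what such an entropy figure measures, assuming a population of bit-encoded parameter sets, the average Shannon entropy per bit position is one common diversity measure; this is only a sketch, not the actual tuning code:

```cpp
// Illustrative only: average bitwise Shannon entropy of a population of
// bit-encoded parameter sets. 1.0 = fully diverse, 0.0 = fully converged.
#include <cmath>
#include <cstddef>
#include <vector>

double populationEntropy(const std::vector<std::vector<bool>>& genomes) {
    if (genomes.empty() || genomes[0].empty()) return 0.0;
    const std::size_t bits = genomes[0].size();
    double sum = 0.0;
    for (std::size_t i = 0; i < bits; ++i) {
        std::size_t ones = 0;
        for (const auto& g : genomes) ones += g[i] ? 1 : 0;
        const double p = static_cast<double>(ones) / genomes.size();
        if (p > 0.0 && p < 1.0)
            sum += -(p * std::log2(p) + (1.0 - p) * std::log2(1.0 - p));
    }
    return sum / bits;  // average entropy per bit position
}
```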
[Figure: Convergence comparison between both runs]
[Figure: Average population search depth for midgame positions (moves 10 - 25)]
The convergence towards an aggressive parameter set also shows in the search depth that the winning engine reaches in the final round of the knock-out tournament in each generation. A minimal sketch of such a single-elimination selection follows below.
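The sketch assumes a hypothetical playMatch callback that decides each pairing; it is not the engine's actual selection code:

```cpp
// Sketch of single-elimination selection over candidate indices.
// 'playMatch' is a stand-in for the real game-playing code: it receives
// two candidate indices and returns the index of the winner.
#include <cstddef>
#include <functional>
#include <vector>

std::size_t knockoutWinner(std::size_t numCandidates,
                           const std::function<std::size_t(std::size_t, std::size_t)>& playMatch) {
    std::vector<std::size_t> round(numCandidates);
    for (std::size_t i = 0; i < numCandidates; ++i) round[i] = i;

    while (round.size() > 1) {
        std::vector<std::size_t> next;
        for (std::size_t i = 0; i + 1 < round.size(); i += 2)
            next.push_back(playMatch(round[i], round[i + 1]));
        if (round.size() % 2 == 1)          // odd candidate gets a bye
            next.push_back(round.back());
        round = std::move(next);
    }
    return round.front();                   // final-round winner of this generation
}
```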
[Figure: evol-2 tournament winner depth, incl. trend line]
[Figure: Futility pruning safety margins]
[Figure: Elo development (engine base version = 0)]
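The futility pruning safety margins are one of the tuned search parameter groups. For readers unfamiliar with the technique, here is a minimal sketch of how such a margin is typically applied near the leaves; the margin values and names below are made up for illustration, they are not the engine's actual numbers:

```cpp
// Illustrative only: classic futility pruning at shallow remaining depth.
// If the static evaluation plus a depth-dependent safety margin still cannot
// reach alpha, the quiet move is skipped. These margins are exactly the kind
// of parameter an evolutionary run can tune.
#include <array>

constexpr std::array<int, 4> kFutilityMargin = {0, 120, 220, 340};  // per depth, in centipawns (hypothetical)

bool futilityPrune(int depth, int staticEval, int alpha, bool inCheck, bool isQuietMove) {
    if (depth >= static_cast<int>(kFutilityMargin.size())) return false;
    if (inCheck || !isQuietMove) return false;             // never prune tactical positions
    return staticEval + kFutilityMargin[depth] <= alpha;   // hopeless: skip this move
}
```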
The outcome of the 2nd run seems a bit stronger, but considering the error bars the two are really close together.
Rank  Name            Elo   +   -  games  score  oppo.  draws
   1  evol-v2.1100     70   6   6  10004    54%     43    42%
   2  evol-v1.1100     62   6   6  10004    52%     49    41%
   3  iCE 0.4 v1604     0   7   7   6004    40%     66    40%
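As a rough sanity check of the table (the ratings above presumably come from a rating tool with its own statistical model, so the numbers will not match exactly), the simple logistic Elo model relates a score percentage and the average opponent rating to a performance rating:

```cpp
// Rough approximation, not the rating tool's actual model: under the logistic
// Elo model, a score fraction s against opposition of average rating 'oppoElo'
// corresponds to a performance of oppoElo + 400 * log10(s / (1 - s)).
// E.g. 54% against roughly 43-rated opposition gives about 43 + 28 = 71 Elo,
// close to the 70 listed above.
#include <cmath>

double performanceElo(double score, double oppoElo) {
    return oppoElo + 400.0 * std::log10(score / (1.0 - score));
}
```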
Maybe it is just coincidence, or maybe it is an indication of how much the strength of my engine with the current code base can be improved just by tuning the search parameters.
I will have to think about it and decide how to proceed from here. Maybe I will perform a third run to test my hypothesis.