I just add a simple pattern that might be missing or remove one that overlaps with existing patterns.
This is usually a five-minute coding job. The tricky part is finding out whether the change improves the engine or weakens it. As the changes are tiny, the original and the patched engine are almost identical in strength, probably within a range of 5 ELO.
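To give an idea of the scale, a patch in the spirit of "a simple pattern that might be missing" could look like the sketch below: a small bonus for rooks on the seventh rank, written against a plain bitboard so it compiles on its own. The pattern, the names and the value are purely illustrative and are not taken from iCE.

```cpp
#include <bit>
#include <cstdint>

// Hypothetical "tiny" evaluation patch: a small bonus for every white rook on
// the 7th rank.  Bitboard layout a1 = bit 0 ... h8 = bit 63 is assumed; the
// bonus value is made up for illustration.
constexpr int      ROOK_ON_SEVENTH_BONUS = 20;                    // centipawns
constexpr uint64_t RANK_7_MASK           = 0x00FF000000000000ULL; // bits 48-55

int rookOnSeventhBonus(uint64_t whiteRooks)
{
    // Count the rooks standing on the 7th rank and score each of them.
    return ROOK_ON_SEVENTH_BONUS * std::popcount(whiteRooks & RANK_7_MASK);
}
```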
The only way to get a feeling for the quality of such a patch is to play a huge number of games.
I currently use the following test methodology.
- Code the change and test it for functional correctness. For the evaluation I have an automated unit test that must pass (a sketch of one such check follows after this list).
- Compile a release version of the engine.
- Set up a match using cutechess-cli as the tournament manager (see the example command after this list). I use 4 parallel threads on my 4-core i7 and a huge, balanced opening book from which the start positions are selected at random. Each opening is played twice, so each engine gets the chance to play it as White and as Black.
- I run 16,000 games and don't stop early.
- Feed the resulting match file into bayeselo to calculate the strength difference and statistical error bars.
- Decide whether to keep or discard the change.
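As an illustration of what the evaluation check in the first step can look like: a classic sanity test is a symmetry test, where a position and its color-flipped mirror must get exactly the same score from a side-to-move-relative evaluation. The sketch below only shows the idea and is written against two assumed callbacks (evaluate and mirrorFEN), not against iCE's real interface.

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Symmetry check for a static evaluation.  Assumption: 'evaluate' scores a FEN
// from the side to move's point of view, and 'mirrorFEN' flips the board
// vertically and swaps the colors (including the side to move).  Under that
// convention both positions must score identically; any asymmetric term in
// the evaluation makes the check fail.
bool evaluationIsSymmetric(
    const std::vector<std::string>& fens,
    const std::function<int(const std::string&)>& evaluate,
    const std::function<std::string(const std::string&)>& mirrorFEN)
{
    bool ok = true;
    for (const std::string& fen : fens)
    {
        const int original = evaluate(fen);
        const int flipped  = evaluate(mirrorFEN(fen));
        if (original != flipped)
        {
            std::cerr << "Asymmetric evaluation for " << fen << ": "
                      << original << " vs " << flipped << "\n";
            ok = false;
        }
    }
    return ok;
}
```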
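For reference, the cutechess-cli setup from the list above looks roughly like the following command. Engine paths and names, the book file and the time control are placeholders, and the exact options can differ between cutechess-cli versions.

```
cutechess-cli \
    -engine cmd=./ice_patched name=iCE-patched \
    -engine cmd=./ice_base    name=iCE-base \
    -each proto=uci tc=10+0.1 \
    -openings file=book.pgn format=pgn order=random \
    -repeat -games 2 -rounds 8000 \
    -concurrency 4 \
    -pgnout match.pgn
```

With -repeat and -games 2 each randomly chosen opening is played with reversed colors, and 8,000 rounds of 2 games give the 16,000 games of the test run.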
If a patch shows an improvement of 9 ELO or more, the decision is easy. In some cases I might even keep a 5 ELO patch if it simplifies the evaluation, which is of course a bit risky.
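To put those thresholds in relation to the error bars: bayeselo fits a Bayesian model, but a rough classical approximation of the Elo difference and its 95% confidence interval can be computed directly from the win/draw/loss counts, as in the sketch below (the match result in main() is made up, not a real iCE run). At 16,000 games the 95% error bars come out at roughly ±4 ELO, which is why a measured 9 ELO is convincing while 5 ELO is borderline.

```cpp
#include <cmath>
#include <cstdio>

// Rough classical estimate of the Elo difference and its 95% confidence
// interval from a match result.  bayeselo uses a Bayesian model, so its
// numbers will differ slightly; this only shows the order of magnitude of
// the error bars for a given number of games.
void eloEstimate(int wins, int draws, int losses)
{
    const double games = wins + draws + losses;
    const double score = (wins + 0.5 * draws) / games;          // mean score per game
    // Per-game variance of the score (win = 1, draw = 0.5, loss = 0).
    const double variance    = (wins * 1.0 + draws * 0.25) / games - score * score;
    const double stderrScore = std::sqrt(variance / games);

    // Convert a score expectation into an Elo difference.
    auto toElo = [](double s) { return -400.0 * std::log10(1.0 / s - 1.0); };

    const double elo     = toElo(score);
    const double eloLow  = toElo(score - 1.96 * stderrScore);
    const double eloHigh = toElo(score + 1.96 * stderrScore);

    std::printf("Elo %+.1f  (95%% interval %+.1f .. %+.1f)\n", elo, eloLow, eloHigh);
}

int main()
{
    // Illustrative 16,000-game result with a realistic draw rate; this prints
    // an Elo difference of roughly +8.7 with an error bar of about +/-4.
    eloEstimate(4700, 7000, 4300);   // made-up numbers, not a real match
    return 0;
}
```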
On the other hand, changing something in the evaluation pulls it slightly out of its former balance. So a change that measures -2 ELO might still be very good once that balance is restored.
With that reasoning I also accept some changes that are not improvements beyond doubt. They might show their real potential when I re-tune the evaluation in a later phase.
When I'm done with my todo list of small changes, I will run the final version against the first version (iCE 1.0). Hopefully I will then see an improvement beyond doubt. Fingers crossed.