Back with a whimper !

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

mridul

Back with a whimper !

Post by mridul »

Hi all,

After a hiatus of more than a couple of years, and a few years of not much activity before that, I hope to be able to start on chess with some interest again.

Still reading up on the various ideas and techniques discussed here and elsewhere, but in the meantime I plan to:


a) Build something to test and train endgame play using ground-truth results (endgame tablebases as truth data, for example).

Has anyone done something similar?


b) I hate, and have always hated, manually tuning tables ... but until now it has been unavoidable for me.
Has anything interesting been done (or is being done) on this front? (I have gone through the DT papers, and that approach never worked for me.)
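For (a), here is one hedged sketch of what such a tester could look like: score an evaluation function by how often the sign of its score agrees with the tablebase WDL (win/draw/loss) truth label for the side to move. Everything here is hypothetical scaffolding - `eval_fn`, the draw margin, and the position representation are assumptions for illustration, not any particular engine's or tablebase library's API:

```python
def wdl_agreement(eval_fn, labeled_positions):
    """Fraction of positions where the sign of eval_fn matches the
    tablebase WDL truth label (+1 win, 0 draw, -1 loss, side to move).

    labeled_positions: iterable of (position, wdl) pairs, where wdl is
    the tablebase truth value.  eval_fn maps a position to a score in
    centipawns; scores inside the draw margin count as predicting a draw.
    """
    DRAW_MARGIN = 50  # centipawns; an assumed threshold, tune as needed
    agree = 0
    total = 0
    for pos, wdl in labeled_positions:
        score = eval_fn(pos)
        if abs(score) <= DRAW_MARGIN:
            predicted = 0
        elif score > 0:
            predicted = 1
        else:
            predicted = -1
        agree += (predicted == wdl)
        total += 1
    return agree / total if total else 0.0
```

Positions where the prediction disagrees with the tablebase are exactly the ones worth feeding back into tuning or training.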


Any other suggestions on what I should look at?


On a slightly related note - I might need to upgrade my box to something 'powerful': suggestions?
Also, any idea when the next tourney is? I need some time to write up something decent and tested ... but would love to join for the heck of it!

Thanks in advance and glad to be back :-)
Looking forward to meeting all the old folks and the new innovators!

Regards,
Mridul
Gerd Isenberg
Posts: 2250
Joined: Wed Mar 08, 2006 8:47 pm
Location: Hattingen, Germany

Re: Back with a whimper !

Post by Gerd Isenberg »

Hi Mridul,
welcome back!

I still love CC without all that WinBoard or UCI stuff - no becoming a slave to test procedures ;-)

Bit-twiddling, fill and SIMD stuff, dot products in eval, and playing with ANNs.

Best,
Gerd
Dann Corbit
Posts: 12541
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: Back with a whimper !

Post by Dann Corbit »

For learning, most people use td-lambda or td-leaf. KnightCap is a sample engine that used learning for improvement of evaluation.
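For concreteness, a minimal sketch of the TDLeaf(λ) weight update for a linear evaluation, in the spirit of what KnightCap does - this illustrates the update rule only, and is not KnightCap's actual code; the value/gradient inputs, learning rate, and λ are all assumed for the example:

```python
def td_leaf_update(values, grads, weights, alpha=0.01, lam=0.7):
    """TDLeaf(lambda) update for a linear evaluation.

    values:  evaluation of the search leaf after each position in the
             game, from the learner's point of view.
    grads:   gradient of the evaluation w.r.t. the weights at each
             position (for a linear eval, the feature vector).
    weights: evaluation weights, updated in place and returned.
    """
    n = len(values)
    # temporal differences between successive leaf evaluations
    deltas = [values[t + 1] - values[t] for t in range(n - 1)]
    for t in range(n - 1):
        # lambda-discounted sum of future temporal differences
        err = sum(lam ** (j - t) * deltas[j] for j in range(t, n - 1))
        for i in range(len(weights)):
            weights[i] += alpha * grads[t][i] * err
    return weights
```

With lam near 1 the update credits each position for all later swings in evaluation; with lam near 0 it only looks one move ahead.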

I have a procedure I like to use to find a good starting point for evaluation terms:

Fit a parabolic curve through data points.

It is easy to use this technique to create the strongest possible tactical engine. It is very difficult to use this technique to create a strong game playing engine. But the tactical constants are probably useful as a starting point.
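The parabola fit can be sketched as a plain least-squares fit of y = ax² + bx + c through (parameter value, measured score) points, taking the vertex of the fitted curve as the starting value for the term being tuned. This is a generic illustration under those assumed inputs, not Dann's actual procedure:

```python
def fit_parabola(points):
    """Least-squares fit of y = a*x^2 + b*x + c through (x, y) points.

    Solves the 3x3 normal equations with Cramer's rule; assumes at
    least three points with distinct x values.
    """
    n = len(points)
    sx = sum(x for x, _ in points)
    sx2 = sum(x ** 2 for x, _ in points)
    sx3 = sum(x ** 3 for x, _ in points)
    sx4 = sum(x ** 4 for x, _ in points)
    sy = sum(y for _, y in points)
    sxy = sum(x * y for x, y in points)
    sx2y = sum(x * x * y for x, y in points)

    # Normal equations: M * [a, b, c]^T = v
    M = [[sx4, sx3, sx2],
         [sx3, sx2, sx],
         [sx2, sx, n]]
    v = [sx2y, sxy, sy]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(M)
    coeffs = []
    for col in range(3):
        mc = [row[:] for row in M]
        for r in range(3):
            mc[r][col] = v[r]
        coeffs.append(det3(mc) / d)
    return coeffs  # a, b, c

def vertex(a, b, c):
    """x value at the extremum of the fitted parabola: -b / 2a."""
    return -b / (2 * a)
```

Sample the score at a handful of values of one evaluation term, fit, and use the vertex as the starting point before finer tuning.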

There is also the famous book and position learning.

However, the caveat on learning techniques has to be:
No learning technique that I am aware of has ever done as well as hand tuned values.

I have some ideas that I hope will eventually change that, but it is a very difficult problem.

Dave Gomboc did an experiment with evolution techniques to improve chess evaluation which had an interesting result.

You can probably find some interesting papers on learning in games here:
http://cap.connx.com/chess-papers/
mridul

Re: Back with a whimper !

Post by mridul »

Great to hear from you Gerd !
I was just looking at my old code - wish I had commented it better :-)
It will be an uphill task to modify that, or to come up with something new ...

How are the new processors doing, btw? I will probably contact you offline about performance aspects ... I have not really been keeping in touch with the latest developments from Intel or AMD ...


Thanks,
Mridul
mridul

Re: Back with a whimper !

Post by mridul »

Dann Corbit wrote:For learning, most people use td-lambda or td-leaf. KnightCap is a sample engine that used learning for improvement of evaluation.

I have a procedure I like to use to find a good starting point for evaluation terms:

Fit a parabolic curve through data points.

It is easy to use this technique to create the strongest possible tactical engine. It is very difficult to use this technique to create a strong game playing engine. But the tactical constants are probably useful as a starting point.

There is also the famous book and position learning.

However, the caveat on learning techniques has to be:
No learning technique that I am aware of has ever done as well as hand tuned values.

General machine learning theories and ideas never worked for me when it came to chess: too many dependent interactions ... and splitting it all out into individual features so that they are independent just resulted in an explosion of features. At least for me, it was not practical to train (the dataset sizes required, slowness in the code - my code is already piss slow!, etc.).
Simple things like pin analysis, mobility, etc. interact with other aspects too much ...


That being said, I can't let the status quo continue and rely only on hand-tuned values! Too much guesswork - I want some way to validate them, at the least :-)


Going over your FTP now - thanks for the link!

Regards,
Mridul