Great stuff Matthew!
I live in Bristol and randomly picked up a mainstream newspaper the other day to see a review of your engine!
Please keep going with this before trying other games - chess needs you. Also, any joy with the university cluster - will you get to use it long term?
New Giraffe (Sept 8)
Moderator: Ras
-
- Posts: 793
- Joined: Sun Aug 03, 2014 4:48 am
- Location: London, UK
Re: New Giraffe (Sept 8)
Werewolf wrote:Great stuff Matthew!
I live in Bristol and randomly picked up a mainstream newspaper the other day to see a review of your engine!
Please keep going with this before trying other games - chess needs you. Also, any joy with the university cluster - will you get to use it long term?

Wow, cool! I didn't know it was in a mainstream newspaper!
I won't get to continue using the university cluster, unfortunately.

Disclosure: I work for DeepMind on the AlphaZero project, but everything I say here is personal opinion and does not reflect the views of DeepMind / Alphabet.
-
- Posts: 12792
- Joined: Wed Mar 08, 2006 8:57 pm
- Location: Redmond, WA USA
Re: New Giraffe (Sept 8)
I think the idea of using neural nets to choose the search parameters is even more interesting than the evaluation.
Something of note:
Typical chess engines will guess right on the PV node more than 90% of the time (much higher than your current net-driven approach).
A hybrid approach might be very interesting indeed. Perhaps you can add "memory" to the neural net search choice in an approach similar to IID.
I think your chess paper is the most interesting neural approach to chess that I have ever read. It was very clear and well written. Thank you for your efforts in this regard.
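The "memory similar to IID" idea above can be illustrated with a toy sketch. This is hypothetical code, not Giraffe's: internal iterative deepening (IID) runs a shallow search at a node that has no remembered best move, so the transposition table can suggest which move to try first at full depth. The game tree, leaf values, and `tt` here are all invented for illustration.

```python
import math

# Toy game tree: each position maps to its child positions;
# leaves carry static values (from the side to move's perspective).
TREE = {
    "root": ["a", "b", "c"],
    "a": ["a1", "a2"], "b": ["b1", "b2"], "c": ["c1", "c2"],
}
LEAF_VALUES = {"a1": 3, "a2": -1, "b1": 5, "b2": 2, "c1": -4, "c2": 7}

tt = {}  # transposition table: position -> best move found so far


def negamax(pos, depth, alpha, beta):
    children = TREE.get(pos)
    if depth == 0 or not children:
        return LEAF_VALUES.get(pos, 0)
    # IID: no remembered best move and enough depth left, so run a
    # shallow search first; it fills the table with a first move to try.
    if pos not in tt and depth >= 2:
        shallow = depth - 2 if depth - 2 > 0 else 1
        negamax(pos, shallow, alpha, beta)
    # Try the remembered move first, then the rest in natural order.
    order = children[:]
    if pos in tt and tt[pos] in order:
        order.remove(tt[pos])
        order.insert(0, tt[pos])
    best = -math.inf
    for child in order:
        score = -negamax(child, depth - 1, -beta, -alpha)
        if score > best:
            best = score
            tt[pos] = child  # the "memory": remember the best move
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # beta cutoff
    return best
```

On this toy tree, `negamax("root", 2, -math.inf, math.inf)` returns 2, and the table afterwards remembers `"b"` as the best root move.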
-
- Posts: 7251
- Joined: Mon May 27, 2013 10:31 am
Re: New Giraffe (Sept 8)
Approximately what Elo would Giraffe achieve if you left out only the sliding-piece mobility and the attack and defend maps? (And redid the tuning, of course.)
-
- Posts: 793
- Joined: Sun Aug 03, 2014 4:48 am
- Location: London, UK
Re: New Giraffe (Sept 8)
Henk wrote:Approximately what Elo would Giraffe achieve if you left out only the sliding-piece mobility and the attack and defend maps? (And redid the tuning, of course.)

Leaving out the attack and defend maps is actually something I wanted to try (and probably will try at some point).
I won't leave out sliding-piece mobility, because that's very fast to compute.
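A hypothetical sketch of why sliding-piece mobility is so cheap: once an engine has the piece's attack set (real engines get this almost for free from magic bitboards plus a popcount), mobility is just a count of reachable squares. This pure-Python ray walk is only illustrative, not how Giraffe or any bitboard engine actually computes it.

```python
def rook_mobility(square, occupied):
    """Count squares a rook on `square` (0..63, a1 = 0) can reach.

    `occupied` is a set of occupied square indices; each ray stops at
    the first blocker, which still counts as a reachable square.
    """
    file, rank = square % 8, square // 8
    count = 0
    for df, dr in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        f, r = file + df, rank + dr
        while 0 <= f < 8 and 0 <= r < 8:
            count += 1
            if r * 8 + f in occupied:
                break  # blocker reached (capture square already counted)
            f, r = f + df, r + dr
    return count
```

On an empty board a rook always reaches 14 squares, e.g. `rook_mobility(0, set())` returns 14; a blocker on d7 cuts a d4 rook down to 13.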
-
- Posts: 793
- Joined: Sun Aug 03, 2014 4:48 am
- Location: London, UK
Re: New Giraffe (Sept 8)
Dann Corbit wrote:I think the idea of using neural nets to choose the search parameters is even more interesting than the evaluation.
Something of note:
Typical chess engines will guess right on the PV node more than 90% of the time (much higher than your current net-driven approach).
A hybrid approach might be very interesting indeed. Perhaps you can add "memory" to the neural net search choice in an approach similar to IID.
I think your chess paper is the most interesting neural approach to chess that I have ever read. It was very clear and well written. Thank you for your efforts in this regard.

Thanks!
The Giraffe approach is already hybrid. Neural network move sorting is only done for "other" moves. In other chess engines, this would be done with history tables.
It still uses hash move and killers when they are available. I looked for ways to incorporate that information into move representation for neural networks, but didn't really find something that works.
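The hybrid ordering described above can be sketched as follows. This is an illustrative reconstruction, not Giraffe's actual code: the hash move and killers keep their usual priority, and only the remaining "other" moves are sorted by a scoring function, which in Giraffe is the neural network and in most engines is a history table. The move strings and `score_fn` stub here are invented.

```python
def order_moves(moves, hash_move, killers, score_fn):
    """Order moves: hash move first, then killers, then the rest
    sorted by score_fn (net output or history score), best first."""
    ordered = []
    if hash_move in moves:
        ordered.append(hash_move)
    ordered += [m for m in killers if m in moves and m not in ordered]
    rest = [m for m in moves if m not in ordered]
    rest.sort(key=score_fn, reverse=True)  # only "other" moves use the net
    return ordered + rest


# Example with stubbed scores standing in for the network:
scores = {"e2e4": 0.9, "d2d4": 0.5, "g1f3": 0.7, "h2h4": 0.1}
moves = ["e2e4", "d2d4", "g1f3", "h2h4"]
result = order_moves(moves, hash_move="g1f3", killers=["d2d4"],
                     score_fn=scores.get)
# result: ["g1f3", "d2d4", "e2e4", "h2h4"]
```

The hash move and killer jump the queue regardless of their scores; the net only has to rank what is left.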
Another idea I tried is to continue training the network during gameplay, so it can learn game-specific patterns. However, this is very difficult. I wasn't able to get it to work.