An engine has tuned parameters in the search and in the evaluation that make it find lines of play following its own "personality", if I may put it that way.
This is intrinsically bad, because sometimes that "personality" will miss things precisely because something is fixed (the parameters); even if those parameters are not completely fixed and allow some range of tolerance, something fixed is not intelligent. It is like someone who always plays attacking chess: you cannot expect him to outplay his rivals positionally very often, even when the position favors that.
So the queen value of, say, 950 cp, even when it is adjusted for mobility, open files, attacking chances and so on, will still be a bad estimate when the queen sits on a1 with zero mobility. Current engines are not able to understand this, nor will they be, because you would need many more parameters and much more code to account for every other possibility like this one.
Hence someone, or something, more intelligent, less reactive and more analytical will overcome its rivals, and AlphaZero is supposed to be there.
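To make the queen example concrete, here is a toy sketch of a material-plus-mobility term with made-up numbers (they are illustrative assumptions, not taken from any real engine's evaluation):

```python
# Toy illustration: a fixed material base plus a mobility bonus for the
# queen. Both constants are hypothetical. Even with zero mobility, the
# fixed 950 cp base dominates, so a queen shut in on a1 still looks
# almost full-strength to this kind of evaluation.
QUEEN_BASE_CP = 950     # fixed material value, in centipawns
MOBILITY_BONUS_CP = 4   # hypothetical bonus per reachable square

def queen_term(mobility_squares):
    """The queen's contribution to the evaluation, in centipawns."""
    return QUEEN_BASE_CP + MOBILITY_BONUS_CP * mobility_squares

print(queen_term(27))  # 1058: queen on an open board (up to 27 moves)
print(queen_term(0))   # 950: trapped on a1, yet still ~90% of the above
```

The point of the sketch is that no amount of tuning the two constants removes the problem: the fixed base term keeps asserting value the trapped queen does not actually have.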
Is modern chess software lossless or lossy?
Re: Is modern chess software lossless or lossy?
syzygy wrote:
The pure Type B programs overlooked too many important moves, so Type A programs took over. See also the Chess (Program) wiki page.
But then people started to add pruning and reduction heuristics like null-move pruning and late-move reductions, which again made the search selective.

Thank you! The historical perspective sheds a lot of light on what the other party had in mind. So it is technically true that "This approach was abandoned decades ago", but it was also re-introduced (a smaller number of) decades ago.
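To illustrate the kind of selectivity the quoted post describes, here is a minimal sketch of late-move reductions on a random game tree. It is a hypothetical toy, not any engine's actual code, and it shows only LMR; null-move pruning is a different heuristic with the same lossy character:

```python
import random

random.seed(7)

# A toy game tree: every node carries a static evaluation (from the side
# to move's point of view) and a list of child nodes (empty at leaves).
def build_tree(depth, branching=5):
    children = [] if depth == 0 else [build_tree(depth - 1, branching)
                                      for _ in range(branching)]
    return (random.randint(-100, 100), children)

def search(node, depth, alpha, beta, use_lmr):
    """Negamax with optional late-move reductions (LMR)."""
    static_eval, children = node
    if depth == 0 or not children:
        return static_eval
    best = -10**9
    for i, child in enumerate(children):
        if use_lmr and i >= 2 and depth >= 2:
            # Late move: try a shallower search first; if it still beats
            # alpha, re-search at full depth. If the reduced search
            # underestimates and fails low, the move is wrongly dismissed,
            # which is what makes the heuristic lossy.
            score = -search(child, depth - 2, -beta, -alpha, use_lmr)
            if score > alpha:
                score = -search(child, depth - 1, -beta, -alpha, use_lmr)
        else:
            score = -search(child, depth - 1, -beta, -alpha, use_lmr)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # ordinary beta cutoff: this part is still lossless
    return best

tree = build_tree(4)
full = search(tree, 4, -10**9, 10**9, use_lmr=False)
reduced = search(tree, 4, -10**9, 10**9, use_lmr=True)
print(full, reduced)  # the two root values can legitimately differ
```

The beta cutoff alone never changes the root value; it is the depth reduction that trades exactness for speed, which is the sense in which the modern search is "selective" again.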
Re: Is modern chess software lossless or lossy?
Fulvio wrote:
Meni Rosenfeld wrote:
Compared to vanilla Alpha-Beta, which only prunes irrelevant possibilities and is thus lossless.
So, which is true? And is there a clear, concise, reliable reference which unambiguously answers this question?
https://en.wikipedia.org/wiki/Alpha%E2% ... provements
"Further improvement can be achieved without sacrificing accuracy"
If your question is: is it mathematically proved that improved Alpha-Beta algorithms (with aspiration windows, etc...) give the same results as plain minimax? Then the answer is yes.

Well, no, that was not the question. I know that there are methods to improve efficiency which are lossless, i.e. give the exact same answer as full minimax. I was asking if lossy methods are also used, that is, methods which give a different result but save enough time (for a given depth) to be worth it. The answer, it turns out, is yes.
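For the lossless half of that distinction, the classical result is easy to check empirically: plain alpha-beta with a full initial window returns exactly the same root value as full minimax, it just visits fewer nodes. A small self-contained sketch (random trees, negamax formulation, all names hypothetical):

```python
import random

# A leaf is an int score for the side to move; an internal node is a
# list of children.
def build_tree(depth, branching, rng):
    if depth == 0:
        return rng.randint(-1000, 1000)
    return [build_tree(depth - 1, branching, rng) for _ in range(branching)]

def minimax(node):
    """Full-width negamax: the exact game-theoretic value of the tree."""
    if isinstance(node, int):
        return node
    return max(-minimax(child) for child in node)

def alphabeta(node, alpha=-10**9, beta=10**9):
    """Fail-hard alpha-beta; exact at the root with a full window."""
    if isinstance(node, int):
        return node
    for child in node:
        alpha = max(alpha, -alphabeta(child, -beta, -alpha))
        if alpha >= beta:
            break  # safe prune: remaining siblings cannot change the root value
    return alpha

rng = random.Random(0)
for _ in range(20):
    tree = build_tree(4, 4, rng)
    assert minimax(tree) == alphabeta(tree)
print("alpha-beta matched full minimax on all 20 random trees")
```

The lossy heuristics discussed in the thread (null-move pruning, late-move reductions, aggressive futility margins) are precisely the ones for which no such equality holds.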