What would you do differently

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

User avatar
lucasart
Posts: 3232
Joined: Mon May 31, 2010 1:29 pm
Full name: lucasart

Re: What would you do differently

Post by lucasart »

emadsen wrote:
If you were restarting from scratch, what would you do differently?
Hey Lucas. I just started rewriting MadChess using procedural code. To date, I've written it using object-oriented code, which I know performs worse. But I wanted to establish a benchmark using a familiar coding style, which I have: ~2250 ELO.

I'm still coding using C# but eliminating the OO abstractions and focusing on performance instead of code readability. I'm still using mailbox board representation because, quite frankly, I don't understand bitboards and all their clever bit-shifting. And I don't want to copy-paste code I don't understand.

Maybe one of these days I'll graduate to pure C and bitboards- and actually understand the code :). For now, I'll just make MadChess procedural.
It's interesting that you oppose OOP to performance. What I would say is:
1. equivalent code in procedural or OO form should have roughly equivalent performance
2. OO is often misused, and most programmers drank too much OOP kool-aid, which results in absolutely stupid and crappy OO models that make the problem far more complicated than it should be, and result in unmaintainable and inefficient code.

It's not OOP itself that results in poor performance. It's misused OO, which causes bad design decisions, hence inefficient and unmaintainable code.

There are problems for which OOP is the right way, for example GUI toolkits.

Here is the perfect example of what you should NOT do (this guy is high on OO kool-aid):
http://sharpchess.com/?page=50%20Develo ... ct%20Model

Trying to fit a class hierarchy on everything is akin to trying to fit squares in round holes. That's why Java is the worst programming language ever created. It has deformed the minds of an entire generation of programmers into thinking that everything should be an object, which is an apparently clever-looking but really completely idiotic idea.
Theory and practice sometimes clash. And when that happens, theory loses. Every single time.
User avatar
emadsen
Posts: 434
Joined: Thu Apr 26, 2012 1:51 am
Location: Oak Park, IL, USA
Full name: Erik Madsen

Re: What would you do differently

Post by emadsen »

Eh... a language war. I don't know how I managed to start one when I was saying I'm coming around to the point of view of most chess programmers: that procedural, data-oriented code works best for a chess engine.

I can't make sense of most of what you write. In my line of work OO design is a conscious trade of performance for code readability / maintainability.
Equivalent code in procedural or OO form should have roughly equivalent performance.
I don't know what that is. Sounds like a science project to me. Why prove the equivalence of two features that have drastically different value to the business?

I created an OO model for MadChess that is similar to SharpChess. I don't understand why this is dismissed with hand-waving when there exist numerous procedural programs of equal or lesser strength. But you're missing the point. I wrote an OO chess engine to answer the question: what are the capabilities of an OO chess engine? To answer the question of what a procedural chess engine is capable of, I'll write a procedural chess engine. This is all done for intellectual satisfaction, not religious reasons.
Java is the worse programming language ever created. It has deformed the minds...
Managed code + OO design solves real business problems. To accept your point of view, you're asking me to believe that businesses, in a world of intense competition and tight margins, are selecting modern development technologies out of mental derangement rather than a profit motive. You're nuts to believe that.
My C# chess engine: https://www.madchess.net
mar
Posts: 2554
Joined: Fri Nov 26, 2010 2:00 pm
Location: Czech Republic
Full name: Martin Sedlak

Re: What would you do differently

Post by mar »

BeyondCritics wrote:there is nothing exceptional.
That's exactly the point :)
The idea is simply to use one dynamic array (vector) to store entries (items) and another slim one to maintain head of a singly-linked list (using indices rather than pointers).
Another advantage of this is that as long as you don't remove entries from the table, you can iterate over the elements in the order in which they were inserted.
The only drawback is that using 32-bit indices, the table is limited to 1G entries (but seriously who needs more?). Of course you can use 64-bit indices at the expense of using more memory.
You are using a fast modulo implementation, simply cutting off the high bits. I "corrected" for that, replacing

Code: Select all

size_t index = h & ((size_t)sz-1);
with

Code: Select all

size_t index = h % ((size_t)sz);
but it is still roughly two times slower.
I would be careful here, AFAIK ARM doesn't have an instruction for integer division (maybe the latest chips do?), which can be a performance killer.
Of course your change allows you to be more conservative about memory usage (but that would also increase the load factor),
yet I still think that having 2x as many entries in the slim table (32-bit ints) is OK if you consider that the bookkeeping per block on the heap is usually twice the size of a pointer
(sometimes half that, if you're lucky and can reuse space left after the block due to alignment).
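For anyone following along, the scheme described above can be sketched like this: one flat array holds the entries, and a second slim array of 32-bit indices holds the head of a singly-linked chain per bucket, with `UINT32_MAX` as the "empty" sentinel. This is a hypothetical illustration, not mar's actual code; the names and sizes are made up.

```c
#include <stdint.h>

/* Illustrative sketch of an index-based chained hash table:
   entries live in one flat array (preserving insertion order),
   heads[] holds a 32-bit index per bucket instead of a pointer. */

#define TABLE_SIZE 1024          /* power of two, so masking works as fast modulo */
#define MAX_ENTRIES 4096
#define EMPTY UINT32_MAX         /* sentinel: no entry in this bucket/chain */

typedef struct {
    uint64_t key;
    int      value;
    uint32_t next;               /* index of next entry in the chain */
} Entry;

static Entry    entries[MAX_ENTRIES];
static uint32_t count;
static uint32_t heads[TABLE_SIZE];

static void table_init(void) {
    count = 0;
    for (int i = 0; i < TABLE_SIZE; i++) heads[i] = EMPTY;
}

static void table_insert(uint64_t key, int value) {
    uint32_t bucket = (uint32_t)(key & (TABLE_SIZE - 1)); /* fast modulo */
    entries[count].key = key;
    entries[count].value = value;
    entries[count].next = heads[bucket];   /* prepend to the bucket's chain */
    heads[bucket] = count++;
}

static int table_find(uint64_t key, int *value) {
    for (uint32_t i = heads[key & (TABLE_SIZE - 1)]; i != EMPTY; i = entries[i].next)
        if (entries[i].key == key) { *value = entries[i].value; return 1; }
    return 0;
}
```

As long as nothing is removed, walking `entries[0..count)` visits items in insertion order, which is the iteration property mentioned above.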
User avatar
hgm
Posts: 27787
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: What would you do differently

Post by hgm »

As I have started all over from scratch so many times already, there is very little that I would do different now.

One thing I still regret in Shokidoki is that it does not use a piece list, but scans the board to find its own pieces to generate moves for. I figured that, since almost all pieces in games with drops stay on the board, and potentially all pieces could be transferred to one side, the piece list would have to be nearly as long as the board, so there would be little benefit. But the absence of a piece list makes it difficult in evaluation to do things like looping over all Pawns. I guess I can still repair this by adding a piece list, together with a bitmap indicating which pieces are present, so the scan through it can be sped up by bit-extraction techniques.
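The bit-extraction scan mentioned above could look something like this (a hypothetical sketch using the GCC/Clang `__builtin_ctzll` builtin; names are illustrative, not Shokidoki's code): each set bit in the presence bitmap marks an occupied slot in the piece list, so empty slots are skipped in O(pieces present) rather than O(slots).

```c
#include <stdint.h>

/* Sketch: extract the occupied slots of a piece list from a presence
   bitmap, visiting only slots whose bit is set. */
static int scan_pieces(uint64_t present, int slots[64]) {
    int n = 0;
    while (present) {
        int slot = __builtin_ctzll(present); /* index of lowest set bit */
        slots[n++] = slot;                   /* visit the piece in this slot */
        present &= present - 1;              /* clear the lowest set bit */
    }
    return n;
}
```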

In HaChu I regret the format I have chosen for the attack map, which contains the number of attacks from each direction on occupied squares. It seems much more efficient to just store the distance to the nearest obstacle in every direction. But also here it is not such a fundamental change that it would require a complete rewrite.
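The "distance to the nearest obstacle" representation described above can be computed per square and direction roughly like this (a made-up sketch with an illustrative 8x8 board encoding, not HaChu's actual data structures):

```c
/* Sketch: number of steps from (r,f) along direction (dr,df) to the
   nearest occupied square, or to the board edge if the ray is empty. */
static int ray_distance(int board[8][8], int r, int f, int dr, int df) {
    int d = 0;
    for (;;) {
        r += dr; f += df;
        if (r < 0 || r > 7 || f < 0 || f > 7) return d;  /* ran off the edge */
        d++;
        if (board[r][f]) return d;  /* nearest obstacle found at distance d */
    }
}
```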

In general I regretted almost everything that I solved by dedicated code, rather than through some table-driven mechanism. Like testing where promotion is possible, where Pawns can do double-steps, how pieces of a certain type move, etc. It is so much more flexible to control things by preparing a table.
mvk
Posts: 589
Joined: Tue Jun 04, 2013 10:15 pm

Re: What would you do differently

Post by mvk »

lucasart wrote:Here is the perfect example of what you should NOT do (this guy is high on OO kool-aid):
http://sharpchess.com/?page=50%20Develo ... ct%20Model
That is a nice example indeed. Here is another one going overboard with programming language madness.

Anyway, I have such a thing in Rookie v3 also. There is a DSL in the initialization of the evaluation tables: a kind of regular expression language processor for pawn+king formations. I would not do it the same way again.

The anti-pattern is the same: excessive focus on the mechanics of programming, and not on the application at hand.
[Account deleted]
User avatar
lucasart
Posts: 3232
Joined: Mon May 31, 2010 1:29 pm
Full name: lucasart

Re: What would you do differently

Post by lucasart »

mvk wrote:
lucasart wrote:Here is the perfect example of what you should NOT do (this guy is high on OO kool-aid):
http://sharpchess.com/?page=50%20Develo ... ct%20Model
That is a nice example indeed. Here is another one going overboard with programming language madness.
"C++ is to C what lung cancer is to lung".
seriously, start using a little bit of C++, start making code more object oriented and "generic". you know how it begins, but you don't know where it ends. The truth is that it never ends, and you keep sinking further down the rabbit hole...

Once a project depends on Boost, it's too late for the chemotherapy, the cancer has spread too far already ^^

This trivial distance example is laughable, but it summarizes the problem well. Don't waste time on over-sophistication, just solve the problem. That's my advice anyway.
Theory and practice sometimes clash. And when that happens, theory loses. Every single time.
Rein Halbersma
Posts: 741
Joined: Tue May 22, 2007 11:13 am

Re: What would you do differently

Post by Rein Halbersma »

lucasart wrote:
mvk wrote:
lucasart wrote:Here is the perfect example of what you should NOT do (this guy is high on OO kool-aid):
http://sharpchess.com/?page=50%20Develo ... ct%20Model
That is a nice example indeed. Here is another one going overboard with programming language madness.
"C++ is to C what lung cancer is to lung".
seriously, start using a little bit of C++, start making code more object oriented and "generic". you know how it begins, but you don't know where it ends. The truth is that it never ends, and you keep sinking further down the rabbit hole...

Once a project depends on Boost, it's too late for the chemotherapy, the cancer has spread too far already ^^

This trivial distance example is laughable, but it summarizes the problem well. Don't waste time on over-sophistication, just solve the problem. That's my advice anyway.
Application code such as a chess engine is always much simpler than generic library code such as the STL or Boost, which tries to be useful to an unbounded set of potential users. "The" problem for libraries is solving a large set of problems at once.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: lazy eval

Post by bob »

mvk wrote:
jdart wrote:Yes, when I had lazy eval, I passed in the bounds.

But if you call it multiple times with different bounds, you can't cache and re-use the result. And this negates some of the benefit of lazy eval. Now I don't use it but the result of eval is cached (and stored in the hashtable).

--Jon
I see lazy eval as a pitfall. If you find it helps a lot, you are searching the wrong part of the tree and should work on that first. And then it keeps working against you when tuning the evaluation weights, and the tree shaping in general because of the lost fidelity: every change in the right direction now first must overcome a barrier raised by this premature optimisation. Development-wise, it therefore tends to keep you locked in a local optimum of program strength. I still have it in the code, but with huge bounds so it is effectively off. This might be different if you're within 30 elo of the top and must scrape the barrel.
I don't view it quite the same. In Crafty, we have several sorts of eval terms.

1. Those that produce big score changes

2. Those that produce small score changes but are computationally expensive.

There are others, but those work to make the point that one can use the second as a pretty safe lazy cutoff and avoid the hard stuff. In my case, pawns and passed pawns and such are "biggies". The individual piece stuff involves multiple loops. As I get further into the eval code, I can use a reduced lazy cutoff margin because I know that the remaining possible score changes are minimal.

I've mentioned previously that Bruce Moreland (Ferret) did this absolutely correctly by having a min/max score for each piece type. He incrementally updated the totals so that he knew exactly how big a range his score could cover, and if the current partial score + that margin is less than alpha, no point in going any further in the eval, with absolutely no risk.
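In code, the risk-free version described above might look like this. This is a hypothetical sketch, not Crafty's or Ferret's actual code: `margin` stands for the incrementally maintained sum of the maximum positional swing the pieces still on the board can contribute, and `expensive_terms` stands in for the costly remainder of the evaluation.

```c
/* Sketch of a risk-free lazy evaluation cutoff: if even the largest
   possible remaining positional swing cannot bring the score back
   inside the (alpha, beta) window, skip the expensive terms. */

static int expensive_terms(void);

static int evaluate(int partial, int margin, int alpha, int beta) {
    if (partial + margin <= alpha) return partial + margin; /* certain fail low  */
    if (partial - margin >= beta)  return partial - margin; /* certain fail high */
    return partial + expensive_terms(); /* window reachable: do the full eval */
}

static int expensive_terms(void) { return 25; } /* stand-in for the real terms */
```

Because `margin` is a true bound (no bishops on the board means zero bishop contribution to it), the cutoff never changes the result of the search, only how much work the eval does.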
User avatar
lucasart
Posts: 3232
Joined: Mon May 31, 2010 1:29 pm
Full name: lucasart

Re: lazy eval

Post by lucasart »

Even the Bruce Moreland no-risk way introduces inefficiency with fail soft. I use fail soft and no lazy eval: that's what works best for me. And I conjecture that lazy eval will not scale, like anything that compromises accuracy for a pesky speed gain (which gains less at LTC...)
Theory and practice sometimes clash. And when that happens, theory loses. Every single time.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: lazy eval

Post by bob »

lucasart wrote:Even the Bruce Moreland no-risk way introduces inefficiency with fail soft. I use fail soft and no lazy eval: that's what works best for me. And I conjecture that lazy eval will not scale, like anything that compromises accuracy for a pesky speed gain (which gains less at LTC...)
How can it introduce an inefficiency? You NEVER use a lazy cutoff test that is not 100% accurate, because you know the largest possible positional score for each piece type. If you have no bishops, there can be no score for bishops, and the lazy eval cutoff window shrinks.