tomitankChess - New JavaScript engine

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

tomitank
Posts: 276
Joined: Sat Mar 04, 2017 12:24 pm
Location: Hungary

Re: tomitankChess - New JavaScript engine

Post by tomitank »

Hi Gábor!

Thank you very much!
Yes, it is written that way deliberately :)

Graham has supposedly already started a test, but maybe not after all.
I would be glad to know where I currently stand after a credible measurement.
By the way, if you had time for it, I could send you the latest "stable" version in a message; it is of course not obfuscated, so it runs a bit faster.
A few parameters have also been fine-tuned in it. According to my measurements it should be stronger than what is shared on GitHub.
That will become version 1.5.

My current goal is to overtake Lozza 1.7, and with that take the lead among chess engines written in pure JavaScript.
As far as I know, Lozza is the strongest in the world that is actually written in this language (i.e. not compiled from C into some unreadable form).

-Tamas
User avatar
hgm
Posts: 27790
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: tomitankChess - New JavaScript engine

Post by hgm »

If we assume that fixed-size arrays containing simple numbers and indexed by numbers have an efficient implementation using 8 bytes per element, you could simply define the hash table as one big array, the even elements containing the signatures and the following odd elements containing everything else.

You would then have to pack and unpack that element yourself. But only after you have established that you have a hit. With bit-masking operations you can access only the lowest 32 bits of it, but this would be enough for move, depth and flags. The score you can retrieve by dividing the original FP number by 2^24; this would leave the other bits as a small fraction, but who cares?

Code: Select all

if(brd_HashTable[index] == hashKey) {             // signature matches: hash hit
  var flags = brd_HashTable[index+1] & -1;        // & -1 truncates the FP value to its low 32 bits
  var move = flags & 0x7FFF;                      // bits 0-14: the move
  var depth = flags >> 15 & 0x7F;                 // bits 15-21: the depth
  var score = brd_HashTable[index+1] / 0x1000000; // bits 24 and up: the score (low bits remain as a small fraction)
}
From the description of V8 it seems indeed to work differently. Variables are by default 31-bit integers, not 64-bit floating point. Only if they are something different (e.g. because they would overflow 31 bits, or because they are strings or objects) is an object created to hold them. Presumably the original int then acts as a pointer to that object, and perhaps indicates the 'hidden class' to which the object belongs. (I assume that the hidden class for objects that just contain a single 64-bit FP number is somehow implied, rather than having to be explicitly indicated in some other storage location, because it is so common. This is at least how I would do it.)

That would still mean that a general 64-bit number would need 12 bytes: a 4-byte index into the table of actual 64-bit values, plus the 8-byte value itself. The consequence is that we would be far better off using three 31-bit elements of a V8 array (12 bytes) than two 64-bit numbers there (24 bytes). On a conventional interpreter this would be 24 bytes vs. 16 bytes, however.
User avatar
hgm
Posts: 27790
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: tomitankChess - New JavaScript engine

Post by hgm »

I now see that there is also something called a 'typed array' in JavaScript. So it would be possible to specify that the array elements have to be 32-bit values, irrespective of the interpreter used. It seems just what we need ("brd_HashTable = new Uint32Array()"). Is this already universally supported?
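
For example (a sketch; the entry count and the three-words-per-entry layout are just assumptions):

Code: Select all

var HASH_ENTRIES = 1 << 20;                             // assumed table size
var brd_HashTable = new Uint32Array(3 * HASH_ENTRIES);  // 4 bytes per element, zero-initialized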
tomitank
Posts: 276
Joined: Sat Mar 04, 2017 12:24 pm
Location: Hungary

Re: tomitankChess - New JavaScript engine

Post by tomitank »

hgm wrote:I now see that there is also something called a 'typed array' in JavaScript. So it would be possible to specify that the array elements have to be 32-bit values, irrespective of the interpreter used. It seems just what we need ("brd_HashTable = new Uint32Array()"). Is this already universally supported?
tomitank wrote: Lozza optimizes this with V8 typed arrays, but Lozza does not run in some mobile browsers (because of the V8 arrays).
User avatar
hgm
Posts: 27790
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: tomitankChess - New JavaScript engine

Post by hgm »

Are you saying this is exclusive to V8? That wasn't clear at all to me in any of the places I saw it described. And the link you gave about how V8 stores data doesn't mention them at all. E.g. Microsoft describes them, and I thought V8 was Google/Chrome.
tomitank
Posts: 276
Joined: Sat Mar 04, 2017 12:24 pm
Location: Hungary

Re: tomitankChess - New JavaScript engine

Post by tomitank »

Typed array browser compatibility starts at e.g. Android 4.0.
Lozza crashes on my older Android mobile.
I wrote this engine mainly for my mobile app.

Here is a very useful description:
Typed Arrays were designed by the WebGL standards committee, for performance reasons. Typically Javascript arrays are generic and can hold objects, other arrays and so on - and the elements are not necessarily sequential in memory, like they would be in C. WebGL requires buffers to be sequential in memory, because that's how the underlying C API expects them. If Typed Arrays are not used, passing an ordinary array to a WebGL function requires a lot of work: each element must be inspected, the type checked, and if it's the right thing (e.g. a float) then copy it out to a separate sequential C-like buffer, then pass that sequential buffer to the C API. Ouch - lots of work! For performance-sensitive WebGL applications this could cause a big drop in the framerate.

On the other hand, like you suggest in the question, Typed Arrays use a sequential C-like buffer already in their behind-the-scenes storage. When you write to a typed array, you are indeed assigning to a C-like array behind the scenes. For the purposes of WebGL, this means the buffer can be used directly by the corresponding C API.

Note your memory address calculation isn't quite enough: the browser must also bounds-check the array, to prevent out-of-range accesses. This has to happen with any kind of Javascript array, but in many cases clever Javascript engines can omit the check when it can prove the index value is already within bounds (such as looping from 0 to the length of the array). It also has to check the array index is really a number and not a string or something else! But it is in essence like you describe, using C-like addressing.

BUT... that's not all! In some cases clever Javascript engines can also deduce the type of ordinary Javascript arrays. In an engine like V8, if you make an ordinary Javascript array and only store floats in it, V8 may optimistically decide it's an array of floats and optimise the code it generates for that. The performance can then be equivalent to typed arrays. So typed arrays aren't actually necessary to reach maximum performance: just use arrays predictably (with every element the same type) and some engines can optimise for that as well.

So why do typed arrays still need to exist?

- Optimisations like deducing the type of arrays are really complicated. If V8 deduces that an ordinary array has only floats in it, and then you store an object in an element, it has to de-optimise and regenerate code that makes the array generic again. It's quite an achievement that all this works transparently. Typed arrays are much simpler: they're guaranteed to be one type, and you just can't store other things like objects in them.
- Optimisations are never guaranteed to happen; you may store only floats in an ordinary array, but the engine may decide for various reasons not to optimise it.
- The fact that they're much simpler means other less-sophisticated JavaScript engines can easily implement them. They don't need all the advanced deoptimisation support.
- Even with really advanced engines, proving that optimisations can be used is extremely difficult and can sometimes be impossible. A typed array significantly simplifies the level of proof the engine needs to be able to optimise around it. A value returned from a typed array is certainly of a certain type, and engines can optimise for the result being that type. A value returned from an ordinary array could in theory have any type, and the engine may not be able to prove it will always have the same type, so it generates less efficient code. Therefore code around a typed array is more easily optimised.
- Typed arrays remove the opportunity to make a mistake. You just can't accidentally store an object and suddenly get far worse performance.

So, in short, ordinary arrays can in theory be as fast as typed arrays. But typed arrays make it much easier to reach peak performance.
Otherwise you are right! That would be better.
It is possible that I will not support old browsers...
I need to measure its usefulness.
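
As a small illustration of the point above (a sketch of the pattern only, not a benchmark):

Code: Select all

var plain = new Array(1000);
for (var i = 0; i < 1000; i++) plain[i] = 0; // only ints stored: the engine may pick a fast int layout
plain[0] = {};                               // one object store can deoptimise it to a generic array

var typed = new Int32Array(1000);            // guaranteed 32-bit ints, 4 bytes per element
typed[0] = 1.7;                              // out-of-type stores are simply coerced (stores 1)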
User avatar
hgm
Posts: 27790
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: tomitankChess - New JavaScript engine

Post by hgm »

Unfortunately the idea to use additive FP keys will not work, because adding FP numbers can cause loss of precision. So it would not be possible to update the key incrementally: if you later subtract the contribution of a (piece,square) key for UnMake, you might not get the same key as before you added it; the low-order bits could differ. Also, FP addition is not associative, and the order of additions would matter for the outcome. That means you might get different keys after transposition of the same moves.
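
A quick illustration of the non-associativity, using plain JavaScript doubles (the values are chosen only to make the rounding visible):

Code: Select all

var a = Math.pow(2, 53), b = 1, c = -Math.pow(2, 53);
console.log((a + b) + c); // 0: the 1 is rounded away when added to 2^53
console.log(a + (b + c)); // 1: b + c = 1 - 2^53 is exactly representable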

The whole problem would go away if you had sufficiently many zero bits at the low-order end of the Zobrist keys. But then you would no longer have 52 bits of effective key length.

So I guess that kills the idea.

I read that most newer JavaScript interpreters are also clever (although perhaps not as clever as V8), and can recognize whether all elements that a program stores in an array are of the same type (e.g. 32-bit int), and then automatically treat that array as if it were an Int32Array. So perhaps it is just a matter of starting by initializing the whole hashTable array to -300, to allow clever interpreters to recognize that they can treat it all as 32-bit integers, without the need to keep track of the individual data type of each element. And then pack an entry in 3 ints (e.g. key signature, score+depth, move+flags).
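
A minimal sketch of what such a packed entry could look like, assuming 20-bit moves, 2-bit flags, 7-bit depth and a 15-bit score stored with an offset (the names and field widths are only for illustration):

Code: Select all

// store, after the whole array was initialized to ints (e.g. -300) as suggested above
hashTable[index]     = hashKey & 0x7FFFFFFF;            // 31-bit key signature
hashTable[index + 1] = (score + 0x4000) | depth << 15;  // 15-bit offset score + 7-bit depth
hashTable[index + 2] = move | flags << 20;              // 20-bit move + 2-bit flags

// probe: unpack only after the signature matched
if (hashTable[index] == (hashKey & 0x7FFFFFFF)) {
  var score = (hashTable[index + 1] & 0x7FFF) - 0x4000;
  var depth = hashTable[index + 1] >> 15 & 0x7F;
  var move  = hashTable[index + 2] & 0xFFFFF;
  var flags = hashTable[index + 2] >> 20 & 3;
}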
tomitank
Posts: 276
Joined: Sat Mar 04, 2017 12:24 pm
Location: Hungary

Re: tomitankChess - New JavaScript engine

Post by tomitank »

hgm wrote:So perhaps it is just a matter of starting by initializing the whole hashTable array to -300,
It will not work.
Array init with length (28 << 20) / 14 is too big for most web browsers. It's like an infinite loop.
Only this works for me:
hashArray = null;
hashArray = new Array(ENTRIES);
Every element is "null".
User avatar
hgm
Posts: 27790
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: tomitankChess - New JavaScript engine

Post by hgm »

tomitank wrote:
hgm wrote:So perhaps it is just a matter of starting by initializing the whole hashTable array to -300,
It will not work.
Array init with length (28 << 20) / 14 is too big for most web browsers. It's like an infinite loop.
Only this works for me:
hashArray = null;
hashArray = new Array(ENTRIES);
Every element is "null".
If you cannot even set all entries to -300, how could you ever store a complete HashEntry in all of them during the game? If (2 << 20) is too big, then the only solution would be to make it smaller. If the larger part of the entries is to remain null, you might as well not have them at all.

As to the problem observed by Gabor: I think this must be a result of assigning a new HashEntry on every replacement. The old entry will also continue to exist and occupy memory.
tomitank
Posts: 276
Joined: Sat Mar 04, 2017 12:24 pm
Location: Hungary

Re: tomitankChess - New JavaScript engine

Post by tomitank »

hgm wrote: The old entry will also continue to exist and occupy memory.
I use the "Always Replace" strategy.
The problem is that one entry uses too much memory.

This would be the ideal (14 bytes):

Code: Select all

		this.move			= move; // 20 bit
		this.flags			= flags; // 2 bit
		this.depth			= depth; // 7 bit
		this.score			= score; // 15 bit
		this.hashKeyLow		= hashLow; // 32 bit
		this.hashKeyHigh	= hashHigh; // 32 bit
In reality (~48 bytes):

Code: Select all

		this.move			= move; // 64 bit
		this.flags			= flags; // 64 bit
		this.depth			= depth; // 64 bit
		this.score			= score; // 64 bit
		this.hashKeyLow		= hashLow; // 64 bit
		this.hashKeyHigh	= hashHigh; // 64 bit
and on top of that the object itself (as you said).

This is around 100 MB with 2,097,152 entries (48 bytes × 2,097,152 ≈ 100 MB).

Code: Select all

hashArray = null;
hashArray = new Array(ENTRIES);
This means that the old array becomes eligible for garbage collection.
It is not certain that it will be deleted immediately.
That's why it can be 200 or 300 MB.
But that's just theory.