
Re: Texel 1.07

Posted: Wed Oct 11, 2017 2:42 pm
by styx
About the UCI strength setting:

Is it possible that Texel plays with full strength unless you lower the value below 100?

I let Texel play itself with level 1000 against level 100 and both games were drawn.

If so, why is the value range between 0 and 1000?

I noticed it when I played Texel in Droidfish at level 200 and got beaten up so badly, multiple times. Then I checked the PC version and it seems to be the same.

Re: Texel 1.07

Posted: Wed Oct 11, 2017 7:22 pm
by petero2
styx wrote:About the UCI strength setting:

Is it possible that Texel plays with full strength unless you lower the value below 100?

I let Texel play itself with level 1000 against level 100 and both games were drawn.

If so, why is the value range between 0 and 1000?

I noticed it when I played Texel in Droidfish at level 200 and got beaten up so badly, multiple times. Then I checked the PC version and it seems to be the same.
I can confirm that if you use more than one search thread, the strength setting has a much smaller effect in 1.07 than in 1.06. When I tested, the difference between strength=100 and full strength was only around 200 Elo when using 2 cores. When using one core, the difference between strength=100 and full strength is huge.

So a workaround should be to set Threads=1 in the UCI settings. This also has the advantage of saving electricity and battery power.
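In a UCI console the workaround could look roughly like this (option names should be checked against the engine's own "uci" output; I am assuming here that the strength option is called "Strength"):

```
uci
setoption name Threads value 1
setoption name Strength value 100
isready
```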

For the next version of Texel I might change this so that "Threads" is ignored when strength<1000, forcing the engine to use only one thread in that case.

Re: Texel 1.07

Posted: Wed Oct 11, 2017 8:12 pm
by styx
I can confirm that. I just played a test game against level 100 at one core and it felt at least 1500 Elo weaker.

Just out of curiosity: I'm no good at reading code, so may I ask why increasing the number of cores makes it play so much better?

Your workaround idea seems reasonable.

Thank you for all your work :)

Re: Texel 1.07

Posted: Wed Oct 11, 2017 9:23 pm
by petero2
styx wrote:I can confirm that. I just played a test game against level 100 at one core and it felt at least 1500 Elo weaker.

Just out of curiosity: I'm no good at reading code, so may I ask why increasing the number of cores makes it play so much better?
The basic problem is that the strength setting is ignored by all search threads except the first. This did not cause any problems with the old (1.06) search algorithm, but when I switched to lazy SMP in 1.07, a side effect was that the search threads running at full strength started to contribute a lot to the overall playing strength.
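A toy sketch (my own illustration, not Texel's actual code) of why this happens: if each lazy SMP thread produces a result and the engine uses the best one, a single helper thread that ignores the strength cap brings back full strength:

```python
# Toy model of the 1.07 bug: only search thread 0 honors the UCI
# strength setting; helper threads search at full strength (1000).

def search(effective_strength):
    # Stand-in for a real search: the quality of the result simply
    # equals the strength the thread was allowed to use.
    return effective_strength

def lazy_smp_result(num_threads, strength_setting):
    results = []
    for tid in range(num_threads):
        # The bug: the limit is applied only to the first thread.
        limit = strength_setting if tid == 0 else 1000
        results.append(search(limit))
    # The engine plays the best result found by any thread.
    return max(results)

assert lazy_smp_result(1, 100) == 100    # one thread: handicap applies
assert lazy_smp_result(2, 100) == 1000   # two threads: full strength wins
```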

It was a mistake not to test this before the 1.07 release. I have updated the Android Texel app to fix this problem. Thanks for reporting this bug.

Re: Texel 1.07

Posted: Mon Oct 23, 2017 6:34 pm
by Jamal Bubker
Thank you a lot, Peter! :D

Re: Texel 1.07

Posted: Fri Nov 03, 2017 12:12 am
by yorkman
Hi Peter,

To use the cluster version of Texel, I assume it only works when started from the command prompt?

I'd like to be able to use it in a GUI like Aquarium. Is this possible? If not, will you perhaps make it possible in the next release? I think it'd be hard to use 4 engines, for example, and try to set up each one with the position you want searched... I assume only host1 (in your Windows example) would show the searches from all 4 engines and the best move?

Great work regardless! There are lots of users interested in a cluster version and I'd be happy to test it out, but I need to know more about how to use it in Windows and Aquarium, or another GUI if Aquarium's not compatible.

Re: Texel 1.07

Posted: Fri Nov 03, 2017 6:55 am
by Dann Corbit
yorkman wrote:Hi Peter,

To use the cluster version of Texel, I assume it only works when started from the command prompt?

I'd like to be able to use it in a GUI like Aquarium. Is this possible? If not, will you perhaps make it possible in the next release? I think it'd be hard to use 4 engines, for example, and try to set up each one with the position you want searched... I assume only host1 (in your Windows example) would show the searches from all 4 engines and the best move?

Great work regardless! There are lots of users interested in a cluster version and I'd be happy to test it out, but I need to know more about how to use it in Windows and Aquarium, or another GUI if Aquarium's not compatible.
Works fine for me. Did you install it as a UCI engine?
Click on the Engines option (it looks like a brain).
Click on the big green Plus sign to add an engine.
Browse to the engine and add it as UCI.

Re: Texel 1.07

Posted: Fri Nov 03, 2017 3:22 pm
by yorkman
Sorry if I didn't explain properly.

I know how to add an engine in Aquarium (been using it for years). But how would I add it and still have cluster functionality, since to start the engines you have to run them on all computers with:

smpd -d 0
mpiexec -hosts host1,host2,host3,host4 /path/to/texel (example: C:\Engines\texel.exe)

...instead of the engine itself. This suggests that mpiexec starts the engine. Obviously mpiexec is not a UCI engine, so it won't add to any GUI.

I'm also confused about this comment in the readme.txt:

"Running on a cluster is an advanced functionality and probably requires some knowledge of cluster systems to set up."

If all we have to do is run the above two executables and have MPI installed, then it's very easy and doesn't require any knowledge of cluster systems. Or does this suggest that we have to use Windows Server 2012, set up native MS clustering on all the servers, and then run those executables?

Re: Texel 1.07

Posted: Fri Nov 03, 2017 5:06 pm
by petero2
yorkman wrote:Sorry if I didn't explain properly.

I know how to add an engine in Aquarium (been using it for years). But how would I add it and still have cluster functionality, since to start the engines you have to run them on all computers with:

smpd -d 0
mpiexec -hosts host1,host2,host3,host4 /path/to/texel (example: C:\Engines\texel.exe)

...instead of the engine itself. This suggests that mpiexec starts the engine. Obviously mpiexec is not a UCI engine, so it won't add to any GUI.
"smpd" must be run on all computers but mpiexec should only be run on the one computer where you want to run the GUI. When you start texel in cluster mode using mpiexec, it should report the total number of cores found. Have you been able to run the mpiexec command from a command prompt and get output like "info string cores:48 threads:96"? If that works the next step is to make it work in a GUI.

It is correct that "mpiexec" by itself is not a UCI engine, but the full mpiexec command including the command line arguments is a UCI engine, so it can be installed as one in a GUI, provided that the GUI lets you specify command line arguments for the engine. I have tried this in Arena and it worked. I don't know about other GUIs.

If your GUI does not allow specifying command line arguments, it might be possible to create a .bat file containing the mpiexec command and use the .bat file as a UCI engine in the GUI.
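Such a wrapper could look roughly like this (hostnames and the engine path are placeholders; the exact -hosts syntax depends on your MPI version):

```bat
@echo off
rem Hypothetical wrapper so a GUI can launch cluster Texel as one "engine".
rem smpd must already be running on every host listed below.
mpiexec -hosts 2 host1 host2 C:\Engines\texel.exe
```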

Note that every time you want to use the configured engine in the GUI, you must first start smpd on all computers in the cluster. There might be a way to automate that but I don't know how.
yorkman wrote:I'm also confused about this comment in the readme.txt:

"Running on a cluster is an advanced functionality and probably requires some knowledge of cluster systems to set up."
I wrote that before I figured out the steps to set up MS-MPI in Windows, so it is probably a little less true now than it was when I wrote it. However, setting up cluster Texel is a lot more advanced than just installing a regular UCI engine, and if something goes wrong it is probably quite hard to troubleshoot unless you have some understanding of MPI and clusters in general.

Re: Texel 1.07

Posted: Fri Nov 03, 2017 7:52 pm
by yorkman
Unfortunately I don't see a way to add a batch file as an engine in Aquarium. Perhaps some of the other members can point me to a way.

I've installed MS-MPI 8.1 on both hosts (rita and suzy).
I have placed texel64cl.exe in c:\temp on both hosts 'rita' and 'suzy'.
I ran 'smpd -d 0' on hosts 'rita' and 'suzy'. Then I ran the command below from 'rita' for just one host and got:

C:\>mpiexec -hosts 1 rita c:\temp\texel64cl.exe
info string cores:36 threads:36

Good.

Tried it with the other host 'suzy':
C:\>mpiexec -hosts 1 suzy c:\temp\texel64cl.exe
info string cores:4 threads:4

Good. So both work as long as I specify one host only.

But when I try both hosts, it just sits there trying to connect for a few minutes before finally returning:

C:\>mpiexec -hosts 2 rita suzy c:\temp\texel64cl.exe

job aborted:
[ranks] message

[0] terminated

[1] fatal error
Fatal error in MPI_Send: Other MPI error, error stack:
MPI_Send(buf=0x0000000000B776A0, count=2, MPI_INT, dest=0, tag=0, MPI_COMM_WORLD) failed
[ch3:sock] failed to connnect to remote process 035b5cb0-cb93-4687-9a8d-afa643fe4a10:0
unable to connect to 192.168.0.11 on port 53191, A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. (errno 10060)
unable to connect to 192.168.0.11 on port 53191, A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. (errno 10060)

---- error analysis -----

[1] on suzy
mpi has detected a fatal error and aborted c:\temp\texel64cl.exe

---- error analysis -----

I realize your readme says to use -hosts host1,host2, but this is invalid: the /? output for mpiexec shows the hosts must be separated by spaces, not commas, and you also have to precede them with the number of hosts. So I was getting other errors when using your example (outdated, I assume).

So I'm guessing I'm not specifying the parameters correctly for two hosts since /? says:

Usage:

mpiexec [options] executable [args] [ : [options] exe [args] : ... ]
mpiexec -configfile <file name>

Common options:

-n <num_processes>
-env <env_var_name> <env_var_value>
-wdir <working_directory>
-hosts n host1 [m1] host2 [m2] ... hostn [mn]
-cores <num_cores_per_host>
-lines
-debug [0-3]

Examples:

mpiexec -n 4 pi.exe
mpiexec -hosts 1 server1 master : -n 8 worker (here only 1 host is specified, but there are two entries following it, so I'm confused.)