Ckappe wrote: ↑Sat Feb 20, 2021 12:22 am
Ckappe wrote: ↑Fri Feb 19, 2021 11:26 pm
George Sobala wrote: ↑Fri Feb 19, 2021 6:02 pm
Ckappe wrote: ↑Fri Feb 19, 2021 5:18 pm
SF is likely not at its strongest on weak ARM hardware like the M1, compared to laptops with 20xx/30xx graphics.
So dazzle me: how many nodes (from start-pos) can your M1 laptop run before the battery runs out, with the latest SF and default network?
And btw, the strained argument of a "long flight" is also a bit moot, as most long-hauls these days have 110/220 V outlets by the seats...
60G to 5% charge left.
How long did it take? (Just starting a test on my ZenBook Duo laptop to compare.)

As it does 17 Mnps (on battery power), I expect 60G will be reached in about an hour. I'll measure how much of the battery that drains.
Test on my Ryzen laptop is done now. 60G nodes from the start position in SF-NNUE took a little more than an hour, and almost exactly 50% of the battery was left. So I could do the same analysis and still watch some movies on the flight (if for some obscure reason I had bought a ticket in monkey-class and did not have access to a power plug by the seat).
The 60G took about 3 hours.
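For anyone who wants to reproduce this kind of fixed-node run, here is a minimal sketch using python-chess; the engine path, thread/hash settings and node budget are placeholders, not what either of us actually used. As a sanity check on the numbers above: at 17 Mnps, 60G nodes is roughly 60e9 / 17e6 ≈ 3500 s, i.e. just under an hour, consistent with the Ryzen figure.

```python
# Minimal sketch of the fixed-node "battery test" discussed above,
# using python-chess (pip install chess). The engine path and the
# 60G node budget are placeholders -- adjust for your own binary.
import time
import chess
import chess.engine

ENGINE_PATH = "./stockfish"      # hypothetical path to a UCI binary
NODE_BUDGET = 60_000_000_000     # "60G" nodes from the start position

board = chess.Board()            # standard start position
with chess.engine.SimpleEngine.popen_uci(ENGINE_PATH) as engine:
    engine.configure({"Threads": 8, "Hash": 1024})   # tune for your laptop
    start = time.time()
    info = engine.analyse(board, chess.engine.Limit(nodes=NODE_BUDGET))
    elapsed = time.time() - start
    nodes = info.get("nodes", NODE_BUDGET)
    print(f"{nodes} nodes in {elapsed:.0f}s -> {nodes / elapsed / 1e6:.1f} Mnps")
```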
I have moved on from the plane, and the airport with all those different waiting areas one must pass through (glad I didn't have power socket anxiety) and am now sitting (posing?) under a tree in the countryside, soaking in the quiet and listening to distant birdsong, sipping my soy caramel latte-chino.
Well done. Though I am curious why you did not finish the test. Battery meters are not the most reliable indicators, are they? I wonder if you were listening to the fan-noise and beginning to have concerns about the thermal stability of the system.
Of course you were right, Stockfish was the wrong engine for me to test on the M1. It is not particularly arm64-friendly. Ronald de Man has done a lot better with NEON code for NNUE in CFish, which of course uses the same net, search and HCE as SF and scores higher on rating lists. I get about 30% higher nps with CFish and will start a full test tomorrow: 80G seems pretty likely, maybe I will hit 90G.
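A rough way to compare the two builds is to run each on the same position for a fixed wall-clock budget and compare the reported nps. A minimal sketch, again with python-chess and placeholder binary paths; CFish speaks UCI just like Stockfish, so the same driver works for both.

```python
# Rough nps comparison between two UCI builds (paths are placeholders).
import chess
import chess.engine

ENGINES = {"Stockfish": "./stockfish", "CFish": "./cfish"}  # hypothetical paths
board = chess.Board()

for name, path in ENGINES.items():
    with chess.engine.SimpleEngine.popen_uci(path) as engine:
        engine.configure({"Threads": 8, "Hash": 1024})
        info = engine.analyse(board, chess.engine.Limit(time=60))  # 60 s each
        print(f"{name}: {info.get('nps', 0) / 1e6:.1f} Mnps, "
              f"{info.get('nodes', 0) / 1e9:.2f}G nodes")
```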
As far as a nodes test with lc0 goes, you will win hands down, quite possibly by on the order of 100x or so. Without nVidia/CUDA I cannot compete with that. But I am very interested in how long your system can cope with lc0 at full throttle running e.g. J94-100, and how many nodes it generates. However, on both your system and mine, Stockfish/CFish are the stronger engines.
A more realistic use of Leela is by someone who actually plays chess rather than just runs chess engines. He/she/they would be using Nibbler with Leela running constantly as they flip to and fro through variations they wish to explore. Whilst the nps generated by a full-throttle Leela on an RTX are very welcome, in the disconnected-laptop scenario this is of little use if the battery time is measured in minutes rather than hours. The positional insights given by Leela running slower but more efficiently may be preferable - any key line will always be checked by SF/CFish in any case. So once I have done the CFish test I will give Leela a whirl, running first on the GPU (which I suspect will give poor performance with appalling power use) and then on the CPU (which may be only slightly slower but may last longer).
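That GPU-versus-CPU comparison could be scripted along the same lines. The sketch below is an assumption-heavy illustration: the backend names ("blas" for CPU, "opencl" for GPU) and the weights argument are placeholders, so check `lc0 --help` for what your particular build actually supports, and point the weights at wherever the J94-100 network file was downloaded.

```python
# Sketch: push the same node budget through two lc0 backends and compare
# wall-clock time (watching the battery meter gives a rough power-draw feel).
# Backend names and the weights path are assumptions, not verified settings.
import time
import chess
import chess.engine

BACKENDS = ["blas", "opencl"]          # assumed CPU / GPU backends
WEIGHTS = "./J94-100"                  # placeholder path to the network file
NODE_BUDGET = 1_000_000                # modest budget for a laptop run

board = chess.Board()
for backend in BACKENDS:
    cmd = ["lc0", f"--backend={backend}", f"--weights={WEIGHTS}"]
    with chess.engine.SimpleEngine.popen_uci(cmd) as engine:
        start = time.time()
        engine.analyse(board, chess.engine.Limit(nodes=NODE_BUDGET))
        print(f"{backend}: {NODE_BUDGET} nodes in {time.time() - start:.0f}s")
```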
Finally one must consider other details of a system.
A MacBook Air M1 weighs 1.29kg (2.8lb), has a 2560x1600 IPS screen, is completely silent, and costs $999.
What are your specs / cost?