[OPTIONS]
Network weights file path=<autodiscover>
Number of worker threads=2
NNCache size=200000
NN backend to use=opencl
Scale thinking time=2.660000
Move time overhead in milliseconds=100
Time weight curve peak ply=26.200001
Time weight curve width left of peak=82.000000
Time weight curve width right of peak=74.000000
Ponder=false
Minibatch size for NN inference=256
Max prefetch nodes, per NN call=32
Cpuct MCTS option=3.400000
Initial temperature=0.000000
Moves with temperature decay=0
Add Dirichlet noise at root node=false
Display verbose move stats=false
Aversion to search if change unlikely=1.470000
First Play Urgency Reduction=0.900000
Length of history to include in cache=1
Policy softmax temperature=2.200000
Allowed node collisions, per batch=32
Out-of-order cache backpropagation=false
Ignore alternatives to checkmate=false
Configuration file path=lc0.config
Which value should I edit if the engine exceeds the time limit? "Scale thinking time"? Should I increase or decrease it?
If you're not on the CUDA backend but on opencl or blas, then setting "Minibatch size for NN inference=16" should help: those backends evaluate positions much more slowly, so a large minibatch can make a single NN call overrun the allotted move time.
If you are on CUDA, it's worth investigating further, since that would point to an actual bug.
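For illustration, here is what the relevant line in lc0.config would look like after the change suggested above (the value 16 comes from the advice in this thread; the other lines stay as posted). This is a sketch of the edit, not a universally tuned value:

```ini
[OPTIONS]
NN backend to use=opencl
; reduced from the default 256 so one NN call fits in the move time budget
Minibatch size for NN inference=16
```

If time losses persist after this change, reducing "Scale thinking time" below 2.66 would shorten the time budget per move, but the minibatch size is the first thing to try on a non-CUDA backend.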