Is 8/16 bit computing inappropriate for chess programming?

Discussion of chess software programming and technical issues.

Moderator: Ras

User avatar
sje
Posts: 4675
Joined: Mon Mar 13, 2006 7:43 pm

Is 8/16 bit computing inappropriate for chess programming?

Post by sje »

Is 8/16 bit computing wholly inappropriate for chess programming? Perhaps it depends on what is being attempted.

Consider the following, quite popular series of 8/16 bit microcontrollers:

http://atmel.com/dyn/products/devices.asp?family_id=607

Prices in quantity start at under US$1 with the more powerful chips around US$5. Speeds are from 8 MHz to 32 MHz and the architecture runs most instructions in a single clock cycle. RAM is somewhat limited, but EEPROM runs up to 256 KB.

Now what might one do with, say, 64 of these processors? For one thing, you could put together a meta-Belle of sorts, with each processor handling the calculations for a single square. It would be possible to put together a real Belle clone if the chip used had enough I/O pins or if some extra latches were used. But a stronger chess player would make better use of the processing available at each square. And unlike the circuitry used to build Belle, these microcontrollers have built-in network capability that reduces the need for various buses.

Now consider the possibility of using 4,096 of these microcontrollers. That's one controller for each ordered pair of squares, so there's a controller for each possible move (promotions are a special case). With some creative wiring and coding, one could build a HiTech clone of sorts.
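
To make the move-per-controller mapping concrete, here is a minimal C sketch (my own illustration, not anything from Belle or HiTech; controller_index and controller_squares are made-up names) of how 4,096 controllers could be indexed by (from, to) square pairs:

Code:

    #include <stdio.h>

    /* Hypothetical mapping, for illustration only: controller k handles the
     * move whose from-square is k / 64 and whose to-square is k % 64,
     * with squares numbered 0..63 (a1 = 0, h8 = 63). */
    static int controller_index(int from, int to)        /* 0..4095 */
    {
        return from * 64 + to;
    }

    static void controller_squares(int index, int *from, int *to)
    {
        *from = index / 64;
        *to   = index % 64;
    }

    int main(void)
    {
        int from, to;
        int idx = controller_index(12, 28);   /* e2 -> e4 in 0..63 numbering */

        controller_squares(idx, &from, &to);
        printf("controller %d handles the move %d -> %d\n", idx, from, to);
        return 0;
    }

Each chip would then only ever reason about its own move, which is what keeps the per-chip code small.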
wgarvin
Posts: 838
Joined: Thu Jul 05, 2007 5:03 pm
Location: British Columbia, Canada

Re: Is 8/16 bit computing inappropriate for chess programming?

Post by wgarvin »

If I had $4000 or more to spend on dedicated hardware for computer chess, I'd probably invest it in something more like this:
http://gizmodo.com/gadgets/microwulf/su ... 296015.php

...but it wouldn't be anywhere near as different or interesting as a chess computer with 4096 microcontrollers in it!

[Edit: more direct link: http://www.calvin.edu/~adams/research/microwulf/ ]
Last edited by wgarvin on Fri Apr 25, 2008 12:35 am, edited 1 time in total.
User avatar
Bill Rogers
Posts: 3562
Joined: Thu Mar 09, 2006 3:54 am
Location: San Jose, California

Re: Is 8/16 bit computing inappropriate for chess programming?

Post by Bill Rogers »

Steven
A fact to remember is that a lot of stand-alone chess computers are running with 8-bit CPUs. :twisted: I think that 16 bits would make them even stronger. One thing to remember is that no C compiler will ever produce a program that runs as fast as one written directly in assembly or machine code.
Bill
User avatar
sje
Posts: 4675
Joined: Mon Mar 13, 2006 7:43 pm

Re: Is 8/16 bit computing?

Post by sje »

Yes, back in 1981 I wrote a chess program entirely in z80 assembly. It could only do three ply searches and was mostly a piece of crap. But it did play legal chess, it didn't crash, and it easily beat an early version of Sargon on the same hardware.

Back then, the development tools I had available were rather poor. But that's changed. Also, the z80 really wasn't very good for programming chess and I didn't do much with that processor afterwards.

At the moment I'm doing a homebrew 8/16 bit system based on the modern version of the venerable 6502 microprocessor. It uses the W65C02S, a fully static chip with several new and quite handy instructions plus a pair of new addressing modes. The chip is rated for 14 MHz operation but is reported to run at 20 MHz without problems. Note that it was back in the mid 1980s that David Kittinger wrote the Super Constellation program, which ran on a 5 MHz 6502 and managed a 2014 USCF rating.

The new W65C02S CPU supports, believe it or not, multiprocessor systems, as it has special signals and instructions for bus mastering and atomic read/modify/write interlocks. The chip costs US$8 in quantity one. Full-speed, inexpensive RAM and EEPROM chips are available (I have a number of these sitting in front of me right now).
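
As a generic illustration of why those atomic read/modify/write interlocks matter when several processors share memory, here is a small sketch using C11 atomics and pthreads (purely illustrative; it is not 65C02 code and makes no claims about that chip's instruction set):

Code:

    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    /* A bare-bones spinlock built on an atomic test-and-set, protecting a
     * shared counter.  Without an atomic read/modify/write, two processors
     * could both see the lock as free and both enter the critical section. */

    static atomic_flag lock = ATOMIC_FLAG_INIT;
    static long counter = 0;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            while (atomic_flag_test_and_set(&lock))   /* spin until we own it */
                ;
            counter++;                                /* critical section */
            atomic_flag_clear(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t[4];
        for (int i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        printf("counter = %ld (expected %d)\n", counter, 4 * 100000);
        return 0;
    }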
User avatar
Zach Wegner
Posts: 1922
Joined: Thu Mar 09, 2006 12:51 am
Location: Earth

Re: Is 8/16 bit computing inappropriate for chess programming?

Post by Zach Wegner »

sje wrote:Is 8/16 bit computing wholly inappropriate for chess programming? Perhaps it depends on what is being attempted.

Consider the following, quite popular series of 8/16 bit microcontrollers:

http://atmel.com/dyn/products/devices.asp?family_id=607

Prices in quantity start at under US$1 with the more powerful chips around US$5. Speeds are from 8 MHz to 32 MHz and the architecture runs most instructions in a single clock cycle. RAM is somewhat limited, but EEPROM runs up to 256 KB.

Now what might one do with, say, 64 of these processors? For one thing, you could put together a meta-Belle of sorts, with each processor handling the calculations for a single square. It would be possible to put together a real Belle clone if the chip used had enough I/O pins or if some extra latches were used. But a stronger chess player would make better use of the processing available at each square. And unlike the circuitry used to build Belle, these microcontrollers have built-in network capability that reduces the need for various buses.

Now consider the possibility of using 4,096 of these microcontrollers. That's one controller for each ordered pair of squares, so there's a controller for each possible move (promotions are a special case). With some creative wiring and coding, one could build a HiTech clone of sorts.
That's pretty cool. If I had a lot more free time and resources, I'd do something like this:
1) Get a huge number of these things, several hundred
2) Somehow connect all of these to make a "supercomputer"--just a big grid of chips, running like regular CPUs and not special-purpose chips.
3) Rewrite ZCT into a mailbox program
4) Run and optimize ZCT on these processors, hoping to get speed close to, say, a 1 GHz processor.

The question is, is (2) possible? And can it run NetBSD? :)
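
For anyone curious about step 3, here is a minimal C sketch of a mailbox board (a toy example of my own, not ZCT's actual code): the position lives in a small array with a sentinel border, so move generation is offset arithmetic and table lookups rather than 64-bit bitboard operations, which suits an 8-bit micro far better.

Code:

    #include <stdio.h>

    #define OFF_BOARD (-1)
    #define EMPTY       0
    #define W_KNIGHT    1          /* made-up piece code, just for the demo */

    /* 10x12 mailbox: the 8x8 playing area surrounded by sentinel squares,
     * so moves that run off the board hit OFF_BOARD instead of needing
     * rank/file arithmetic. */
    static int board[120];

    /* Map 0..63 (a1..h8) to the corresponding index in the 10x12 array. */
    static int mailbox64(int sq)
    {
        return 21 + (sq % 8) + 10 * (sq / 8);
    }

    static const int knight_offsets[8] = { -21, -19, -12, -8, 8, 12, 19, 21 };

    int main(void)
    {
        int i, sq, from, to;

        for (i = 0; i < 120; i++)
            board[i] = OFF_BOARD;
        for (sq = 0; sq < 64; sq++)
            board[mailbox64(sq)] = EMPTY;

        from = mailbox64(1);               /* put a knight on b1 */
        board[from] = W_KNIGHT;

        /* Knight moves are just offset additions plus a border test --
         * small-integer work that an 8-bit CPU handles comfortably. */
        for (i = 0; i < 8; i++) {
            to = from + knight_offsets[i];
            if (board[to] != OFF_BOARD)
                printf("knight move: mailbox %d -> %d\n", from, to);
        }
        return 0;
    }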
User avatar
Bill Rogers
Posts: 3562
Joined: Thu Mar 09, 2006 3:54 am
Location: San Jose, California

Re: Is 8/16 bit computing?

Post by Bill Rogers »

Unfortunately the Z80 computers of that time did not run fast at all, maybe anywhere from 2 to 4 megahertz.
The 6502s, along with the 6800s, are pretty good CPUs, but the only chip designed from the start to be a computer was the one designed by Joseph Weisbecker: the RCA 1802. Weisbecker ran hundreds of tests, mapping all the instructions of various programs to see what they did and how they handled memory, and then designed the hardware to optimise those instructions. One very big example of this, among other things, was DMA, or direct memory access. Although the 1802 was a CMOS device, it was also largely voltage independent, and the speed it ran at depended on the voltage that was applied. If that chip had been developed along the same lines as the other computer chips, it would be superior to anything made today.
Bill
User avatar
sje
Posts: 4675
Joined: Mon Mar 13, 2006 7:43 pm

Stealing from Belle

Post by sje »

On Belle: my understanding is that Ken Thompson did the software and Joe Condon did the hardware, and you can steal from both of them. After all, Deep Blue is nothing much more than Belle-on-a-chip, sped up and replicated.

Belle (final version) used a PDP-11 work-alike system that ran Unix, which in turn ran the upper-level (C language) Belle application. That application did the user interface, the opening book, and the tablebases, and connected to the special hardware array that handled the terminal subtrees of the search. So a microcontroller array doesn't have to do everything by itself.

The biggest sub-problem with doing a Belle-like system with off-the-shelf microcontrollers is that each Belle cell had several dozen signal pins, while most (i.e., cheap) microcontrollers have rather fewer. The ways around this are some combination of expensive, slow, or needing lots of extra parts.

But just imagine some of the algorithms you could try. For example, a 64-CPU array could do positional evaluation with a routine that evaluates the positional factors for a single square, slightly modified for each square. The 64 results could then be summed in log2(64) = 6 steps, giving a final score that would otherwise take much more time.

A 4,096 element array could operate both on a cell/move association and also be split into 64 cell/square arrays operating in parallel.
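
As a concrete illustration of that 6-step sum, here is a single-CPU C simulation of the combining pattern (my own sketch, not code for any real array):

Code:

    #include <stdio.h>

    int main(void)
    {
        int score[64];
        int i, stride, steps = 0;

        for (i = 0; i < 64; i++)
            score[i] = 1;                  /* stand-in per-square evaluations */

        /* Pairwise combine: strides 1, 2, 4, 8, 16, 32 -> six steps in all.
         * On the imagined array, every addition within one step would run on
         * a different chip at the same time; score[0] ends up with the total. */
        for (stride = 1; stride < 64; stride *= 2) {
            for (i = 0; i + stride < 64; i += 2 * stride)
                score[i] += score[i + stride];
            steps++;
        }

        printf("total = %d after %d combining steps\n", score[0], steps);
        return 0;
    }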
User avatar
sje
Posts: 4675
Joined: Mon Mar 13, 2006 7:43 pm

RCA 1802

Post by sje »

Build your own 1802 system from a US$100 kit:

http://www.sparetimegizmos.com/Hardware/Elf2K.htm
User avatar
sje
Posts: 4675
Joined: Mon Mar 13, 2006 7:43 pm

Sample microcontroller array application

Post by sje »

Here's a sample microcontroller array application using a 2x2 grid.

See: http://blog.makezine.com/archive/2008/0 ... ck_in.html

User avatar
Zach Wegner
Posts: 1922
Joined: Thu Mar 09, 2006 12:51 am
Location: Earth

Re: Stealing from Belle

Post by Zach Wegner »

sje wrote:On Belle: my understanding is that Ken Thompson did the software and Joe Condon did the hardware, and you can steal from both of them. After all, Deep Blue is nothing much more than Belle-on-a-chip, sped up and replicated.

Belle (final version) used a PDP-11 work-alike system that ran Unix, which in turn ran the upper-level (C language) Belle application. That application did the user interface, the opening book, and the tablebases, and connected to the special hardware array that handled the terminal subtrees of the search. So a microcontroller array doesn't have to do everything by itself.

The biggest sub-problem with doing a Belle-like system with off-the-shelf microcontrollers is that each Belle cell had several dozen signal pins, while most (i.e., cheap) microcontrollers have rather fewer. The ways around this are some combination of expensive, slow, or needing lots of extra parts.

But just imagine some of the algorithms you could try. For example, a 64-CPU array could do positional evaluation with a routine that evaluates the positional factors for a single square, slightly modified for each square. The 64 results could then be summed in log2(64) = 6 steps, giving a final score that would otherwise take much more time.

A 4,096 element array could operate both on a cell/move association and also be split into 64 cell/square arrays operating in parallel.
If I were to get that many chips, I'd probably use each one as a fully independent processor, all working on a tree. There's already so much speed given up by hardware like this that any solution dedicating chips to one specific function is doomed to inefficiency (not the point, I know).

Not that it isn't interesting. It's just that I'd probably rather spend the money on an FPGA if I were to go in that direction, which I've actually been thinking about, even earlier today before you posted.
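
For what that fully-independent approach might look like in outline, here is a rough C sketch of a static root split (all names and numbers are made up; search_root_move stands in for a real alpha-beta search):

Code:

    #include <stdio.h>

    #define N_WORKERS    8            /* pretend each outer iteration is one chip */
    #define N_ROOT_MOVES 30

    /* Made-up stand-in for a real search of one root move. */
    static int search_root_move(int move)
    {
        return (move * 7) % 23 - 11;  /* fake score, for the demo only */
    }

    int main(void)
    {
        int best_score = -1000000, best_move = -1;
        int w, m, s;

        for (w = 0; w < N_WORKERS; w++) {              /* "worker" w */
            for (m = w; m < N_ROOT_MOVES; m += N_WORKERS) {
                s = search_root_move(m);
                if (s > best_score) {                  /* on real hardware this  */
                    best_score = s;                    /* merge would happen on  */
                    best_move  = m;                    /* a host after the chips */
                }                                      /* report back            */
            }
        }
        printf("best move %d, score %d\n", best_move, best_score);
        return 0;
    }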