It had a transposition table, but no hash move. He had no way to feed a suggested move into the search. Deep Thought / Deep Blue had exactly the same issue, and in fact, the Deep * programs from Hsu didn't even use transposition tables in the hardware. DB2 had the ability, but he never built the chess boards with memory on them, although the processors could probe hash if they had it. But again, no hash move ordering. The problem was that either machine searched nodes faster than they could be looked up.

sje wrote: I seem to recall that the final Belle hardware did have an integral transposition table; one MB of RAM. It was used both for move ordering and score adjustment along with repetition detection.

bob wrote: Belle could only use a very simple move ordering in the hardware, no killers, no hash move, just MVV/LVA ordered captures, then the rest of the moves.
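For readers unfamiliar with the term, the MVV/LVA ordering described above (captures sorted Most Valuable Victim first, with Least Valuable Attacker as the tiebreak, then everything else) can be sketched roughly like this. The piece values and the (attacker, victim) move representation here are illustrative, not Belle's actual encoding:

```python
# Sketch of Belle-style hardware move ordering: MVV/LVA captures first,
# then the remaining (quiet) moves in generation order.
# Piece values are the usual textbook ones, not Belle's tables.
VALUE = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def mvv_lva_order(moves):
    """moves: list of (attacker, victim) pairs; victim is None for quiet moves."""
    captures = [m for m in moves if m[1] is not None]
    quiets = [m for m in moves if m[1] is None]
    # Most valuable victim first; cheapest attacker breaks ties.
    captures.sort(key=lambda m: (-VALUE[m[1]], VALUE[m[0]]))
    return captures + quiets
```

So PxQ gets tried before NxR, which gets tried before QxR, and quiet moves come last. With no hash move slot, a transposition-table hit could adjust the score but never promote its stored best move to the front of this list, which is exactly the limitation being discussed.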
Question about Program called Belle
-
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Question about Program called Belle
-
- Posts: 2251
- Joined: Wed Mar 08, 2006 8:47 pm
- Location: Hattingen, Germany
Re: Question about Program called Belle
If Bob and others are fine with that, I would like to collect such stuff about Belle, Blitz etc. in the chess programming wiki. Of course Bob, Steven (all the Lisp stuff), you and others are welcome to write something in the cpw as well. So far I put some historic stuff and external links (images) inside the wiki. Please feel free to register there and to correct and edit things and to create new pages. If you don't want to become a member there, for whatever reason, I would like to quote relevant posts in cpw with reference.

Carey wrote: Interesting bit of trivia.

bob wrote: Belle didn't count nodes, so they could not display node counts or NPS on the terminal, so Joe rigged up an integrating counter that essentially counted pulses per second. He hot-wired it to the chess hardware clock, as each major tick was one node. It was interesting to watch when he first hooked it up as nobody had gone that fast.
Bob, I've mentioned this to you before and I'll probably keep doing it occasionally.... but you really should write a book or at least create a blog of some sort and write down all these bits of trivia and tidbits from 40 years of chess programming & tournaments.
For instance I have collected links to the CCC-Archives of the Symbolic stuff, Steven posted a few years ago:
http://chessprogramming.wikispaces.com/Symbolic
Critique and suggestions welcome...
Thanks,
Gerd
-
- Posts: 313
- Joined: Wed Mar 08, 2006 8:18 pm
Re: Question about Program called Belle
The design was good enough to be the best for several years, with just software tweaks. That far exceeded his development time. Sounds like a decent ROI.

bob wrote: Perhaps a better question is "how many nodes per hour of development time did each get?" Designing hardware burns a lot of hours. And then the design is fixed and changing it is a bit of a pain compared to software.

Carey wrote: Interesting bit of trivia.

bob wrote: Belle didn't count nodes, so they could not display node counts or NPS on the terminal, so Joe rigged up an integrating counter that essentially counted pulses per second. He hot-wired it to the chess hardware clock, as each major tick was one node. It was interesting to watch when he first hooked it up as nobody had gone that fast.
Bob, I've mentioned this to you before and I'll probably keep doing it occasionally.... but you really should write a book or at least create a blog of some sort and write down all these bits of trivia and tidbits from 40 years of chess programming & tournaments.
Brag brag brag...

bob wrote: But it didn't take long. In '84 we were doing 80K, by '85 we were doing 160K or so, and by '86 we had passed him, although Chiptest was just being built and took the NPS battle to a whole new level.

How many nodes per $ did Belle get compared to your $$$$$ Crays??
As for how many nodes per hour of development time... Look at it this way... He could either do his real job that he was getting paid to do, or work on his hobby instead. Work vs. Play.

Even the few months that Hsu put into developing initial ChipTest circuit was reasonable. (And much of his time was actually spent doing manual chip layout because he didn't have the tools to do it automatically.)
If you have the hardware resources available to actually build it and have it be competitive in performance with regular CPUs, then it looks like developing the hardware design itself is time well spent.
Getting back to Ken & Joe developing Belle... It is true it took them quite a while. They did two test versions just to test the concept. And they were pioneering the hardware ideas (whereas Hsu could build upon what they did.)
So they did indeed spend a lot of time, but that was because they were first for what they were doing. (Belle was far enough beyond Cheops I'm not comparing the two.)
With the basic ideas already available (allowing for innovation, of course), and modern circuit design tools, if you have cutting-edge chip fabrication available cheaply, it certainly looks like a practical and time-effective way to do it.
You do lose a bit of flexibility in the eval, of course, but the deeper search makes up for that.
Just do a new shrink version every couple years.... Don't even have to add new features if you don't want to.
You could even go speculative, like Greenblatt did. You have your regular chess program with smarts doing normal searching and you have the hardware searching deeper for tactics.
If Ken had wanted to, he could have built Belle with his own money. You probably couldn't have even seen a Cray for the money you had. (I'm still amazed that Cray spent so much money over the years supporting computer chess.)
Sounds like Ken won.
Of course, Hsu beat Ken on the nodes per $. It cost Hsu darn near nothing for the first version of ChipTest.
-
- Posts: 4675
- Joined: Mon Mar 13, 2006 7:43 pm
Re: Question about Program called Belle
Come to think of it, it looks like you're right. I should have checked the report. If there was any move storage, it must have been in the LSI-11 part of the system.

bob wrote: It had a transposition table, but no hash move. He had no way to feed a suggested move into the search. Deep Thought / Deep Blue had exactly the same issue, and in fact, the Deep * programs from Hsu didn't even use transposition tables in the hardware. DB2 had the ability, but he never built the chess boards with memory on them, although the processors could probe hash if they had it. But again, no hash move ordering. The problem was that either machine searched nodes faster than they could be looked up.
From the Unix /etc/proverbs file:
"Pale ink is better than the best memory."
Re: Question about Program called Belle
Re: Question about Program called Belle
Carey wrote: The design was good enough to be the best for several years, with just software tweaks. That far exceeded his development time. Sounds like a decent ROI.

bob wrote: Perhaps a better question is "how many nodes per hour of development time did each get?" Designing hardware burns a lot of hours. And then the design is fixed and changing it is a bit of a pain compared to software.

Carey wrote: Interesting bit of trivia.

bob wrote: Belle didn't count nodes, so they could not display node counts or NPS on the terminal, so Joe rigged up an integrating counter that essentially counted pulses per second. He hot-wired it to the chess hardware clock, as each major tick was one node. It was interesting to watch when he first hooked it up as nobody had gone that fast.

Bob, I've mentioned this to you before and I'll probably keep doing it occasionally... but you really should write a book or at least create a blog of some sort and write down all these bits of trivia and tidbits from 40 years of chess programming & tournaments.

Brag brag brag...

bob wrote: But it didn't take long. In '84 we were doing 80K, by '85 we were doing 160K or so, and by '86 we had passed him, although Chiptest was just being built and took the NPS battle to a whole new level.

How many nodes per $ did Belle get compared to your $$$$$ Crays??
OK, Belle was a year-long project. It hit the streets in 1980 at the WCCC event that year and won it. In 1982 Belle and CB tied for first at the ACM event, and in 1983/1984 we won them all. So two years at the head of the pack, for one big year of development by at least two people. Tough call as far as ROI goes. Deep Thought, playing its first games at the 1986 ACM event (they didn't make the WCCC and had enough bugs that they didn't do that well in 1986 at all), took over in 1987 and lasted until they stopped development.
The main issue is that it is far easier to try new ideas in software. Once you "cast it in silicon" it is fixed and is a major expense to change...
That has never impressed me. It seems much more reasonable to apply all available power to searching the complete tree. Sun Phoenix also took this approach, with a "minix" tactical searcher that was much faster than the normal engine. And arbitrating between them is problematic.
As for how many nodes per hour of development time... Look at it this way... He could either do his real job that he was getting paid to do, or work on his hobby instead. Work vs. Play.
Even the few months that Hsu put into developing initial ChipTest circuit was reasonable. (And much of his time was actually spent doing manual chip layout because he didn't have the tools to do it automatically.)
If you have the hardware resources available to actually build it and have it be competitive in performance with regular CPUs, then it looks like developing the hardware design itself is time well spent.
Getting back to Ken & Joe developing Belle... It is true it took them quite a while. They did two test versions just to test the concept. And they were pioneering the hardware ideas (whereas Hsu could build upon what they did.)
So they did indeed spend a lot of time, but that was because they were first for what they were doing. (Belle was far enough beyond Cheops I'm not comparing the two.)
With the basic ideas already available (allowing for innovation, of course), and modern circuit design tools, if you have cutting-edge chip fabrication available cheaply, it certainly looks like a practical and time-effective way to do it.
You do lose a bit of flexibility in the eval, of course, but the deeper search makes up for that.
Just do a new shrink version every couple years.... Don't even have to add new features if you don't want to.
You could even go speculative, like Greenblatt did. You have your regular chess program with smarts doing normal searching and you have the hardware searching deeper for tactics.
If Ken had wanted to, he could have built Belle with his own money. You probably couldn't have even seen a Cray for the money you had. (I'm still amazed that Cray spent so much money over the years supporting computer chess.)
Sounds like Ken won.
Of course, Hsu beat Ken on the nodes per $. It cost Hsu darn near nothing for the first version of ChipTest.
Re: Question about Program called Belle
Re: Question about Program called Belle
I still have some old Belle output Ken sent me from time to time as we compared results. Belle was famous for the 2-move PVs it produced, as that was all it could show. The first two plies were software, the rest hardware. Output looks funny and must have made debugging a whole lot of fun. He could tell the hardware how deep to search, so I guess one could pass a value of "zero" which says captures-only + eval.

sje wrote: Come to think of it, it looks like you're right. I should have checked the report. If there was any move storage, it must have been in the LSI-11 part of the system.

bob wrote: It had a transposition table, but no hash move. He had no way to feed a suggested move into the search. Deep Thought / Deep Blue had exactly the same issue, and in fact, the Deep * programs from Hsu didn't even use transposition tables in the hardware. DB2 had the ability, but he never built the chess boards with memory on them, although the processors could probe hash if they had it. But again, no hash move ordering. The problem was that either machine searched nodes faster than they could be looked up.
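A depth-zero, captures-only search of the kind being guessed at here is essentially a bare quiescence search: stand pat on the static eval, then recurse on captures only. A minimal negamax sketch, assuming a hypothetical position object with evaluate/captures/make/unmake methods (not any real engine's API):

```python
# Minimal captures-only ("depth zero") search: use the static eval as a
# stand-pat bound, then try only capture moves, in negamax form.
# The `pos` interface (evaluate, captures, make, unmake) is hypothetical.
def qsearch(pos, alpha, beta):
    stand_pat = pos.evaluate()
    if stand_pat >= beta:
        return beta              # already good enough to cut off
    alpha = max(alpha, stand_pat)
    for move in pos.captures():
        pos.make(move)
        score = -qsearch(pos, -beta, -alpha)
        pos.unmake(move)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha
```

The stand-pat test is what makes "captures-only + eval" sensible: a side with no good capture simply keeps its static score rather than being forced to move.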
-
- Posts: 4675
- Joined: Mon Mar 13, 2006 7:43 pm
MacHack VI and Tech
I was once able to locate the object file for MacHack VI in an online archive of DECUS (the DEC User Group). Since it was written in Midas (an MIT-developed assembler for the PDP-6), a simple reverse assembly will give you the uncommented source. All the program's strings are readable in the object text.
Many years ago I corresponded with Gillogly, the author of the CMU Technology Chess Program (i.e., "TECH"). He mentioned that he still had the source, but it was on an old, old tape readable only by machinery that was no longer available outside a museum.
Greenblatt's MacHack VI report gives almost enough information to re-create the program except for missing data on the exact details of the plausible move scoring (functions and weights). Tech would be easy to re-create as it wasn't much more than a move generator with position update/downdate routines; the only thing unknown is the exact move ordering used.
Also of interest is the possibility of emulating Belle on current hardware, something I've thought of doing myself. You could take a four-core high-end desktop and have one thread per core, with each thread simulating one of the four 16-square Belle chess arrays. If Ken were to provide the contents of the positional weight ROMs, then I think the emulation would be pretty good, although not as fast as the real thing.
And then there's my Project 1792 machine. No, the name is not the year 1792, but comes from the fact that there are exactly 1,792 possible from/to square move combinations on a chessboard. (Add minor kludges for underpromotion and castling.) Build an array of 1,792 single-chip processors and have each of these chips connected to another array of 1,792 chips, for about 3.2 million nodes in total. With a system like this, one could do a full two-ply tree analysis with full minimax (not just A/B) in only a couple of cycles.
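The 1,792 figure is easy to check: count the ordered from/to square pairs connected either by a rank, file, or diagonal line (which covers queen, rook, bishop, and king destinations) or by a knight jump. A quick sketch:

```python
# Count ordered (from, to) pairs on an 8x8 board that some chess move
# could ever use: sliding geometry (ranks, files, diagonals) plus knight
# jumps. King and queen destinations are subsets of the sliding set.
def count_move_pairs():
    count = 0
    for f in range(64):
        fr, fc = divmod(f, 8)
        for t in range(64):
            if t == f:
                continue
            tr, tc = divmod(t, 8)
            dr, dc = tr - fr, tc - fc
            on_line = dr == 0 or dc == 0 or abs(dr) == abs(dc)
            knight_jump = {abs(dr), abs(dc)} == {1, 2}
            if on_line or knight_jump:
                count += 1
    return count

print(count_move_pairs())  # 1792
```

That breaks down as 896 rook-style pairs, 560 bishop-style pairs, and 336 knight pairs. And since each first-level chip fans out to 1,792 second-level chips, the two-ply array holds 1,792 × 1,792 = 3,211,264 leaf positions, i.e. about 3.2 million.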
Re: Question about Program called Belle
That was just the design itself.

bob wrote:
Carey wrote: The design was good enough to be the best for several years, with just software tweaks. That far exceeded his development time. Sounds like a decent ROI.
OK, Belle was a year-long project. It hit the streets in 1980 at the WCCC event that year and won it. In 1982 Belle and CB tied for first at the ACM event, and in 1983/1984 we won them all. So two years at the head of the pack, for one big year of development by at least two people. Tough call as far as ROI goes.
Ken said that he could have simply rebuilt it with faster parts and more than doubled the speed. But it wasn't worth the effort by that time because his interest was waning.
Same thing would apply to chip designs and process shrinks. Once the design is done, you can 'coast' for a while by just doing feature shrinks.
(shrug) And how many bugs are you still finding in Crafty, after all these years? No difference between pure software or a software/hardware combo.

bob wrote: Deep Thought, playing its first games at the 1986 ACM event (they didn't make the WCCC and had enough bugs that they didn't do that well in 1986 at all), took over in 1987 and lasted until they stopped development.

No disagreement there.

bob wrote: The main issue is that it is far easier to try new ideas in software. Once you "cast it in silicon" it is fixed and is a major expense to change...
But you can make one heck of a tactical searcher with limited knowledge, and you can keep it updated for at least a few generations until your next generation is finished.
And the basic AB search method will likely be microcoded, which can be changed, so you can still experiment with search ideas. You could do the same kind of search experiments you've been doing over in the programmer's section. The only difference is that instead of C, you'd use some other language that would convert to microcode, and you'd upload that.
As to making the eval more flexible... (shrug) Well, I'm not a hardware developer, but I'd think you could make at least a small part of it programmable, so you could do an eval with a little bit of smarts that could be reprogrammed as you developed new ideas and retuned the eval params.
Not quite as flexible as a pure software approach, but probably workable.
bob wrote: That has never impressed me. It seems much more reasonable to apply all available power to searching the complete tree. Sun Phoenix also took this approach, with a "minix" tactical searcher that was much faster than the normal engine. And arbitrating between them is problematic.

Carey wrote: You could even go speculative, like Greenblatt did. You have your regular chess program with smarts doing normal searching and you have the hardware searching deeper for tactics.
I was simply suggesting a way to keep value in last year's hardware, for example.
No need to throw it away if it's still fast enough to provide some value as a secondary searcher.
It might not be as fast or smart as this years, but it's not garbage either.
Re: MacHack VI and Tech
Right, it's still around.

sje wrote: I was once able to locate the object file for MacHack VI in an online archive of DECUS (the DEC User Group). Since it was written in Midas (an MIT-developed assembler for the PDP-6), a simple reverse assembly will give you the uncommented source. All the program's strings are readable in the object text.
Doing a reasonable disassembly would be no trivial task though.
You could get a raw disassembly, but that wouldn't show the algorithms.
To do it right would take a massive effort.
He doesn't have it anymore. I checked with him for my classic chess program website.

sje wrote: Many years ago I corresponded with Gillogly, the author of the CMU Technology Chess Program (i.e., "TECH"). He mentioned that he still had the source, but it was on an old, old tape readable only by machinery that was no longer available outside a museum.
He thought he did, but he hunted around and couldn't find any tapes with it.
If enough info could be gathered, you might be able to, but I'm not sure it'd be easy.

sje wrote: Greenblatt's MacHack VI report gives almost enough information to re-create the program except for missing data on the exact details of the plausible move scoring (functions and weights).

It might be easier if Mr. Greenblatt were willing to help by looking for the old source, or at least going into more detail about what wasn't described.
I was never able to get hold of him, but you can try.
But from what I gather, he's not interested. He gave an interview to the ComputerHistory.org people, but that was as far as he was willing to go. He wasn't willing to look through his stuff to see if he even had a copy of the source anymore.
Mr. Gillogly said that his classic paper gives basically all the details needed, and that the changes he made afterwards were more refinement and implementation details.

sje wrote: Tech would be easy to re-create as it wasn't much more than a move generator with position update/downdate routines; the only thing unknown is the exact move ordering used.
You could probably ask him about the move ordering, if you really wanted to recreate it as close as possible. You might be able to infer that from some of the game records, too.
Ken said he no longer has the schematics or even the programs etc. for it.

sje wrote: Also of interest is the possibility of emulating Belle on current hardware, something I've thought of doing myself.
The tapes that held the software stuff degraded beyond readability.
Lost in the past
Lost in the past
The sources for the Bernstein Chess Program, the Kotov Chess Program, MacHack VI, TECH, etc., are all lost in the mists of the past.
But somehow I'd say that fifty years from now it will still be possible to download a copy of Crafty. Maybe locating the program will be an assignment in Software Archeology 101.
Says something about open source, doesn't it?
Fifty years from now most of us will be dead or better off dead. Like Ozymandias, have you thought about what works of yours will survive?