Checkers Solved - Chess around year 2060-2070!

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Dann Corbit, Harvey Williamson

Terry McCracken

Re: Checkers Solved - Chess around year 2060-2070!

Post by Terry McCracken » Sun Jul 22, 2007 4:41 pm

bob wrote:
Terry McCracken wrote:
bob wrote:
Terry McCracken wrote:
M ANSARI wrote:I think that chess will be solved, but not by conventional thinking. It already seems that chess is a draw, because to this day not a single opening has been found that guarantees a win with best play. While an opening can score a win OTB, others can always find refutations and manage to equalize. So from that perspective it seems that chess is drawn; otherwise, with all the millions of games, one opening that always wins would have been found.

So now we look for a chess entity that will never lose. I cannot see this as being a big deal within 10 or 20 years. Improvements in software and, more importantly, exponential improvements in hardware would seem to make a chess entity that never loses very possible. While that may not satisfy many who want to see every single possible chess move calculated, it sure would convince me that chess is solved. If I could see a computer play, say, 100,000 games without a single loss against all comers, including Centaurs ... that would be quite convincing.
I agree...we will prove chess is a draw, and we don't need to explore all possibilities to determine that....
Sorry, but you can't _prove_ until you do search all pathways. That's the very definition of proof. "we think" does not mean "it is".
Funny Jonathan found a better way!
Jonathan didn't find anything. best-first has been around forever, and is the logical way to search if your goal is to reach an endgame database on each branch for a proof...
Whatever Robert...

Terry McCracken

Re: Checkers Solved - Chess around year 2060-2070!

Post by Terry McCracken » Sun Jul 22, 2007 4:44 pm

bob wrote:
Terry McCracken wrote:
George Tsavdaris wrote:
Terry McCracken wrote:
bob wrote:
Sorry, but you can't _prove_ until you do search all pathways. That's the very definition of proof. "we think" does not mean "it is".
Funny Jonathan found a better way!
What other way?
He pruned out the BS, concentrated on wins and draws etc. He reduced the problem by a huge number of useless positions, otherwise he would have never demonstrated with the technology at his disposal that checkers is a draw if played perfectly.

We are in our infancy as far as technology is concerned.
He didn't "prune" a thing. He uses a best first search that simply searches and stores the tree as it is built. Once a node hits the endgame databases, it is "closed" and never used again. This slowly reduces the number of "open" nodes until each and every one has reached the endgame databases where you are done.

This is not a game-playing strategy that works anywhere near as well as alpha/beta, unless you can search deeply enough to reach the endgame tables eventually. Which we can't and never will be able to do in chess.
Right....Sure Robert..
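Bob's description above (grow the tree, close a node once every line below it reaches the endgame databases, finish when the root closes) can be sketched as a toy solver. Everything here is invented for illustration: the tiny hand-built tree, the `db_value` stand-in for the endgame database, and the value names; it is not Chinook's actual code, and a plain stack stands in for the priority ordering a real best-first prover would use.

```python
# Toy version of the scheme Bob describes: keep a list of "open" nodes,
# close a node once its value is settled (a leaf hits the endgame database,
# or every child is already closed), and stop when the root itself closes.

ORDER = {"loss": 0, "draw": 1, "win": 2}     # values from White's point of view

# Hand-built game tree and a stand-in "endgame database" (both invented).
children = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1"]}
db_value = {"a1": "draw", "a2": "loss", "b1": "draw"}

def solve(root):
    value = {}                     # closed nodes -> proven value
    side = {root: "max"}           # White (max) to move at the root
    open_nodes = [root]            # the shrinking frontier of open nodes
    while root not in value:
        n = open_nodes.pop()
        if n in db_value:                           # leaf reached the database
            value[n] = db_value[n]
        elif all(c in value for c in children[n]):  # every line below is closed
            pick = max if side[n] == "max" else min
            value[n] = pick((value[c] for c in children[n]), key=ORDER.get)
        else:                                       # expand: add open children
            open_nodes.append(n)   # revisit n after its children close
            for c in children[n]:
                if c not in value and c not in open_nodes:
                    side[c] = "min" if side[n] == "max" else "max"
                    open_nodes.append(c)
    return value[root]

print(solve("root"))   # draw -- every line ends in the database
```

Once a node is closed its subtree is never revisited, which is the "slowly reduces the number of open nodes" behavior the post describes.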

Terry McCracken

Re: Checkers Solved - Chess around year 2060-2070!

Post by Terry McCracken » Sun Jul 22, 2007 4:55 pm

bob wrote:
Terry McCracken wrote:
bob wrote:
Terry McCracken wrote:
bob wrote:
Terry McCracken wrote:
bob wrote:
Terry McCracken wrote:
bob wrote:
Terry McCracken wrote:
bob wrote:
that 2060 stuff shows such an incredible lack of comprehension that it really doesn't deserve a comment at all. It is a ridiculous statement. Only down-side is that I doubt I will live long enough for the idiocy of that statement to be proven. I'm almost 60 now. I'd need to live past 120 to see that fallacy put to rest...

chess won't be solved by 2060. Or even 2160.
Robert..Never say Never! I think throwing out a number like that was irresponsibly stupid as well, but we don't know when or exactly how chess will be solved. However, I do believe it's possible with the right technology and methods.

Terry
Simply not possible with any conceivable approach. More chess positions than atoms in the universe, by a _large_ margin. Even using quantum states to store multiple bits per atom would not be possible as there are not enough states.

This is something that simply is not going to happen. Even a density of one billion times one billion times greater than today's chips won't even come close...
None of that matters! You're being myopic. First, you can prune 99%+ of all positions on the board. Quantum computers, when fully developed and practical, will be able to compute at speeds inconceivable compared to anything you've experienced.
Not being "myopic" at all. Do you have any idea what 1% of a tree that large is? Hint: It is _not_ a small number. 1% = .01, which is close to 1/2^7.

What do you get if you divide 2^160 by 2^7? 2^153.

The math is _daunting_.
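For scale, the pruning arithmetic in two lines (the 2^160 figure is taken from the post as a stand-in for the search-space size):

```python
import math

space = 2 ** 160      # rough size used in the post for the chess search space
kept = space // 100   # "prune 99%": keep only one position in a hundred

# Dropping 99% only shaves log2(100), about 6.6, off a 160-bit exponent.
print(round(math.log2(kept), 1))   # 153.4
```

The surviving 1% is still a number with roughly 46 decimal digits.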



They may even be able to actually connect to parallel universes and work in tandem, so yes that technology could indeed be used to solve chess.
I thought we were talking real-world scenarios? Not science-fiction...


Even Jonathan realizes this!

I'm tired of that can't be done crap...that's what truly is absurd! It holds back scientific and technological progress!
I'm equally tired of the "this will one day be doable" when it is so obvious it will not be done. We knew checkers would be solved 30 years ago, we just didn't know when. No serious researcher says that chess will be solved by 2060 or at any point in the future. Wonder why that is?


You're a computer scientist, but you've put up barriers to things that are so different from your understanding, and you make false comparisons to the past evolution of computers. Well, the next 50 years will move much faster than the last 50 years. That's a fact!
Based on what? I bought a 2.8 GHz processor 4 years ago. You can almost buy 4.0 GHz today. 4 years, not a factor of two. That's a fact...


I have seen the impossible be done and I'll see it again!
You have _never_ seen something "impossible" done. Nobody has, for obvious reasons.


Terry

I've seen you post... :roll:


Robert, you are not an expert in quantum computing, that is obvious. It will be done, it's not science fiction!
Please read more carefully. The "science fiction" applied to your "connecting parallel universes". That's not reality. In fact, there is nothing that suggests such things actually exist, other than in the minds of the great science fiction authors.

Quantum computing exists, in a useless form today. But even if it becomes a reality, you are still going to be dividing a huge number (the potential search space) by a small number (the quantum computing speedup). You still end up with a huge number (the time required to search that space, even if it is searched ridiculously fast).
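One quantitative anchor for the divide-a-huge-number point: even granting the only proven quantum search speedup (Grover's quadratic one, which strictly applies to unstructured search, not game solving), the totals stay astronomical. Both inputs below are assumptions for illustration:

```python
import math

positions = 1e46          # rough count of legal chess positions (assumed;
                          # a similar figure is quoted later in this thread)
ops_per_sec = 1e9         # a hypothetical machine doing 1e9 quantum ops/sec

grover_ops = math.sqrt(positions)     # quadratic speedup is Grover's limit
years = grover_ops / ops_per_sec / (3600 * 24 * 365)
print(f"{years:.1e} years")           # ~3.2e+06 years
```

Dividing 10^46 by its square root still leaves 10^23 operations, which is millions of years even at a gigahertz of quantum operations.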
I read it fine Robert, and parallel universes are not ideas of Sci/Fi writers.

Please do some homework on this. The idea dates back to the '30s, much more was postulated in the '50s, and it has been serious science from the '90s to today.

You can't see past the numbers, and that is your error.
Nobody can see past the numbers. You can choose to ignore them if you wish. But to say "chess can be solved with an as yet unknown and undeveloped technology" is _not_ much of an argument. To say that if current technological improvements continue it will be solved in 60 or 600 or 6000 years is simply silly.

So I am not buying into the idea that some new technology _might_ come along that will solve chess. That's 100% speculation. As a scientist, I have to rely on what we have today, and extrapolate what is actually _feasible_ as far as future developments go, and then base conclusions on that. Not science-fiction never-never-land fairy-tale stuff.
That's just a lot of bull! You are an impossible man who can't envision technologies way beyond what exists today. The ideas are out there but you're trapped by what you know and scientific dogma. You're afraid to step out of your realm of understanding, and scientists must do this or great ideas can't be realized.

I'm sure you think it's impossible to find a method to circumvent the speed of light, denying the possibility of intergalactic travel, etc.
Most likely that is absolutely true. Warp drive and wormholes notwithstanding, along with the occasional Klingon Bird of Prey.

You have no idea...

You would have said in 1956 that computers could never play chess at the master level.
Quite the contrary. I've said (as have many others) for at least 30+ years that computers becoming invincible in chess was an inevitable thing.

Whatever....

You have to see beyond what you know. Einstein did look beyond the numbers!
Now you are out of it. Einstein looked _at_ the numbers, being one of the greatest math/physics minds of all time.

You're Wrong! Einstein saw with his mind, the numbers were secondary....you're out of it...

That's what made him great!

One thing is certain, you won't find a way to approach the problem.
Nor will anyone else, so what?
You're Wrong! You mock what is beyond YOU!

bob
Posts: 20923
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

Re: Checkers Solved - Chess around year 2060-2070!

Post by bob » Sun Jul 22, 2007 6:45 pm

Terry McCracken wrote:
bob wrote:
Terry McCracken wrote:
George Tsavdaris wrote:
Terry McCracken wrote:
bob wrote:
Sorry, but you can't _prove_ until you do search all pathways. That's the very definition of proof. "we think" does not mean "it is".
Funny Jonathan found a better way!
What other way?
He pruned out the BS, concentrated on wins and draws etc. He reduced the problem by a huge number of useless positions, otherwise he would have never demonstrated with the technology at his disposal that checkers is a draw if played perfectly.

We are in our infancy as far as technology is concerned.
He didn't "prune" a thing. He uses a best first search that simply searches and stores the tree as it is built. Once a node hits the endgame databases, it is "closed" and never used again. This slowly reduces the number of "open" nodes until each and every one has reached the endgame databases where you are done.

This is not a game-playing strategy that works anywhere near as well as alpha/beta, unless you can search deeply enough to reach the endgame tables eventually. Which we can't and never will be able to do in chess.
Right....Sure Robert..
Fortunately, _one_ of us knows what he is talking about here. Do some reading and you will too. Best-first has been around forever.

bob
Posts: 20923
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

Re: Checkers Solved - Chess around year 2060-2070!

Post by bob » Sun Jul 22, 2007 6:47 pm

Terry McCracken wrote:
bob wrote:
Terry McCracken wrote:
bob wrote:
Terry McCracken wrote:
M ANSARI wrote:I think that chess will be solved, but not by conventional thinking. It already seems that chess is a draw, because to this day not a single opening has been found that guarantees a win with best play. While an opening can score a win OTB, others can always find refutations and manage to equalize. So from that perspective it seems that chess is drawn; otherwise, with all the millions of games, one opening that always wins would have been found.

So now we look for a chess entity that will never lose. I cannot see this as being a big deal within 10 or 20 years. Improvements in software and, more importantly, exponential improvements in hardware would seem to make a chess entity that never loses very possible. While that may not satisfy many who want to see every single possible chess move calculated, it sure would convince me that chess is solved. If I could see a computer play, say, 100,000 games without a single loss against all comers, including Centaurs ... that would be quite convincing.
I agree...we will prove chess is a draw, and we don't need to explore all possibilities to determine that....
Sorry, but you can't _prove_ until you do search all pathways. That's the very definition of proof. "we think" does not mean "it is".
Funny Jonathan found a better way!
Jonathan didn't find anything. best-first has been around forever, and is the logical way to search if your goal is to reach an endgame database on each branch for a proof...
Is that somehow supposed to further the conversation? Or are you just in so far over your head that you resort to babble?? Why don't you google "best-first search" and "depth-first search" to see how the latter is used in alpha/beta, and why the former is the right solution for what Jonathan was trying to do?




Whatever Robert...

bob
Posts: 20923
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

Re: Checkers Solved - Chess around year 2060-2070!

Post by bob » Sun Jul 22, 2007 6:49 pm

You simply wave your hands and say "but it will be solved, if we can develop a parallel-universe memory system, or some type of quantum computing that has not yet been conceived, or dream up some completely new and never even imagined way to attack the game."

The rest of us look at what we have and where it came from, and use that _real_ data to predict the future based solely on what appears to be reasonable, not on what someone can dream up [deleted] or whatever...

Uri Blass
Posts: 8972
Joined: Wed Mar 08, 2006 11:37 pm
Location: Tel-Aviv Israel

Re: Checkers Solved - Chess around year 2060-2070!

Post by Uri Blass » Sun Jul 22, 2007 6:56 pm

bob wrote:
Terry McCracken wrote:
George Tsavdaris wrote:
Terry McCracken wrote:
bob wrote:
Sorry, but you can't _prove_ until you do search all pathways. That's the very definition of proof. "we think" does not mean "it is".
Funny Jonathan found a better way!
What other way?
He pruned out the BS, concentrated on wins and draws etc. He reduced the problem by a huge number of useless positions, otherwise he would have never demonstrated with the technology at his disposal that checkers is a draw if played perfectly.

We are in our infancy as far as technology is concerned.
He didn't "prune" a thing. He uses a best first search that simply searches and stores the tree as it is built. Once a node hits the endgame databases, it is "closed" and never used again. This slowly reduces the number of "open" nodes until each and every one has reached the endgame databases where you are done.

This is not a game-playing strategy that works anywhere near as well as alpha/beta, unless you can search deeply enough to reach the endgame tables eventually. Which we can't and never will be able to do in chess.
The same technique can work in chess if you have enough memory.
We do not know how much memory is enough and it is possible that 10^30 bits are enough.

bob
Posts: 20923
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

Re: Checkers Solved - Chess around year 2060-2070!

Post by bob » Sun Jul 22, 2007 7:35 pm

Uri Blass wrote:
bob wrote:
Terry McCracken wrote:
George Tsavdaris wrote:
Terry McCracken wrote:
bob wrote:
Sorry, but you can't _prove_ until you do search all pathways. That's the very definition of proof. "we think" does not mean "it is".
Funny Jonathan found a better way!
What other way?
He pruned out the BS, concentrated on wins and draws etc. He reduced the problem by a huge number of useless positions, otherwise he would have never demonstrated with the technology at his disposal that checkers is a draw if played perfectly.

We are in our infancy as far as technology is concerned.
He didn't "prune" a thing. He uses a best first search that simply searches and stores the tree as it is built. Once a node hits the endgame databases, it is "closed" and never used again. This slowly reduces the number of "open" nodes until each and every one has reached the endgame databases where you are done.

This is not a game-playing strategy that works anywhere near as well as alpha/beta, unless you can search deeply enough to reach the endgame tables eventually. Which we can't and never will be able to do in chess.
The same technique can work in chess if you have enough memory.
We do not know how much memory is enough and it is possible that 10^30 bits are enough.
It doesn't work in chess, for the simple reason that it searches the square of the number of nodes that alpha-beta searches. That's the reason for depth-first: it avoids storing the tree, and alpha/beta reduces the search space to roughly sqrt(n).

10^30 bits is more than all the memory ever made added together. If you figure one billion computers with one gigabyte of memory each, that is 10^18 bytes, or about 10^19 bits. You would still need on the order of 10^11 times that many machines to get there. And I personally do not believe 10^30 will scratch the surface...
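Running the tally explicitly:

```python
computers = 10 ** 9          # one billion machines
bits_each = 8 * 10 ** 9      # one gigabyte apiece, in bits

have = computers * bits_each           # 8e18 bits in total
need = 10 ** 30                        # Uri's hoped-for memory budget
print(need // have)                    # 125000000000, i.e. ~1.25e11 short
```

So even a billion gigabyte machines fall short of 10^30 bits by a factor of over a hundred billion.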

Dirt
Posts: 2851
Joined: Wed Mar 08, 2006 9:01 pm
Location: Irvine, CA, USA

Re: Checkers Solved - Chess around year 2060-2070!

Post by Dirt » Sun Jul 22, 2007 10:03 pm

bob wrote:
Dirt wrote:
cms271828 wrote:I just solved chess, on the back of my hand, cool!!

But seriously, I read somewhere there were 10^125 possible games, and that a quantum computer could do it in a year, but it's nonsense.

I think there are 10^80 or a bit fewer atoms in the universe, so you need 10^45 universes of atoms to equal the number of possible games.

I'm not sure exactly how quantum computers and qubits work, but obviously it won't be good enough.

Even if you go to the superstring level, I don't think there are enough of them in an atom to get anywhere near, and they're not exactly easy to see :D

The only way I can think of is by taking shortcuts, kind of like in alpha-beta pruning, where you don't need to check everything.

But I can't really see how that would work, so I would say impossible.
As has been pointed out elsewhere, there are "only" about 10^45 different legal chess positions. This means it takes only a trivial amount of storage (in terms of the size of the universe) for a complete set of 32-piece tablebases.
It is going to take a table that has an index size of 2^160 or so, which is where that 10^50 comes from. I'm not sure how you are going to store that big a file. Something tells me that before you get even halfway there, the storage device will turn into a black hole, collapse in upon itself, and all the data stored will be lost as the gravitational well compresses things beyond comprehension.

Files that large actually _are_ impossible. So there's no hope of storing a 32-piece endgame table, nor of building it and the 31-, 30-, 29-, ..., 7-piece tables needed to work your way up there. That leaves raw computation as the only solution. And nothing is going to search at the speed needed to traverse the complete game tree. If you operate at 1 ns, the longest pathway you can have between two components is 12 inches, the distance light travels in one ns. If you want a picosecond machine, the distance between any two components just shrank to .012". Another factor of 1 million and now the entire thing has to fit in the space occupied by a single atom. And what then?

It's interesting to talk about such ideas, but it is ridiculous to seriously think this will happen.
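The propagation figures above check out against the speed of light:

```python
C = 299_792_458            # speed of light in vacuum, m/s
M_TO_IN = 39.3701          # inches per metre

ns_inches = C * 1e-9 * M_TO_IN    # distance light covers in one nanosecond
ps_inches = C * 1e-12 * M_TO_IN   # ... and in one picosecond
print(round(ns_inches, 1), round(ps_inches, 4))   # 11.8 0.0118
```

About a foot per nanosecond, and a hundredth of an inch per picosecond, as the post says.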


Even with the ability to store the data, actually calculating what to store would still have to be done, but given the life of the universe to finish, it should be possible. However, it's hard to believe so much effort would be devoted to such an ultimately pointless task.
I meant to say 10^46. There was a convincing expert who posted here a while back with that number.

But let's take 10^50. What will it take to store such a table? Consider using carbyne (...C=C=C=C...) rods for storage, with C12 for a zero and C13 for a one. This lets us store one bit in an average of 12.5 daltons. There needs to be some overhead for reading and managing the rods, so perhaps overall we might need 100 daltons per bit.

The mass needed to store 10^50 bits would then be 100*10^50/6.02x10^23 grams, or about 1.7x10^28 grams. This is less than three times the mass of the earth, well short of a black hole. Of course tablebases tend to be highly compressible, which should reduce this substantially.

I'm unsure how long it would take to actually calculate the tables. If the potential parallelization is poor and the locality of table access is weak the usable life of the Universe might become an issue, but I doubt it would come to that.
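Dirt's dalton arithmetic, run through explicitly (the Earth mass in grams is an assumed reference constant):

```python
DALTONS_PER_GRAM = 6.022e23    # Avogadro's number
EARTH_MASS_G = 5.97e27         # mass of the Earth in grams (assumed constant)

bits = 1e50
mass_g = bits * 100 / DALTONS_PER_GRAM   # 100 daltons per bit, per the post
print(f"{mass_g:.1e} g, {mass_g / EARTH_MASS_G:.1f} Earth masses")
```

This reproduces the figures in the post: roughly 1.7x10^28 grams, just under three Earth masses, nowhere near black-hole territory.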

bob
Posts: 20923
Joined: Mon Feb 27, 2006 6:30 pm
Location: Birmingham, AL

Re: Checkers Solved - Chess around year 2060-2070!

Post by bob » Mon Jul 23, 2007 5:34 am

Dirt wrote:
bob wrote:
Dirt wrote:
cms271828 wrote:I just solved chess, on the back of my hand, cool!!

But seriously, I read somewhere there were 10^125 possible games, and that a quantum computer could do it in a year, but it's nonsense.

I think there are 10^80 or a bit fewer atoms in the universe, so you need 10^45 universes of atoms to equal the number of possible games.

I'm not sure exactly how quantum computers and qubits work, but obviously it won't be good enough.

Even if you go to the superstring level, I don't think there are enough of them in an atom to get anywhere near, and they're not exactly easy to see :D

The only way I can think of is by taking shortcuts, kind of like in alpha-beta pruning, where you don't need to check everything.

But I can't really see how that would work, so I would say impossible.
As has been pointed out elsewhere, there are "only" about 10^45 different legal chess positions. This means it takes only a trivial amount of storage (in terms of the size of the universe) for a complete set of 32-piece tablebases.
It is going to take a table that has an index size of 2^160 or so, which is where that 10^50 comes from. I'm not sure how you are going to store that big a file. Something tells me that before you get even halfway there, the storage device will turn into a black hole, collapse in upon itself, and all the data stored will be lost as the gravitational well compresses things beyond comprehension.

Files that large actually _are_ impossible. So there's no hope of storing a 32-piece endgame table, nor of building it and the 31-, 30-, 29-, ..., 7-piece tables needed to work your way up there. That leaves raw computation as the only solution. And nothing is going to search at the speed needed to traverse the complete game tree. If you operate at 1 ns, the longest pathway you can have between two components is 12 inches, the distance light travels in one ns. If you want a picosecond machine, the distance between any two components just shrank to .012". Another factor of 1 million and now the entire thing has to fit in the space occupied by a single atom. And what then?

It's interesting to talk about such ideas, but it is ridiculous to seriously think this will happen.


Even with the ability to store the data, actually calculating what to store would still have to be done, but given the life of the universe to finish, it should be possible. However, it's hard to believe so much effort would be devoted to such an ultimately pointless task.
I meant to say 10^46. There was a convincing expert who posted here a while back with that number.

But let's take 10^50. What will it take to store such a table? Consider using carbyne (...C=C=C=C...) rods for storage, with C12 for a zero and C13 for a one. This lets us store one bit in an average of 12.5 daltons. There needs to be some overhead for reading and managing the rods, so perhaps overall we might need 100 daltons per bit.

The mass needed to store 10^50 bits would then be 100*10^50/6.02x10^23 grams, or about 1.7x10^28 grams. This is less than three times the mass of the earth, well short of a black hole. Of course tablebases tend to be highly compressible, which should reduce this substantially.

I'm unsure how long it would take to actually calculate the tables. If the potential parallelization is poor and the locality of table access is weak the usable life of the Universe might become an issue, but I doubt it would come to that.
I don't follow your math. 10^50 bits requires at least 10^50 atoms, although in your case this appears to average 12.5 atoms per bit if I understand your measure correctly. You are way beyond the number of atoms in planet earth at that number.

The issue would become one of size. There is no way to run at sub-picosecond clock speeds with a storage device far larger than planet earth. Just using 8K miles is daunting: 8K miles = 8,000 * 5280, which works out to about 42 million feet, or 42 million nanoseconds to propagate any sort of energy. That is 42 thousand microseconds, or 42 milliseconds. Not very fast. That's one of the limiting issues when we start down the slippery slope of such a large storage device... If it is big enough, it will be too slow. If it is fast enough, it will be too small.

Based on sensible math, this simply won't ever be feasible.
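For scale, the one-way light delay across an 8,000-mile store, computed directly (real signals in wires are slower than light in vacuum, so this is a best case):

```python
MILES = 8000                      # roughly the diameter of the Earth
FEET = MILES * 5280               # 42,240,000 feet
NS_PER_FOOT = 1 / 0.9836          # light covers ~0.98 ft per nanosecond

delay_ms = FEET * NS_PER_FOOT / 1e6
print(FEET, round(delay_ms, 1))   # 42240000 42.9
```

Tens of milliseconds per access across an Earth-sized device, versus the sub-nanosecond cycle times the search would need.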
