Insight About Genetic (Evolutionary) Algorithms
Moderator: Ras
-
- Posts: 12255
- Joined: Thu Mar 09, 2006 12:57 am
- Location: Birmingham UK
- Full name: Graham Laight
Insight About Genetic (Evolutionary) Algorithms
Assuming you're not a creationist, you have to say that natural evolution has come up with a large number of absolutely incredible optimisations and adaptations - and it hasn't had long to do it. Even if you say "50 million years", and even if an animal's reproductive cycle is only one year, that's still only 50 million cycles - which is nothing compared with what a computer running a genetic algorithm could do in a short time.
Thinking about this, there's a glaringly obvious explanation for how this is possible: the mechanism of animal evolution must itself be evolving - as much an adaptation as any other adaptation (especially for animals in environments that tend to change a lot).
If this is correct, it gives us a potential new way to build a chess-playing algorithm using genetic algorithms: we need a genetic algorithm that itself evolves - a genetic algorithm to evolve the genetic algorithm!
Any thoughts?
Want to attract exceptional people? Be exceptional.
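As a purely illustrative sketch of the idea (hypothetical, not taken from any engine): a minimal self-adaptive evolutionary algorithm in the evolution-strategies style, where each genome carries its own mutation step size, which is itself inherited and mutated - so the mechanism of evolution evolves alongside the solutions.

```python
import random

def evolve(fitness, dim=8, pop_size=30, gens=60, seed=0):
    """Toy self-adaptive GA: each individual is (genes, sigma), where sigma
    is that individual's own mutation step size. Sigma is mutated and
    inherited along with the genes, so selection also tunes the mutation
    mechanism itself (the evolution-strategies idea)."""
    rng = random.Random(seed)
    pop = [([rng.uniform(-5, 5) for _ in range(dim)], 1.0) for _ in range(pop_size)]
    for _ in range(gens):
        ranked = sorted(pop, key=lambda ind: fitness(ind[0]))
        parents = ranked[:pop_size // 2]          # truncation selection (elitist)
        pop = list(parents)
        while len(pop) < pop_size:
            genes, sigma = rng.choice(parents)
            # self-adaptation: mutate the step size first, then use it on the genes
            new_sigma = max(1e-6, sigma * (2 ** rng.uniform(-1, 1)))
            child = [g + rng.gauss(0, new_sigma) for g in genes]
            pop.append((child, new_sigma))
    return min(pop, key=lambda ind: fitness(ind[0]))

# e.g. minimise the sphere function sum(x^2)
best, best_sigma = evolve(lambda xs: sum(x * x for x in xs))
```

Individuals that happen to carry a well-tuned step size produce better offspring, so good mutation rates spread through the population along with good genes - a very small-scale version of "the adaptation mechanism is itself an adaptation".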
-
- Posts: 89
- Joined: Sat Sep 13, 2014 4:12 pm
- Location: Zagreb, Croatia
- Full name: Branko Radovanović
Re: Insight About Genetic (Evolutionary) Algorithms
Indeed, the organism is evolving and the evolution mechanism itself (the "hyperparameters", in a sense) is also evolving. If we look at human cognition, the parameters, the hyperparameters, and the physical substrate are all evolving together, at the same time.
This brings me to an aspect of computer chess - related to the "zero" approach, in particular - that I feel has been overlooked. The AlphaZero concept was described as "tabula rasa reinforcement learning from games of self-play". But, if we look at the components of a modern implementation (LC0), we find this:
- Evaluation (value head) - neural network
- Move ordering (policy head) - neural network
- Search (PUCT) - hand-coded algorithm
- Time management - hand-coded algorithm

It is abundantly clear now that some major flaws of LLM AIs are due to them being all eval and no search. When you try to use ChatGPT for a multi-step task, it's sometimes like LC0 at depth 1: plays it surprisingly smart at first, then blows it completely because it doesn't see a simple mate in 2. Now techniques such as Tree of Thoughts[1] are emerging, and I wouldn't be surprised at all if LLM AIs began to use some kind of PUCT-like algorithm a couple of years down the road.
But: it's still going to be hand-coded search in both LC0 and in AI. Humans don't do it that way. Something is missing.
[1] https://arxiv.org/abs/2305.10601
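For reference, the hand-coded PUCT selection rule from the list above (the AlphaZero/LC0 family) can be sketched roughly like this - a minimal sketch assuming a simple dict-based tree node, not LC0's actual implementation:

```python
import math

def puct_select(node, c_puct=1.5):
    """Pick the child maximising Q + U, the PUCT rule:
    U = c_puct * P(s,a) * sqrt(N_parent) / (1 + N(s,a)),
    where P is the policy prior, N are visit counts, and Q = W / N."""
    total_n = sum(child["n"] for child in node["children"].values())
    best_move, best_score = None, -float("inf")
    for move, child in node["children"].items():
        q = child["w"] / child["n"] if child["n"] > 0 else 0.0
        u = c_puct * child["p"] * math.sqrt(total_n) / (1 + child["n"])
        if q + u > best_score:
            best_move, best_score = move, q + u
    return best_move

node = {"children": {
    "a": {"n": 10, "w": 6.0, "p": 0.5},   # well-explored, decent average value
    "b": {"n": 1,  "w": 0.9, "p": 0.5},   # barely explored, high value so far
}}
# the under-explored but promising child wins on the exploration bonus
best = puct_select(node)
```

The point of the sketch is the division of labour: the neural network supplies Q (value head) and P (policy head), but the formula combining them - and the loop repeatedly applying it down the tree - is entirely hand-written.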
-
- Posts: 12255
- Joined: Thu Mar 09, 2006 12:57 am
- Location: Birmingham UK
- Full name: Graham Laight
Re: Insight About Genetic (Evolutionary) Algorithms
Excellent post, Branko!
You can have a go at the referenced 24 game online here - link. I struggled a bit on some of the questions, but I think that if I played it a lot, I'd soon master it: in the end, there are only 4!*4^3=1536 permutations in any single game (it would be trivial to solve by brute force).
While it's churlish to criticise ChatGPT after it astonished us with its big leap forward in November, it is also clear that we still have a long way to go in the AI journey: when code can learn how to learn, and can adapt its adaptivity, we've probably got a period of very impressive achievements ahead!
Want to attract exceptional people? Be exceptional.
-
- Posts: 2086
- Joined: Wed Jul 13, 2011 9:04 pm
- Location: Madrid, Spain.
Re: Insight About Genetic (Evolutionary) Algorithms
Hello:
towforce wrote: ↑Tue Jun 13, 2023 8:50 pm
[...]
You can have a go at the referenced 24 game online here - link. I struggled a bit on some of the questions, but I think that if I played it a lot, I'd soon master it: in the end, there are only 4!*4^3=1536 permutations in any single game (it would be trivial to solve by brute force).
[...]
The website itself gives 1820 possible situations, 1362 of them solvable. Links:
Facts about 24 math game
Difficulty levels of the puzzles
4 Numbers game, all solvable for 24
Although I think you are talking about a different thing: once one of the 1820 situations is presented to us, there are 1536 ways to combine the four numbers with the +, -, ×, ÷ operations. It makes sense: 4 operations are available and you use them three times, so 4³ = 64; then, the 4 numbers can be ordered in at most 4! = 24 ways - with duplicate numbers the count of distinct permutations comes down, as explained in the first link I provided above ({a,a,a,a}, {a,a,a,b} and so on).
This is a simple game that returns us to the basics, though you can have a little longer thought on some situations.
Regards from Spain.
Ajedrecista.
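The counting above (4! orderings × 4³ operator choices, evaluated strictly left to right) translates directly into a brute-force solver. A sketch - note that this fixed bracketing covers exactly the 1536 cases being counted, and therefore misses solutions that need other bracketings, such as (a+b)×(c+d):

```python
from itertools import permutations, product

def solve24_left_to_right(nums, target=24):
    """Try all 4! orderings x 4^3 operator choices, evaluating strictly
    left to right: ((a op1 b) op2 c) op3 d. Returns a solution string,
    or None if no left-to-right solution exists."""
    ops = {'+': lambda x, y: x + y, '-': lambda x, y: x - y,
           '*': lambda x, y: x * y, '/': lambda x, y: x / y}
    for a, b, c, d in permutations(nums):
        for o1, o2, o3 in product(ops, repeat=3):
            try:
                v = ops[o3](ops[o2](ops[o1](a, b), c), d)
            except ZeroDivisionError:
                continue
            if abs(v - target) < 1e-9:
                return f"(({a} {o1} {b}) {o2} {c}) {o3} {d}"
    return None
```

With duplicate numbers many of the 24 orderings coincide, which is why the number of genuinely distinct cases drops below 1536, matching the remark about {a,a,a,a}, {a,a,a,b} and so on.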
-
- Posts: 3169
- Joined: Wed Mar 10, 2010 10:18 pm
- Location: Hamburg, Germany
- Full name: Srdja Matovic
Re: Insight About Genetic (Evolutionary) Algorithms
Very prescient, Mr. Radovanovic!
Branko Radovanovic wrote: ↑Tue Jun 13, 2023 10:24 am
[...]
It is abundantly clear now that some major flaws of LLM AIs are due to them being all eval and no search. When you try to use ChatGPT for a multi-step task, it's sometimes like LC0 at depth 1: plays it surprisingly smart at first, then blows it completely because it doesn't see a simple mate in 2. Now techniques such as Tree of Thoughts[1] are emerging, and I wouldn't be surprised at all if LLM AIs began to use some kind of PUCT-like algorithm a couple of years down the road.
[...]
Demis Hassabis says the company is working on a system called Gemini that will tap techniques that helped AlphaGo defeat a Go champion in 2016.
https://www.wired.com/story/google-deep ... s-chatgpt/
DeepMind’s Gemini, which is still in development, is a large language model that works with text and is similar in nature to GPT-4, which powers ChatGPT. But Hassabis says his team will combine that technology with techniques used in AlphaGo, aiming to give the system new capabilities such as planning or the ability to solve problems.
Srdja
-
- Posts: 12255
- Joined: Thu Mar 09, 2006 12:57 am
- Location: Birmingham UK
- Full name: Graham Laight
Re: Insight About Genetic (Evolutionary) Algorithms
smatovic wrote: ↑Mon Jun 26, 2023 4:25 pm
Demis Hassabis says the company is working on a system called Gemini that will tap techniques that helped AlphaGo defeat a Go champion in 2016.
https://www.wired.com/story/google-deep ... s-chatgpt/
DeepMind’s Gemini, which is still in development, is a large language model that works with text and is similar in nature to GPT-4, which powers ChatGPT. But Hassabis says his team will combine that technology with techniques used in AlphaGo, aiming to give the system new capabilities such as planning or the ability to solve problems.
Excellent article! I look forward to the finished product avidly!
As I've said previously, though, having seen new versions of (or similar programs to) AlphaGo get beaten by a middling unaided human because the NN didn't understand really basic concepts about the game (it just knows a very large number of simple patterns), I think it's waaaaay too soon to speculate that AI might need to be regulated. Having said that, LLMs like Bard and ChatGPT are already very useful - like 8-bit microprocessor chess computers were.
Want to attract exceptional people? Be exceptional.