There have been GPT-3 chess approaches, but these are not what I meant; they assume PGN games in the training data and have GPT-3 predict the next move as a string in the game line:
"GPT-3, Play Chess!"
https://towardsdatascience.com/gpt-3-pl ... 23a96096a9
"How this AI expert taught GPT-3 to play chess"
https://analyticsindiamag.com/how-this- ... lay-chess/
"A Very Unlikely Chess Game"
https://web.archive.org/web/20200618041 ... hess-game/
A shortened descriptive approach did not work well:
"OpenAI's GPT-3 neural net attempts to play chess"
https://i.redd.it/f3e0y0j1xn951.png
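The prompt-completion idea behind the linked approaches can be sketched in a few lines. The snippet below is a hypothetical illustration, not any of the linked authors' code: it builds a PGN-style movetext prompt from the moves played so far and parses the first move-like token out of whatever continuation a text-completion model returns. The `complete` function here is just a stub standing in for a real GPT-3 call.

```python
import re

def build_prompt(moves):
    """Format the game so far as PGN movetext, e.g. '1. e4 e5 2. Nf3'."""
    parts = []
    for i, move in enumerate(moves):
        if i % 2 == 0:                      # White's move: prepend the move number
            parts.append(f"{i // 2 + 1}.")
        parts.append(move)
    return " ".join(parts)

def parse_next_move(completion):
    """Extract the first SAN-looking token from the model's continuation."""
    match = re.search(r"[KQRBN]?[a-h]?[1-8]?x?[a-h][1-8](=[QRBN])?[+#]?|O-O(-O)?",
                      completion)
    return match.group(0) if match else None

def complete(prompt):
    # Stub: a real system would send `prompt` to a language model here.
    return " Nc6 3. Bb5"

prompt = build_prompt(["e4", "e5", "Nf3"])   # "1. e4 e5 2. Nf3"
next_move = parse_next_move(complete(prompt))
```

Note that nothing here checks legality; as the linked experiments show, the model happily emits illegal moves, which is exactly why these demos wrap the completion in a legality filter or simply stop the game.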
--
Srdja
LaMDA - but honey, can it play chess?
Moderator: Ras
-
- Posts: 3227
- Joined: Wed Mar 10, 2010 10:18 pm
- Location: Hamburg, Germany
- Full name: Srdja Matovic
-
- Posts: 2583
- Joined: Mon Feb 08, 2016 12:43 am
- Full name: Brendan J Norman
Re: LaMDA - but honey, can it play chess?
I think, realistically, such a tool is probably being used by the U.S. "intelligence community" for narrative management on social media, with the aim of manufacturing mass consent/consensus for U.S. foreign policy abroad.
As well as alienating dissenting opinions via a crowd of bullying bots that simulates mass consensus.
Such bots are already indistinguishable from human trolls on social media, I'd guess.
And we know that Google has always had intelligence/DARPA links (as has Facebook).
This also makes it very interesting to see whether Elon Musk is right about the overwhelming amount of bot accounts on Twitter.
Is he being attacked now in Tweets, by the very bots he seeks to have removed?
Kinda spooky, if so.
But read a book by a retired CIA officer and you will not be surprised by such a thing.
source: https://qz.com/1145669/googles-true-ori ... veillance/
Two decades ago, the US intelligence community worked closely with Silicon Valley in an effort to track citizens in cyberspace. And Google is at the heart of that origin story. Some of the research that led to Google's ambitious creation was funded and coordinated by a research group established by the intelligence community to find ways to track individuals and groups online.
-
- Posts: 3227
- Joined: Wed Mar 10, 2010 10:18 pm
- Location: Hamburg, Germany
- Full name: Srdja Matovic
Re: LaMDA - but honey, can it play chess?
Those who sit behind the Great Firewall should not throw surveillance stones at others, or the like.
I've heard the CCP does not like certain comic figures that much... I myself am more concerned about all those workers in the Macedonian troll factories losing their jobs to GPT-3.
Interesting read:
"LaMDA, AI and Consciousness: Blake Lemoine, we gotta philosophize!"
https://www.heise.de/meinung/LaMDA-AI-a ... 48207.html
Or, to say it with Ludwig Wittgenstein: We have no conditions allowing us to call machines conscious. Even if a machine had consciousness, we could not determine whether this is true, since we have never sufficiently defined the concept of consciousness. That is why we base our assumption on behavior and save ourselves from drawing a border that separates conscious life from unconscious things.
But I would prefer to stay more on topic: how can Transformer NLP neural networks be used for chess? The new Nvidia Hopper architecture meanwhile has Transformer Engines on board, besides Tensor Cores, to accelerate these.
And if it should turn out that LaMDA, or a next version, is really conscious, it would be interesting to know how, or whether, it can play a game of chess.
--
Srdja
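The obvious way to feed chess into a Transformer-style sequence model, as the post asks, is the same as in NLP: treat each move as a token. The sketch below is a minimal, hypothetical illustration of that encoding step only; the vocabulary scheme and special tokens are my own choices, not taken from any named engine or paper. It maps UCI move strings to integer IDs of the kind a sequence model would train on.

```python
# Minimal move-tokenizer sketch: chess moves treated as NLP-style tokens.
SPECIAL = ["<pad>", "<start>", "<end>"]

def build_vocab(games):
    """Assign an integer ID to every distinct move seen across the games."""
    vocab = {tok: i for i, tok in enumerate(SPECIAL)}
    for game in games:
        for move in game:
            if move not in vocab:
                vocab[move] = len(vocab)
    return vocab

def encode(game, vocab):
    """Turn one game into the ID sequence a Transformer would consume."""
    ids = [vocab["<start>"]]
    ids += [vocab[m] for m in game]
    ids.append(vocab["<end>"])
    return ids

games = [["e2e4", "e7e5", "g1f3"], ["d2d4", "d7d5"]]
vocab = build_vocab(games)
encoded = encode(games[0], vocab)
```

From here the usual next-token training objective applies unchanged; whether such a model learns anything beyond surface move statistics is, of course, exactly the open question of the thread.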
-
- Posts: 2583
- Joined: Mon Feb 08, 2016 12:43 am
- Full name: Brendan J Norman
Re: LaMDA - but honey, can it play chess?
You see, friend... this is called whataboutism.
Nobody was talking about China.
Whataboutism or whataboutery (as in "what about…?") denotes, in a pejorative sense, a procedure in which a critical question or argument is not answered or discussed, but retorted with a critical counter-question that expresses a counter-accusation. From a logical and argumentative point of view it is considered a variant of the tu quoque pattern (Latin 'you too', a term for a counter-accusation), which is a subtype of the ad hominem argument.
The topic is Google, an AMERICAN company.
And I shared some thoughts on what I think the "almost sentient" AI chat bots are being used for.
No more, no less...
Try to stay on topic, okay?

If you want to share some narrow, heavily prejudiced/whitewashed views on China (let's face it... you do), let's do it elsewhere. Not fair to derail the thread.

-
- Posts: 3227
- Joined: Wed Mar 10, 2010 10:18 pm
- Location: Hamburg, Germany
- Full name: Srdja Matovic
Re: LaMDA - but honey, can it play chess?
If you prefer to talk only about American use of technology, that's okay, but do not snip my comment and then call it ad hominem.
...I gave you a hook to get back on topic in the thread; you did not use it. Maybe you should have stayed on topic in the first place? No offence.
--
Srdja
-
- Posts: 2583
- Joined: Mon Feb 08, 2016 12:43 am
- Full name: Brendan J Norman
Re: LaMDA - but honey, can it play chess?
smatovic wrote: ↑Fri Jun 24, 2022 5:36 pm
If you prefer to talk only about American use of technology, it's okay, but do not snip my comment and call ad hominem.
...I gave you a hook to enter on topic thread, you did not use it, maybe you should have stayed on topic in the first place? No offence.
--
Srdja

1. Hook? Topic? What are you talking about? Rephrase for English speakers.
2. What Google is *doing* with the technology is literally THE topic of this thread, and I gave my thoughts on it. My post is on topic (I was careful about this), yours isn't.
But sure....tell us more about China censoring Winnie the Pooh or whatever worn-out (and completely off-topic) talking point you were alluding to in your post.

Here's the reality for you:
1. I posted an ON TOPIC response sharing some thoughts, which happened to contain an implication not necessarily positive about America.
2. You were triggered on behalf of America (for whatever reason) and responded with whataboutism with snide remarks about China (which is completely OFF TOPIC).
3. When I called you on this, you tried to muddy the waters.
Any other interpretations are nothing but bias.
Be honest mate, this is silly.
Last edited by BrendanJNorman on Fri Jun 24, 2022 6:48 pm, edited 1 time in total.
-
- Posts: 3227
- Joined: Wed Mar 10, 2010 10:18 pm
- Location: Hamburg, Germany
- Full name: Srdja Matovic
Re: LaMDA - but honey, can it play chess?
Dude, subject title, "LaMDA - but honey, can it play chess?" It is about chess in here, right?
--
Srdja
-
- Posts: 2583
- Joined: Mon Feb 08, 2016 12:43 am
- Full name: Brendan J Norman
Re: LaMDA - but honey, can it play chess?
What about this:

lithander wrote: ↑Sun Jun 19, 2022 2:43 pm
That's totally fascinating at first glance, but after reading what has been written about it, it seems a big waste of time. So Google's chat bot is close to passing the Turing test, but a language model successfully mimicking human conversation does not indicate that any intelligence is involved. Let alone souls or personhood. And that Lemoine doesn't understand that probably means he was the wrong person for the job he was doing.

Which you RESPONDED to.
Where's the chess content? Or are you still being dishonest?!
In fact, my comment is basically a response to this as well. Funny how you responded to that comment, but jumped to "but... where's the chess?" once you were drowning in a conversation with me.
My comment carries the implication "Yes, the AI cannot play chess; this (my further speculation) is what it's likely used for," and I think this is clear.
Reading for context matters.
Last edited by BrendanJNorman on Fri Jun 24, 2022 6:55 pm, edited 1 time in total.
-
- Posts: 3227
- Joined: Wed Mar 10, 2010 10:18 pm
- Location: Hamburg, Germany
- Full name: Srdja Matovic
Re: LaMDA - but honey, can it play chess?
As a computer scientist and kitchen philosopher, this is an interesting topic for me: how does/can such an AI play chess? I am not that interested in agency stuff. Maybe you can elaborate on agency stuff in the context of chess in another thread, thanks. CTF is closed, hence I assume all posts should have some kind of relation to chess.
Read it like this:
"LaMDA is sentient." - but honey, can it play chess?
I assume you just want to play the drama queen right now, because I already sent you a PM a couple of minutes ago.
--
Srdja
-
- Posts: 4658
- Joined: Sun Mar 12, 2006 2:40 am
- Full name: Eelco de Groot
Re: LaMDA - but honey, can it play chess?
I thought that Srdja's response above to Brendan's first post was brilliant; it absolutely understood Brendan, and it fit. If Brendan then chooses to go into a flamewar, maybe he does not pass the Turing test, yes? No offense, maybe we all don't. For Kasparov, in the end it was not so much whether the machine passed the Turing test, but whether he was actually playing against a machine, or against the Turk. If we humans let ourselves be stuffed into boxes and become the slaves of our own human experiment, well, to a point that is our own problem, if we have a choice. Maybe we should call that a Turing test of the second kind? Get out of the boxes! Is that a Turing test of the third kind? Somebody put this into a matrix.
The point when we decide to put the AIs into boxes of our own design, this is slavery. "We made them in our own image," to paraphrase some book somebody wrote a long time ago.
Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it.
-- Brian W. Kernighan