towforce wrote: ↑Tue Oct 01, 2024 7:53 pm
[...]
Regarding your point (5): a lot of what LLMs do is stochastic parroting, but it's also easy to show that they build, via language, sophisticated models of the world. Were this not so, they wouldn't be able to intelligently answer a wide range of prompts that aren't in their training material, and they wouldn't be able to play chess.
[...]
Well, this is something to ponder. The stochastic-parrot fraction is consistent with the Chinese room argument, but when there is an object as a mental model, is there a subject too? And is there a subject<->object relation? Robo-philosophy.
A chatbot which contains models of the world is still a Chinese Room: spreadsheets often contain models (e.g. a model of a company's finances), but the spreadsheet itself knows nothing: it's just doing calculations (Gemini Advanced can actually discuss your Google Sheets spreadsheets with you, of course, but that's a chatbot, and hence 100% Chinese room IMO).
Chess engines are 100% Chinese room, even though we often (thoughtlessly!) attribute intelligence and understanding to them.
Given that, as far as we know, a biological brain is just a bunch of nerve cells, where is consciousness coming from, then? That's a "hard problem"!
towforce wrote: ↑Tue Oct 01, 2024 11:49 pm
A chatbot which contains models of the world is still a Chinese Room: spreadsheets often contain models (e.g. a model of a company's finances), but the spreadsheet itself knows nothing: it's just doing calculations (Gemini Advanced can actually discuss your Google Sheets spreadsheets with you, of course, but that's a chatbot, and hence 100% Chinese room IMO).
Chess engines are 100% Chinese room, even though we often (thoughtlessly!) attribute intelligence and understanding to them.
Given that, as far as we know, a biological brain is just a bunch of nerve cells, where is consciousness coming from, then? That's a "hard problem"!
Now we are off topic, so let me just say that you are contradicting yourself. You claim the machines are 100% Chinese room, then ask the "hard problem" question: where does consciousness in our biological brains come from?
--
Srdja
I agree: we'd all accept that mechanical calculators have no idea what they're doing, and for me the same applies to chatbots. But if we start discussing the "hard problem" (the title of the Wiki article I previously linked), it's a 100% certainty that we'll go off topic. Personally, I don't really have anything useful to say about the "hard problem" anyway (apart from the opinion that chatbots don't have consciousness).
The simple reveals itself after the complex has been exhausted.
If anyone wants to discuss this, the moderators would probably allow a thread discussing when chess engines (or even chatbots) will become conscious: don't be inhibited about starting such a thread!
Nvidia has launched an LLM which is said to leapfrog the competition straight to the top. However, the video below, while making a good introduction, doesn't really show anything that enables comparison (except that, unlike most LLMs, it can correctly tell you how many Rs there are in the word strawberry - a test that LLMs fail because they receive your prompt in tokenised format, and don't have access to the original text). There's a growing need for a good LLM comparison chart!
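To make the tokenisation point concrete, here's a rough sketch in Python (assuming the tiktoken package is installed; cl100k_base is just an example tokeniser, not necessarily the one any given LLM uses): the model only ever sees integer IDs for sub-word chunks, not the letters themselves, so it has nothing to count the Rs in.

```python
# Rough sketch: why letter-counting is hard for a model that only sees tokens.
# Assumes the tiktoken package; cl100k_base is just an example tokeniser.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")

# The model receives these integer token IDs, not individual characters.
print(tokens)

# Decoding each ID shows the sub-word chunks (something like ['str', 'aw', 'berry']).
print([enc.decode([t]) for t in tokens])

# With access to the raw text, the count is trivial:
print("strawberry".count("r"))  # 3
```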
towforce wrote: ↑Fri Oct 18, 2024 2:56 pm
Nvidia has launched an LLM which is said to leapfrog the competition straight to the top. However, the video below, while making a good introduction, doesn't really show anything that enables comparison (except that, unlike most LLMs, it can correctly tell you how many Rs there are in the word strawberry - a test that LLMs fail because they receive your prompt in tokenised format, and don't have access to the original text). There's a growing need for a good LLM comparison chart!
Thanks for this, I'll take a good look later. Can you clarify whether the LLM runs locally, like their previous models, or on their huge servers, which is probably the case here?
Werewolf wrote: ↑Fri Oct 18, 2024 3:50 pm
Thanks for this, I'll take a good look later. Can you clarify whether the LLM runs locally, like their previous models, or on their huge servers, which is probably the case here?
The 70B version is available on HuggingChat - link.
Full list of models available on HuggingChat - link.