A basic question: what are the "requirements" for chatbots?
Originally, they were supposed to mimic human conversation. The famous "Turing Test" was originally described by Turing as "the imitation game", and the aim was to be as human-like as possible, not to be as intelligent as possible.
However, this is no longer the case: if you go for a drink with a friend, you don't usually ask them to write a block of code for you, draft a legal document, or explain how to do your school homework in detail.
Thinking about this, the "use case" is clear: there are (or soon will be) a billion people using chatbots, and their job is to answer each of those different people's prompts as well as they can. Astonishingly, they're already "reasonably good" at this, and they're improving rapidly.
Earlier in this thread, I was a bit harsh and said that a chatbot shouldn't be expected to play chess. However, if that's what the user wants, then that's what the chatbot should do: make a chess board appear on the screen and start playing.
If you want it to play at GM level, it should do that as well. It needn't have that skill itself; it should, however, know where there's an API available from which it can get strong moves. Likewise, if the user wants a numerical solution to a maths problem that requires a huge amount of calculation, it should be able to call on that resource too.
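The delegation idea above can be sketched in a few lines. This is a toy illustration, not any real chatbot's internals: the tool names, the registry, and the placeholder functions are all invented for the example, and a real system would call actual external services (a chess engine, a numerics package) where the stubs sit.

```python
# Toy sketch of "tool delegation": route a request to an external
# resource when one exists, otherwise answer directly.
# All names here are hypothetical, invented for illustration.

def chess_engine_move(position: str) -> str:
    """Stub for a call to an external chess-engine API."""
    # A real implementation would send `position` to an engine
    # and return its best move; here we return a fixed placeholder.
    return "e2e4"

def numeric_solver(problem: str) -> str:
    """Stub for a heavy-calculation service (e.g. a CAS)."""
    return "42"  # placeholder result

# Registry mapping a detected intent to the tool that handles it.
TOOLS = {
    "chess": chess_engine_move,
    "calculate": numeric_solver,
}

def handle_request(intent: str, payload: str) -> str:
    """Dispatch: use a tool if one matches, else fall back to the model."""
    tool = TOOLS.get(intent)
    if tool is None:
        return "answer directly"  # the chatbot handles it itself
    return tool(payload)
```

The point is only the shape of the design: the chatbot's job is recognising *which* resource a request needs, not possessing every skill itself.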
To some extent, this is already happening: ChatGPT can call on other systems to draw pictures, for example.
So the long-term future of chatbots is: ask for absolutely any kind of response, and get a good-quality response that answers that request.
Those of us using them know that they're advancing VERY quickly right now. The prediction that their progress will follow an "S curve", and get close to "as good as they're ever going to be" in about 20-30 years, seems reasonable.
The simple reveals itself after the complex has been exhausted.