bhlangonijr wrote:
> rbarreira wrote:
> > It is a successful AI agent in the sense that it plays chess really well (defining "playing chess well" as playing a full game with normal chess rules, which would hardly end up in a 9-knights, 3-queens position with good play from the AI agent). But then again, for many people AI is always defined as "that which computers can't do well yet", in which case no computer program can ever be successful at AI (moving goalposts and all that).
> I completely disagree, and I think "that which computers can't do well yet" is a very lousy definition of AI. I'll quote something which IMO captures better what AI should be: "...where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success." - John McCarthy. There must be some sort of "consciousness" within the AI agent so that it can dynamically change its behaviour to better adapt to different situations. That is exactly what the current approaches to implementing a top chess engine are NOT considering. As Miguel nicely put it, it is overfitted. Computers do a lot of things very well, and it doesn't mean they have intelligent behaviour because of that.

I was of course being sarcastic with that definition.
However, the definition you cited there is vague and useless as well. Who defines success, and who can prove or disprove that the actions taken by an AI "maximize its chances of success", except in very narrow domains which are probably going to be dissed as "not requiring intelligence" anyway?
I still say people's idea of AI is based on moving goalposts all the time. A few decades ago many people believed that beating a chess GM required true intelligence, and that the "symbolic manipulation" done by chess-playing programs could never result in intelligent behavior such as playing a full game of chess well. Nowadays that is of course disproven, so instead we have people saying that the chess program must adapt to ANY chess position and not just be able to play a game well. Do you see how the goalposts keep being moved to keep AI always on the horizon?
Regarding the notion of "being overfitted", put a human in a 10-dimensional environment and see if he can maximize any chances of success... Does this mean humans are overfitted and not intelligent?
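Since "overfitted" is doing real work in this argument, here is a minimal, self-contained sketch of what the term actually means in machine learning (my illustration, not from anyone's post, using the classic Runge interpolation example): a model that matches its training samples exactly can still be badly wrong everywhere in between them.

```python
def runge(x):
    """The 'true' environment: Runge's function 1 / (1 + 25 x^2)."""
    return 1.0 / (1.0 + 25.0 * x * x)

def interpolate(xs, ys, x):
    """Evaluate the unique degree-(n-1) polynomial through the points
    (xs, ys) at x, using the Lagrange form."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# "Train" on 11 equally spaced samples of the environment.
xs = [-1.0 + 0.2 * i for i in range(11)]
ys = [runge(x) for x in xs]

# On the training points the fitted polynomial is essentially perfect...
train_err = max(abs(interpolate(xs, ys, x) - y) for x, y in zip(xs, ys))

# ...but between them it oscillates wildly (the Runge phenomenon),
# so its error on unseen points of the very same environment is large.
grid = [-1.0 + 0.01 * i for i in range(201)]
test_err = max(abs(interpolate(xs, ys, x) - runge(x)) for x in grid)

print(f"train error: {train_err:.2e}, off-sample error: {test_err:.2f}")
```

That is the sense in which a system can be "overfitted" to the situations it was tuned on: zero error on what it has seen, large error on what it has not. Whether the charge fairly applies to chess engines (or to humans dropped into a 10-dimensional environment) is exactly what is being debated above.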
Just one more thing: I prefer to see intelligence as a continuum rather than a black-or-white thing. Computers are more intelligent today than 20 years ago. They can do more things now than before, many of which were earlier thought to require the more general intelligence and consciousness that humans have. They will be more intelligent tomorrow, and after some time they will be more intelligent than humans (unless of course we merge AI with our brains).