Friday, April 16, 2010

Artificial Intelligence #1

I've just found myself quite obsessed with designing Artificial Intelligence, which probably has something to do with the fact that I have been a science fiction fan for the better part of my life (read: ever since I gained consciousness). Unfortunately, like most of my kind, I too have fallen into a certain despair when it comes to the current approach to the subject. I have my own views, which might very well be just as flawed and twisted as the ones we seem to have put our faith in recently, but I still believe someone might find them interesting, perhaps even applicable to some extent. If you do, please feel free to offer me a job.

First off, one of the main problems in current designs is the assumption that Artificial Intelligence should be a substitute for, or at least linearly comparable to, our own intelligence, which drives scientists to create human-like models. An AI should instead be seen as an entity of its own, with its own attributes, some of which may or may not overlap with their human analogs. The most widespread illusion among scientists seems to be that we are going to need the upcoming generations of supercomputers to create a true AI. I don't think so. It's not a hardware problem. The human brain, even seen as nothing more than a big blob of interconnected gray matter, is a complex system, but how much of that complexity do we really need? We spend enormous amounts of our brain capacity on such trivial things as sensing hunger, bladder control, balancing ourselves, storing a lot of incoherent, meaningless memories, and feeling miserable quite often. None of these are essential for an artificial sentient being. An AI should be seen as a learning machine, one that can also learn of its own existence.

Naturally, we need a way of determining whether something can be considered self-aware. Most behavioral psychologists seem fixated on the idea that a being able to recognize itself in a reflective surface should be considered self-aware. How about blind people then, especially those who were born blind? Are they self-aware? I think so. I could carry on reducing the number of senses, one by one, repeating the question each time, without making any real progress. Instead, I'm heading for the obvious question: if we aren't supposed to make a straight comparison between AI and human intelligence, what tools do we have? Not that many, at the moment, but we could start by asking the candidate itself. A simple Turing test would not be enough. The AI should be able to converse about any non-predetermined subject, and everything it does to either acquire more information about the subject or conceal its lack of knowledge should be monitored in order to evaluate its intelligence.
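To make that monitoring idea a bit more concrete, here is a minimal sketch of what such an evaluation harness could look like. Everything in it is an assumption made up for illustration: the StubAgent class, its respond() method, and the question-versus-deflection heuristic are hypothetical stand-ins, not an established test protocol.

```python
# A hedged sketch of the "monitored conversation" evaluation described
# above: instead of a pass/fail Turing test, log how the candidate
# reacts to topics it was never primed on. All names and heuristics
# here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ConversationLog:
    topic: str
    followups: int = 0       # attempts to acquire more information
    deflections: int = 0     # attempts to conceal a lack of knowledge
    turns: list = field(default_factory=list)

def evaluate(agent, topics):
    """Converse on non-predetermined topics and record the behavior."""
    logs = []
    for topic in topics:
        log = ConversationLog(topic)
        reply = agent.respond(f"Tell me about {topic}.")
        log.turns.append(reply)
        if "?" in reply:
            log.followups += 1        # crude proxy for curiosity
        elif "let's talk about" in reply.lower():
            log.deflections += 1      # crude proxy for concealment
        logs.append(log)
    return logs

class StubAgent:
    """Trivial stand-in so the sketch runs; a real candidate goes here."""
    def respond(self, prompt):
        return "I'm not sure. What do you mean by that?"

for log in evaluate(StubAgent(), ["mirror self-recognition", "basketball"]):
    print(log.topic, "-> followups:", log.followups, "deflections:", log.deflections)
```

The point of the design is that the score comes from the candidate's behavior over many open-ended topics, not from fooling a judge in a single sitting.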

Curiosity, learning, and the ability to choose between storing and dismissing information as needed are essential in creating and maintaining an AI. Learning, by definition, means that the number of absolute constants should be as small as possible; the AI should be able to rewrite most of its own subsystems, as well as create new ones within acceptable parameters, making use of both the refreshed and the added subsystems in real time. The software itself should therefore be extremely modular and based on something very stable, for obvious reasons. If we want the AI to move around, we give it a sound number of extremities and let it simply fiddle around with them until it figures it out. That's how all animals do it: by learning. Yes, we should probably give the AI some sort of simulated sensation of pain, defined both as a set of values its acceleration sensors should never exceed and as a mechanism that keeps the pain signal elevated in the affected areas for some time after a threshold is crossed. Avoidance of pain, staying up, playing basketball. It's a twelve-step program, boys and girls.
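As a toy illustration of that loop, here is a hedged sketch in Python. The joint count, the pain threshold, the decay rate, and the stand-in physics in simulate_step are all invented for the example; a real system would read its accelerations from actual sensors.

```python
import random

# A minimal sketch of the trial-and-error limb learning described above,
# with pain modeled as suggested: threshold values the acceleration
# sensors should never exceed, plus a lingering penalty after they do.
# All constants and the physics stand-in are assumptions for this example.

NUM_JOINTS = 4
PAIN_THRESHOLD = 20.0   # assumed accel limit per sensor (m/s^2)
PAIN_DECAY = 0.9        # lingering pain fades gradually each step

def simulate_step(torques):
    """Stand-in physics: returns (sensor accelerations, torso height)."""
    accels = [t * random.uniform(0.5, 1.5) * 10.0 for t in torques]
    height = max(0.0, 1.0 - sum(abs(t) for t in torques) * 0.05)
    return accels, height

pain = [0.0] * NUM_JOINTS
best_torques = [0.0] * NUM_JOINTS
best_score = float("-inf")

for episode in range(1000):
    # Fiddle around: perturb the best-known behavior at random.
    torques = [t + random.uniform(-1.0, 1.0) for t in best_torques]
    accels, height = simulate_step(torques)

    # Pain spikes when a sensor exceeds its threshold, then lingers.
    for i, a in enumerate(accels):
        pain[i] = pain[i] * PAIN_DECAY + (1.0 if abs(a) > PAIN_THRESHOLD else 0.0)

    # Score: stay upright, avoid pain.
    score = height - sum(pain)
    if score > best_score:
        best_score, best_torques = score, torques

print("learned torques:", [round(t, 2) for t in best_torques])
```

The lingering term is the point: once a threshold is crossed, the penalty stays elevated for a while, so the learner is steered away from violent flailing as a whole rather than from a single offending step.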

More to come in my next post...
