I ask that question because it represents a fascinating dichotomy that has played out within the world of AI research. And it also serves to shine some light on a guy I can't help but have a great deal of respect for: Doug Lenat.
You probably don't remember the name, or were never aware of it, but I sure do. I remember when his work first started out and was showing some promise (decades ago). He wanted nothing less than to teach a machine the common sense that it has taken humanity thousands of years to accumulate. It was, obviously, a mammoth task, and it just blows my mind that this guy took it on with not only that level of tenacity, but with an equal assurance of its ultimate efficacy.
But his approach isn't very popular in today's world of "Deep Learning" and the use of virtual neural networks. In that world you need only show the system what you want it to be able to recognize, and it will create the pattern of weighted neural triggering that performs the desired recognition.
All very impressive of course, especially if you can get it to recognize complex spatial relationships like the pieces on a Go game board, but there is an underlying problem with this nevertheless. In such an approach an already knowledgeable interlocutor must tell the system that this pattern, and not some other, is the one that ought to be learned, while the approach itself provides no framework whatsoever as to why one pattern matters more than another.
Juxtapose that with the idea that, if you had been taught a sufficient set of life rules, even with limited faculties of mental processing, you could be expected to have a decent chance of deducing, if not what the next thing to be known ought to be, at least what you ought to do next. Or that, once something complex was recognized, you could deduce what might be expected of it.
I use the term "experience association" a lot in talking about consciousness, learning, and meaning. In a sense you could say that common sense is our legacy of experience association, or at least a good part of it. I always intended the term to go deeper than rules per se, because we are a very complex system, collecting a wide range of sensory input along with the simultaneous effects of lower brain chemistry that accompany all of that external data. The mix becomes (all puns intended) a heady brew of viscerally felt understanding, with so much of it quite beyond direct cognition.
And if that weren't enough to really complicate things, we also introduce the whole process of objectification itself: the separation of inner from outer, the establishment of a self with a specific point of view, and the concept of language. Whereupon abstraction comes into the picture and, suddenly, we can contemplate actions and consequences backwards and forwards in time. And we can also start banking the relationships found out through that school of hard knocks, in various forms of linear description.
Nature used neural networks for pattern recognition for a reason, of course: it works. But that, by itself, would never have been enough; not if you truly want to end up with actual meaning processors who could dream, who could make intuitive leaps, and yet still not stick their fingers in the fan or put sensitive body parts on very hot things. All trivial-seeming things to avoid, until you realize that obviously bad outcomes aren't so easy to see several abstracted layers away, in steps that soon become forced upon you by the contextual logic of the situation.
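To make that contrast concrete, here is a toy sketch of my own (it has nothing to do with Cyc's actual machinery, or with any real deep learning stack): a bare recognizer that can only attach a label to what it has been shown, next to a handful of hand-authored rules that can chain from that label to what you ought to do about it.

```python
# A toy illustration of the two approaches (my own sketch, not Cyc
# or any real deep-learning system).

# The "learned" side: an opaque mapping from observation to label.
# It says nothing about why the label matters or what to do next.
learned_labels = {
    "red glowing coil": "stove burner",
    "spinning blades": "fan",
}

# The "common sense" side: explicit, hand-authored implications
# that can be chained together.
rules = [
    ("stove burner", "is very hot"),
    ("is very hot", "do not touch it"),
    ("fan", "has fast-moving blades"),
    ("has fast-moving blades", "keep your fingers away"),
]

def recognize(observation):
    """Stand-in for pattern recognition: here, just a lookup."""
    return learned_labels.get(observation)

def deduce(fact):
    """Forward-chain through the rules to see what follows from a fact."""
    conclusions, frontier = [], [fact]
    while frontier:
        current = frontier.pop()
        for antecedent, consequent in rules:
            if antecedent == current and consequent not in conclusions:
                conclusions.append(consequent)
                frontier.append(consequent)
    return conclusions

label = recognize("red glowing coil")
print(label)          # stove burner -- recognition alone stops here
print(deduce(label))  # ['is very hot', 'do not touch it'] -- the rules go further
```

The recognizer on its own stops at naming the thing; only the rules carry you to the "therefore don't touch it" a few steps downstream, which is the part Lenat spent decades trying to hand-build at scale.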
Perhaps the bottom line here is that both types of approach are needed for an AI we could actually use, and be able to live with, because, and I can't stress this too much, we'd want these entities capable of understanding the same social and moral constraints that we do. After all, one of the most basic common sense rules is: do unto others as you would have them do unto you. Which is also, in a sense, the height of enlightened self-interest.