...Another aspect of the tradeoffs inherent in "layers of abstraction." This is what the article linked here, from Nautilus, makes me think of.
That we cannot understand how various sorts of deep learning, or neural network, systems get from input to desired output is an interesting parallel to our own inability to fully understand not only why we do the things we do, but what all that wetware is doing to allow us to "think" in the first place.
I have always been suspicious of efforts to create true "AI," even granting that it might be possible. We might then create self-aware entities, gifted, or burdened, depending on your point of view, with a point of view, and thus the singular notion of identity from which ego might arise, yet without the fear and chemical/emotional responses we inherited, via lower-brain functionality, from our genetic ancestors. That prospect presents behavioral possibilities one could speculate on endlessly.
And that doesn't even begin to consider the other process we go through as we progress from infant to adult: extensive experience association, making complex connections between inner feelings and outer stimuli, all while in the necessary care of other sentients who have already made their own such connections and pass subtle influences into the mix. We generally think of this as learning, but it is a great deal more than that; it literally structures a tremendously flexible neural system into the specific patterns needed to handle the physical, linguistic, and cultural matrix we are born into.
It has always seemed to me that, whether true "sentient" AI is possible or not, it is absolutely not something we should be pursuing now, especially given our own ignorance of ourselves. What would be reasonable, as well as useful, are context-specific reasoning engines: procedurally based contingency-processing systems that could help us de-emphasize the need for skill specialization, so that we can rid ourselves of an operating system that has taken specialization to an absurd extreme. That much should be self-evident when you consider that the insane need for competitive advantage is a significant part of what is driving the push for sentient "AI" in the first place. (There is also, of course, the insane idea that such new "sentience" is the natural next step in our evolution, which to me is utter nonsense, especially when you consider that we haven't yet even begun to explore what human ability might extend to if it were allowed to develop in an environment of truly thoughtful, loving structure, as opposed to the fear-based economics of scarcity we have now.)
Is Artificial Intelligence Permanently Inscrutable?
Despite new biology-like tools, some insist interpretation is impossible.