RP. I can give examples of non-computational things, but it’s not obvious why they are non-computational. The one I like the best is the tiling problem. You have these shapes made out of squares glued together, things called polyominoes. You’re given a collection of these, a finite number of different sorts but an unlimited number of each sort, and you are asked: can you use these shapes to cover the entire plane without gaps or overlaps?
That is an example of a non-computational problem. That is to say, there is no computer programme which will answer yes or no for any given set of tiles: no programme into which you can feed the information about the tiles and ask it, will they tile the plane or not. Although the answer "no" can be computational, the answer "yes" is not computational. That is to say, there is no way of being sure, for an arbitrary set of polyomino shapes, that they will tile the plane. It's quite a subtle piece of mathematics to show that this is a non-computational problem. There is no computer programme whatsoever that can make this decision for any possible given set of tiles.
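(To see why the "no" half is computational: by a compactness argument, a set of tiles fails to tile the plane precisely when it already fails to cover some finite n x n window. The sketch below is my own illustration, not anything from the interview; the function names and the encoding of polyominoes as sets of (x, y) cells are assumptions made for the example.)

```python
# A minimal sketch, assuming the compactness fact above: a tile set
# fails to tile the plane iff it already fails on some finite n x n
# window (tiles may overhang the window's edge).  Polyominoes are
# encoded as sets of (x, y) cells; orientations are fixed for brevity.

from itertools import count

def placements(tile, cell):
    """All translations of `tile` that cover the window cell `cell`."""
    cx, cy = cell
    for ox, oy in tile:                      # anchor each tile cell on `cell`
        yield frozenset((x - ox + cx, y - oy + cy) for (x, y) in tile)

def can_cover_window(tiles, n, covered=frozenset()):
    """Backtracking search: can copies of `tiles` cover every cell of
    the n x n window without overlapping anywhere?"""
    empty = next(((x, y) for y in range(n) for x in range(n)
                  if (x, y) not in covered), None)
    if empty is None:
        return True                          # window fully covered
    for tile in tiles:
        for placed in placements(tile, empty):
            if not (placed & covered):       # no overlap, inside or out
                if can_cover_window(tiles, n, covered | placed):
                    return True
    return False                             # `empty` cannot be covered

def refute_tiling(tiles):
    """Halts exactly when `tiles` cannot tile the plane; on tile sets
    that do tile it, no failing window is ever found."""
    for n in count(1):
        if not can_cover_window(tiles, n):
            return n                         # witness: an uncoverable window
```

Calling refute_tiling on a set containing just the 1 x 2 domino would loop forever, since dominoes do tile the plane; the procedure only ever halts on the "no" instances, which is exactly the asymmetry RP describes.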
CC. Okay, so there are some non-computational problems. Now your argument is that there are some of these problems which cannot be solved computationally, but which can be solved by bringing in human consciousness. How does this argument go?
RP. You have to phrase the problem in the right way. In the case of the tiling problem, the way it works is like this. Suppose you were given a computer programme which would answer correctly "yes" or "no" to a set of tiles, but sometimes will not come to any conclusion at all (you could certainly have computers like that). Then, knowing that the computer doesn't give you the wrong answer, you could from the computer's construction build a set of tiles which the computer will get stuck on: a set of tiles which you know will tile the plane, and which you also know this computer will never be able to answer correctly.
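(The construction RP sketches here is a diagonal argument. Its cleaner halting-problem form is given below; the tiling version is obtained because tile sets can encode Turing-machine runs, as in Berger's proof that the domino problem is undecidable. The symbols A, C_q and k are my notation, not the interview's.)

```latex
% Diagonal sketch in halting-problem form.  Let $C_0, C_1, C_2, \dots$
% enumerate all programs, and suppose $A$ is sound: whenever $A(q, n)$
% halts, it has correctly certified that $C_q(n)$ does not halt.
\begin{align*}
  &\text{Pick } k \text{ with } C_k(n) = A(n, n) \text{ for every } n
     \text{ (possible, since } A \text{ is itself a program).}\\
  &\text{If } A(k, k) \text{ halted, soundness would make }
     C_k(k) = A(k, k) \text{ non-halting: a contradiction.}\\
  &\text{So } C_k(k) \text{ never halts. We can see that; } A,
     \text{ being sound, can never certify it.}
\end{align*}
```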
CC. But when you say "you know", what is it about me as a conscious human being that enables me to know? What particular faculty is it that I'm applying?
RP. It’s the understanding really. It’s basically Gödel’s theorem, but you have to know that the computer doesn’t give the wrong answers. Gödel’s theorem is telling you, if you like, that the procedures we’re prepared to accept as proof cannot ever be limited to specific computational procedures: they’re never computationally limited because once you can phrase the rules of the computational system then you can see how to transcend them. So provided that you trust those rules, so that you’re prepared to count following the rules as constituting a proof, then you can see how to get methods of proof which are outside it. Our mathematical understanding, or mathematical intuition as Gödel would put it, is something outside computation.
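(Formally, the step from trusting the rules to transcending them is Gödel's second incompleteness theorem; the compressed statement below, in my wording, is for any consistent, effectively axiomatized system F strong enough for arithmetic.)

```latex
% If F is consistent, effectively axiomatized, and strong enough for
% arithmetic, then F cannot prove its own consistency:
\[
  F \;\nvdash\; \mathrm{Con}(F).
\]
% But anyone who accepts F's rules as proof-producing thereby accepts
% Con(F): their honest standards of proof already exceed F itself.
```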
CC. Is the open-endedness of this procedure crucial here? I take Gödel’s theorem as producing some proposition that you can’t prove, but always with a bigger system in which you can prove it. But you don’t just stay with the bigger system - isn’t the crucial point here that you can always keep going out to yet another proposition that you can’t prove?
RP. It is like that, that’s right. And in fact in our understanding we’re using this kind of procedure all the time; it’s not limited to sophisticated mathematical logic. Imagine you have some procedure at which you work away, and you think you’ve got the rules right; and then you get worried and think maybe it’s not doing everything … so you step back and you look at what it is that you’ve put into that system, and you think about what are the implications of the kinds of rules you’ve been using. This sort of reasoning, stepping back from the system, is doing the same thing as Gödel’s theorem, and you always have to bring your awareness in to do that.
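(The open-endedness CC is pressing on can be written out directly: each act of stepping back yields a strictly stronger system, and the ascent has no computable end; this is the territory of Turing's ordinal logics. The notation below is mine.)

```latex
% Iterating "step back and trust what you were doing":
\begin{align*}
  F_0     &= F, \\
  F_{n+1} &= F_n + \mathrm{Con}(F_n), \\
  F_\omega &= \bigcup_{n < \omega} F_n,
     \quad \text{and so on through the ordinals,}
\end{align*}
% with each stage proving the Gödel sentence of the one before,
% and no single computable system capturing the whole ascent.
```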
CC. So the insight you’re talking about is self-reflexive: looking at your own thinking in order to enlarge it?
RP. I think that’s essentially right; that’s basically what you’re doing.
An interview with Professor Roger Penrose, published in Network, May 2000. Interviewer: Chris Clarke
I think Penrose has made an important discovery here: in order for machines to achieve strong AI surpassing human intelligence, they would first have to be conscious beings, because without awareness it is virtually impossible to analyse self-referential statements (emphasizing RP's own words). Machines without this fundamental quality would therefore never be able to transcend Gödel's statements, and would forever remain below the level of human intelligence. It is our nature of self-awareness which helps us in our mathematical understanding and allows us to transcend Gödel's statements.
But self-awareness, or the procedure of mathematical understanding itself, might be something you cannot simulate on a computer; that is, the human thought process, or awareness itself, might be non-computable (Penrose argues this at great length in his Shadows of the Mind). In that case the proponents of strong AI are doomed: without conscious machines there can be no strong AI, and without a non-computable machine simulating the human thought process there can be no conscious activity.