
Publisher: Bantam Books (1982)

Aaron Sloman's book The Computer Revolution in Philosophy


Mind is a pattern perceived by a mind.


The point has been to suggest to the reader the potential delicacy, intricacy, and self-involvedness of a system that responds to external stimuli and to features at various levels of its own internal configuration. It is well-nigh impossible to disentangle such a system's response to the outside world from its own self-involved response, for the tiniest external perturbation will trigger a myriad tiny interconnected events, and a cascade will ensue. If you think of this as the system's 'perception' of input, then clearly its own state is also 'perceived' in a similar way. Self-perception cannot be disentangled from perception.


In brief, then, a representational system is built on categories; it sifts incoming data into those categories, when necessary refining or enlarging its network of internal categories; its representations or 'symbols' interact among themselves according to their own internal logic; this logic, although it runs without ever consulting the external world, nevertheless creates a faithful enough model of the way the world works that it manages to keep the symbols pretty much 'in phase' with the world they are supposed to be mirroring. A television is thus not a representational system, as it indiscriminately throws dots onto its screen without regard to what kinds of things they represent, and the patterns on the screen do not have autonomy -- they are just passive copies of things 'out there.' By contrast, a computer program that can 'look' at a scene and tell you what is in that scene comes closer to being a representational system. The most advanced artificial intelligence work on computer vision hasn't yet cracked that nut. A program that could look at a scene and tell you not only what kinds of things are in the scene, but also what probably caused that scene and what will probably ensue in it -- that is what we mean by a representational system. In this sense, is a country a representational system? Does a country have a symbol level? We'll leave this one for you to ponder on.


That's the way it is, with conscious systems. They perceive themselves on the symbol level only, and have no awareness of the lower levels, such as the signal levels.


I prefer to give teams of a sufficiently high level the name of 'symbols.' Mind you, this sense of the word has some significant differences from the usual sense. My 'symbols' are active subsystems of a complex system, and they are composed of lower-level active subsystems.... They are therefore quite different from passive symbols, external to the system, such as letters of the alphabet or musical notes, which sit there immobile waiting for an active system to process them.


ANTEATER: Ant colonies have been subjected to the rigors of evolution for billions of years. A few mechanisms were selected for, and most were selected against. The end result was a set of mechanisms which make ant colonies work as we have been describing. If you could watch the whole process in a movie -- running a billion or so times faster than life, of course -- the emergence of various mechanisms would be seen as natural responses to external pressures, just as bubbles in boiling water are natural responses to an external heat source. I don't suppose you see 'meaning' and 'purpose' in the bubbles in boiling water -- or do you?

CRAB: No, but --

ANTEATER: Now that's my point. No matter how big a bubble is, it owes its existence to processes on the molecular level, and you can forget about any 'higher-level laws.' The same goes for ant colonies and their teams. By looking at things from the vast perspective of evolution, you can drain the whole colony of meaning and purpose. They become superfluous notions.


The fundamental principle involved is called negative feedback, of which there are various different forms. In general what happens is this. The 'purpose machine,' the machine or thing that behaves as if it had a conscious purpose, is equipped with some kind of measuring device which measures the discrepancy between the current state of things and the 'desired' state. It is built in such a way that the larger this discrepancy is, the harder the machine works. In this way the machine will automatically tend to reduce the discrepancy -- this is why it is called negative feedback -- and it may actually come to rest if the 'desired' state is reached.
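The mechanism described above can be sketched in a few lines of code: a loop that measures the discrepancy and applies a correction proportional to it. This is a minimal illustration of the idea, not anything from the text; the names (`run_purpose_machine`, `gain`, `target`) and the proportional-control scheme are our own assumptions.

```python
# Minimal sketch of negative feedback: a "purpose machine" whose
# corrective effort is proportional to the discrepancy between the
# current state and the 'desired' state. All names are illustrative.

def run_purpose_machine(state, target, gain=0.5, steps=20):
    """Repeatedly measure the discrepancy and work against it.

    The larger the discrepancy, the harder the machine 'works'
    (the bigger the correction), so the discrepancy shrinks toward
    zero and the machine comes to rest near its 'desired' state.
    """
    for _ in range(steps):
        discrepancy = target - state   # the measuring device
        state += gain * discrepancy    # effort proportional to the error
    return state

final = run_purpose_machine(state=0.0, target=100.0)
print(round(final, 3))  # very close to the target: the error has decayed
```

Each pass multiplies the remaining discrepancy by (1 − gain), so after 20 steps only (0.5)^20 of the original error is left.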


It is no good taking the right number of atoms and shaking them together with some external energy till they happen to fall into the right pattern, and out drops Adam! You may make a molecule consisting of a few dozen atoms like that, but a man consists of over a thousand million million million million atoms. To try to make a man, you would have to work at your biochemical cocktail-shaker for a period so long that the entire age of the universe would seem like an eye-blink, and even then you would not succeed. This is where Darwin's theory, in its most general form, comes to the rescue. Darwin's theory takes over from where the story of the slow building up of molecules leaves off.
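The scale of the claim above can be checked with a back-of-envelope calculation. Only the 10^27 atom count comes from the text; the other figures below (a trillion "shakes" per second, the conventional ~13.8-billion-year age of the universe) are illustrative assumptions.

```python
import math

# Back-of-envelope for the cocktail-shaker argument: even shaking
# absurdly fast for the whole age of the universe samples a vanishing
# fraction of the possible arrangements of a man's atoms.

ATOMS_IN_A_MAN = 10**27        # "a thousand million million million million"
AGE_OF_UNIVERSE_S = 4.3e17     # roughly 13.8 billion years, in seconds
SHAKES_PER_SECOND = 1e12       # a generous trillion shakes per second

total_shakes = AGE_OF_UNIVERSE_S * SHAKES_PER_SECOND  # ~10^30 attempts, total

# Just the orderings of 10**27 distinguishable atoms number (10**27)!.
# Stirling's approximation, ln n! ~= n ln n - n, gives its size in digits:
n = ATOMS_IN_A_MAN
log10_arrangements = (n * math.log(n) - n) / math.log(10)

print(f"total shakes ever: about 10^{math.log10(total_shakes):.0f}")
print(f"arrangements: a number with about 10^{math.log10(log10_arrangements):.1f} digits")
```

The contrast is the point: every shake ever possible is a 30-digit number, while the arrangement count has more digits than there are atoms in the man.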


The view that machines cannot give rise to surprises is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it. It is a very useful assumption under many circumstances, but one too easily forgets that it is false. A natural consequence of doing so is that one then assumes that there is no virtue in the mere working out of consequences from data and general principles.


The physicist Paul Davies, writing on just this topic in his recent book Other Worlds, says: “our consciousness weaves a route at random along the ever-branching evolutionary pathway of the cosmos, so it is we, rather than God, who are playing dice.”


there is a world -- a branch of the universal wave function -- in which you didn’t make that stupid mistake you now regret so much. Aren’t you jealous? But how can you be jealous of your self? Besides which, there’s another world in which you made yet stupider mistakes, and are jealous of this very you, here and now in this world!


We are now in a position to integrate the perspectives of three large fields: psychology, biology, and physics. By combining the positions of Sagan, Crick, and Wigner as spokesmen for the various outlooks, we get a picture of the whole that is quite unexpected. First, the human mind, including consciousness and reflective thought, can be explained by activities of the central nervous system, which, in turn, can be reduced to the biological structure and function of that physiological system. Second, biological phenomena at all levels can be totally understood in terms of atomic physics, that is, through the action and interaction of the component atoms of carbon, nitrogen, oxygen, and so forth. Third, and last, atomic physics, which is now understood most fully by means of quantum mechanics, must be formulated with the mind as a primitive component of the system. We have thus, in separate steps, gone around an epistemological circle – from the mind, back to the mind. The results of this chain of reasoning will probably lend more aid and comfort to Eastern mystics than to neurophysiologists and molecular biologists; nevertheless, the closed loop follows from a straightforward combination of the explanatory processes of recognized experts in the three separate sciences. Since individuals seldom work with more than one of these paradigms, the general problem has received little attention. If we reject this epistemological circularity, we are left with two opposing camps: a physics with a claim to completeness because it describes all of nature, and a psychology that is all-embracing because it deals with the mind, our only source of knowledge of the world. Given the problems in both of these views, it is perhaps well to return to the circle and give it more sympathetic consideration. If it deprives us of firm absolutes, at least it encompasses the mind-body problem and provides a framework within which individual disciplines can communicate.
The closing of the circle provides the best possible approach for psychological


The views of a large number of contemporary physical scientists are summed up in the essay “Remarks on the Mind-Body Question” written by Nobel laureate Eugene Wigner. Wigner begins by pointing out that most physical scientists have returned to the recognition that thought – meaning the mind – is primary. He goes on to state: “It was not possible to formulate the laws of quantum physics in a fully consistent way without reference to the consciousness.” And he concludes by noting how remarkable it is that the scientific study of the world led to the content of consciousness as an ultimate reality.


The problem faced by quantum theorists can best be seen in the famous paradox, “Who killed Schrödinger’s cat?” In a hypothetical formulation, a kitten is put in a closed box with a jar of poison and a triphammer poised to smash the jar. The hammer is activated by a counter that records random events, such as radioactive decay. The experiment lasts just long enough for there to be a probability of one-half that the hammer will be released. Quantum mechanics represents the system mathematically by the sum of a live-cat and a dead-cat function, each with a probability of one-half. The question is whether the act of looking (the measurement) kills or saves the cat, since before the experimenter looks in the box both solutions are equally likely. This lighthearted example reflects a deep conceptual difficulty. In more formal terms, a complex system can only be described by using a probability distribution that relates the possible outcomes of an experiment. In order to decide among the various alternatives, a measurement is required. This measurement is what constitutes an event, as distinguished from the probability, which is a mathematical abstraction. However, the only simple and consistent description physicists were able to assign to a measurement involved an observer’s becoming aware of the result. Thus the physical event and the content of the human mind were inseparable.
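The quantitative setup behind the paradox can be sketched briefly. For a single decaying nucleus with decay rate λ, the probability of a decay by time t is 1 − e^(−λt), so running the experiment for exactly one half-life (t = ln 2 / λ) gives the fifty-fifty split the passage describes. The rate value below is an arbitrary assumption for illustration.

```python
import math
import random

# Illustrative sketch (not from the text): a single radioactive nucleus
# decays with rate `lam`, so P(decay by time t) = 1 - exp(-lam * t).
# The experiment runs just long enough that this probability is 1/2.

lam = 0.1                      # assumed decay rate, events per second
t_half = math.log(2) / lam     # duration giving P(decay) = 1/2

# Before the box is opened, the formalism weights the live-cat and
# dead-cat outcomes equally; a frequentist check over many trials:
random.seed(0)
runs = 100_000
dead = sum(random.random() < 0.5 for _ in range(runs))

print(f"duration for P = 1/2: {t_half:.3f} s")
print(f"dead-cat fraction over {runs} trials: {dead / runs:.3f}")
```

The simulation, of course, only reproduces the classical statistics; the paradox is precisely that before the measurement the formalism describes a single box, not an ensemble of runs.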


Werner Heisenberg, one of the founders of the new physics, became deeply involved in the issues of philosophy and humanism. In Philosophical Problems of Quantum Physics, he wrote of physicists having to renounce thoughts of an objective time scale common to all observers, and of events in time and space that are independent of our ability to observe them. Heisenberg stressed that the laws of nature no longer deal with elementary particles, but with our knowledge of these particles – that is, with the contents of our minds. Erwin Schrödinger, the man who formulated the fundamental equation of quantum mechanics, wrote an extraordinary little book in 1958 called Mind and Matter. In this series of essays, he moved from the results of the new physics to a rather mystical view of the universe that he identified with the “perennial philosophy” of Aldous Huxley. Schrödinger was the first of the quantum theoreticians to express sympathy with the Upanishads and eastern philosophical thought. A growing body of literature now embodies this perspective, including two popular works, The Tao of Physics by Fritjof Capra and The Dancing Wu Li Masters by Gary Zukav.


Toward the close of the last century, physics presented a very ordered picture of the world, in which events unfolded in characteristic, regular ways, following Newton’s equations in mechanics and Maxwell’s in electricity. These processes moved inexorably, independent of the scientist, who was simply a spectator. Many physicists considered their subject essentially complete. Starting with the introduction of the theory of relativity by Albert Einstein in 1905, this neat picture was unceremoniously upset. The new theory postulated that observers in different systems moving with respect to each other would perceive the world differently. The observer thus became involved in establishing physical reality.


The way I see it, consciousness has got to come from a precise pattern of organization – one that we haven’t yet figured out how to describe in any detailed way. But I believe we will gradually come to understand it. In my view consciousness requires a certain way of mirroring the external universe internally, and the ability to respond to that external reality on the basis of the internally represented model. And then in addition, what’s really crucial for a conscious machine is that it should incorporate a well-developed and flexible self-model.


There seems to be no alternative to accepting some sort of incomprehensible quality in existence. Take your pick. We all fluctuate delicately between a subjective and objective view of the world, and this quandary is central to human nature.


This ability to map oneself onto others seems to be the exclusive property of members of higher species. (It is the central topic of Thomas Nagel’s article, “What Is It Like to Be a Bat?” reprinted in selection 24.) One begins by making partial mappings: “I have feet, you have feet; I have hands, you have hands; hmm . . .” These partial mappings then can induce a total mapping. Pretty soon, I conclude from your having a head that I too have one, although I can’t see mine. But this stepping outside myself is a gigantic and, in some ways, self-denying step. It contradicts much direct knowledge about myself. It is like Harding’s two distinct types of the verb “to see” – when applied to myself it is quite another thing than when it applies to you. The power of this distinction gets overcome, however, by the sheer weight of too many mappings all the time, establishing without doubt my membership in a class that I formulated originally without regard to myself. So logic overrides intuition. Just as we could come to believe that our Earth can be round – as is the alien moon – without people falling off, so we finally come to believe that the solipsistic view is nutty. Only a powerful vision such as Harding’s Himalayan experience can return us to that primordial sense of self and otherness, which is at the root of the problems of consciousness, soul, and self.


As a child I formulated the abstraction “human being” by seeing things outside of me that had something in common – appearance, behaviour, and so on. That this particular class could then “fold back” on me and engulf me – this realization necessarily comes at a later stage of cognitive development, and must be quite a shocking experience, although probably most of us do not remember it happening. The truly amazing step, though, is the conjunction of the two premises. By the time we’ve developed the mental power to formulate them both, we also have developed a respect for the compelling power of simple logic. But the sudden conjunction of these two premises slaps us in the face unexpectedly. It is an ugly, brutal blow that sends us reeling – probably for days, weeks, months. Actually, for years – for our whole lives! But somehow we suppress the conflict and turn it in other directions.


Consider, for example, the striking discovery by the psycholinguists James Lackner and Merrill Garrett of what might be called an unconscious channel of sentence comprehension. In dichotic listening tests, subjects listen through earphones to two different channels and are instructed to attend to just one channel. Typically they can paraphrase or report with accuracy what they have heard through the attended channel, but usually they can say little about what was going on concomitantly in the unattended channel. Thus, if the unattended channel carries a spoken sentence, the subjects typically can report they heard a voice, or even a male or female voice. Perhaps they even have a conviction about whether the voice was speaking in their native tongue, but they cannot report what was said. In Lackner and Garrett’s experiments subjects heard ambiguous sentences in the attended channel, such as “He put out the lantern to signal the attack.” Simultaneously, in the unattended channel one group of subjects received a sentence that suggested the interpretation of the sentence in the attended channel (e.g., “He extinguished the lantern”), while another group had a neutral or irrelevant sentence as input. The former group could not report what was presented through the unattended channel, but they favoured the suggested reading of the ambiguous sentences significantly more than the control group did.


The new way of thinking was supported by a crutch: one could cling to at least a pale version of the Lockean creed by imagining that these “unconscious” thoughts, desires, and schemes belonged to other selves within the psyche. Just as I can keep my schemes secret from you, my id can keep secrets from my ego. By splitting the subject into many subjects, one could preserve the axiom that every mental state must be someone’s conscious mental state, and explain the inaccessibility of some of these states to their putative owners by postulating other interior owners for them. This move was usefully obscured in the mists of jargon, so that the weird question of whether it was like anything to be a superego, for instance, could be kept at bay.


We have come to accept without the slightest twinge of incomprehension a host of claims to the effect that sophisticated hypothesis testing, memory searching, inference – in short, information processing – occurs within us though it is entirely inaccessible to introspection. It is not repressed unconscious activity of the sort Freud uncovered, activity driven out of the sight of consciousness, but just mental activity that is somehow beneath or beyond the ken of consciousness altogether. Freud claimed that his theories and clinical observations gave him the authority to overrule the sincere denials of his patients about what was going on in their minds. Similarly, the cognitive psychologist marshals experimental evidence, models, and theories to show that people are engaged in surprisingly sophisticated reasoning processes of which they can give no introspective account at all. Not only are minds accessible to outsiders; some mental activities are more accessible to outsiders than to the very “owners” of those minds.


If a cleverly designed robot could (seem to) tell us of its inner life (could utter all the appropriate noises in the appropriate contexts), would we be right to admit it to the charmed circle? We might be, but how could we ever tell we were not being fooled? Here the question seems to be: is that special inner light really turned on, or is there nothing but darkness inside? And this question looks unanswerable. So perhaps we have taken a misstep already.


Creatures react appropriately to events within the scope of their senses; they recognize things, avoid painful experiences, learn, plan, and solve problems. They exhibit intelligence. But putting the matter this way might be held to prejudge the issue. Talking of their “senses” or of “painful” circumstances, for instance, suggests that we have already settled the issue of consciousness -- for note that had we described a robot in those terms, the polemical intent of the choice of words would have been obvious (and resisted by many). How do creatures differ from robots, real or imagined? By being organically and biologically similar to us – and we are the paradigmatic conscious creatures.


From the inside, our own consciousness seems obvious and pervasive: we know that much goes on around us and even inside our bodies of which we are entirely unaware or unconscious, but nothing could be more intimately known to us than those things of which we are, individually, conscious. Those things of which I am conscious, and the ways in which I am conscious of them, determine what it is like to be me. I know in a way no other could know what it is like to be me. From the inside, consciousness seems to be an all-or-nothing phenomenon – an inner light that is either on or off. We grant that we are sometimes drowsy or inattentive, or asleep, and on occasion we even enjoy abnormally heightened consciousness, but when we are conscious, that we are conscious is not a fact that admits of degrees. There is a perspective, then, from which consciousness seems to be a feature that sunders the universe into two strikingly different kinds of things, those that have it and those that don’t. Those that have it are subjects, beings to whom things can be one way or another, beings it is like something to be. It is not like anything at all to be a brick or a pocket calculator or an apple. These things have insides, but not the right sort of insides – no inner life, no point of view. It is certainly like something to be me (something I know “from the inside”) and almost certainly like something to be you (for you have told me, most convincingly, that it is the same with you), and probably like something to be a dog or a dolphin (if only they could tell us!) and maybe even like something to be a spider.


Our ordinary concept of consciousness seems to be anchored to two separable sets of considerations that can be captured roughly by the phrases “from the inside” and “from the outside.”


The Tale of the Three Storytelling Machines, from The Cyberiad by Stanislaw Lem, translated by Michael Kandel.


Two sages were standing on a bridge over a stream. One said to the other, “I wish I were a fish. They are so happy!” The second replied, “How do you know whether fish are happy or not? You’re not a fish.” The first said, “But you’re not me, so how do you know whether I know how fish feel?”