THE MIND-BODY PROBLEM

The mind-body problem is best thought of not as a single problem but as a set of problems that attach to different views of the mind. For physicalists, the mind-body problem is the problem of explaining how conscious experience can be nothing other than brain activity—what has been called the “hard problem of consciousness.”

Representation and the mind

One aspect of mind that needs explaining is how the mind is able to represent things. Consider the fact that I can think about all kinds of different things—about this textbook I am trying to write, about how I would like some Indian food for lunch, about my dog Charlie, about how I wish I were running in the mountains right now. Medieval philosophers referred to the mind as having intentionality—the curious property of “aboutness”—that is, the property of being about something other than itself. In a certain sense, the mind seems to function somewhat like a mirror: it reflects things other than itself. But unlike a mirror, whose reflected images are not inherently meaningful, minds seem to have what contemporary philosopher John Searle calls “original intentionality.” The mirror, in contrast, has only “derived intentionality”—its image is meaningful only because something else gives it meaning or sees it as meaningful. Words, too, have only derived intentionality. Take the word “tree.” “Tree” refers to trees, of course, but it is not as if the physical marks on a page inherently refer to trees. Rather, human beings who speak English use the word “tree” to refer to trees, just as Spanish speakers use the word “árbol” to refer to trees. In neither case do the physical marks on the page (or the sound waves in the air, in the case of spoken words) inherently mean anything. Those physical phenomena are meaningful only because a human mind represents them as meaningful. Although we speak of the word itself as carrying meaning, the word has only derived intentionality. The human mind, in contrast, has original intentionality because the mind is the ultimate source of meaningful representations. We can explain the meaningfulness of words in terms of thoughts, but then how do we explain the meaningfulness of the thoughts themselves? This is what philosophers are trying to explain when they investigate the representational aspect of mind.
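
To make the contrast vivid, here is a toy Python sketch (my own illustration, not drawn from Searle or any particular theory). The string “tree” is just a sequence of characters; any reference it achieves comes from a mapping stipulated from outside, in this case by the programmer who builds the lookup table.

```python
# Toy illustration of derived intentionality: the strings "tree" and
# "árbol" are just character sequences; nothing in the marks themselves
# connects them to trees.

class Referent:
    """A stand-in for a thing in the world."""
    def __init__(self, description):
        self.description = description

# Two different marks, one referent: the connection exists only because
# we (the users of the symbols) stipulate it.
oak = Referent("a large woody plant")
lexicon = {
    "tree": oak,    # the English convention
    "árbol": oak,   # the Spanish convention
}

# The program can "look up" what a word stands for, but the lookup table
# represents nothing on its own; its meaning is assigned from outside.
print(lexicon["tree"].description)   # a large woody plant
print(lexicon["árbol"].description)  # a large woody plant
```

The lookup table plays the role of the mirror: whatever aboutness it has is borrowed from the minds that set it up and read it.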

There are many different attempts to explain what mental representation is, but we will only cursorily consider some fairly rudimentary ideas as a way of building up to a famous thought experiment that challenges a whole range of physicalist accounts of mental representation. Let’s start with a fairly simple, straightforward idea—that of mental images. Perhaps what my mind does when it represents my dog Charlie is create a mental image of Charlie. This account seems to fit our first-person experience, at least in certain cases, since many people would describe their thoughts in terms of images in their mind. But whatever a mental image is, it cannot be like a physical image, because physical images require interpretation in terms of something else. When I’m representing my dog Charlie, it can’t be that my thoughts about Charlie just are some kind of image or picture of Charlie in my head, because that picture would require a mind to interpret it! If the mental image is supposed to be the thing that has “original intentionality,” and yet our explanation requires some further thing with original intentionality in order to interpret it, then the mental image isn’t really the thing that has original intentionality after all; whatever interprets the image is. A problem looms here that threatens to drive the mental image view of mental representation into incoherence: the object in the world is represented by a mental image, but that mental image itself requires interpretation in terms of something else. It would be problematic for the mental image proponent to respond that there is some other inner “understander” that interprets the mental image. For how does this inner understander understand? By virtue of another mental image in its “head”? Such a view would create what philosophers call an infinite regress: a series of explanations that require further explanations and thus ultimately explain nothing. The philosopher Daniel Dennett sees explanations of this sort as committing what he calls “the homuncular fallacy,” after the Latin term homunculus, which means “little man.” The problem is that if we explain the nature of the mind by, in essence, positing another inner mind, then we haven’t really explained anything, for that inner mind itself needs to be explained. It should be obvious why positing a further inner mind inside the first inner mind enters us into an infinite regress, and why this is fatal to any successful explanation of the phenomenon in question—mental representation, or intentionality.
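
The structure of the regress can be made concrete in code. The following deliberately broken Python sketch is my own illustration, not anything from Dennett; the names MentalImage and Homunculus are hypothetical, chosen only to mirror the argument. It posits an inner interpreter that itself understands by consulting another inner image, so interpretation never bottoms out.

```python
# A deliberately broken sketch of the homuncular regress: if an image is
# understood only by an inner "little man" who himself works by consulting
# another image, the explanation never bottoms out.

class MentalImage:
    def __init__(self, depicts):
        self.depicts = depicts

class Homunculus:
    """The posited inner understander of a mental image."""
    def understand(self, image):
        # How does the homunculus understand the image? On this view, by
        # having its own inner image, which requires its own inner
        # homunculus, and so on forever.
        inner_image = MentalImage(image.depicts)
        return Homunculus().understand(inner_image)

try:
    Homunculus().understand(MentalImage("my dog Charlie"))
except RecursionError:
    # Python's recursion limit stands in for the philosophical point:
    # no level of the regress ever actually does the understanding.
    print("Infinite regress: understanding is never explained.")
```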

Within the cognitive sciences, one popular way of understanding the nature of human thought is to see the mind as something like a computer. A computer is a device that takes certain inputs (representations), transforms those inputs in accordance with certain rules (the program), and then produces a certain output (behavior). The idea is that the computer metaphor gives us a satisfying way of explaining what human thought and reasoning are, and does so in a way that is compatible with physicalism. The idea, popular in philosophy and cognitive science since the 1970s, is that there is a kind of language of thought which brain states instantiate and which is similar to a natural language in that it possesses both a grammar and a semantics, except that the representations in the language of thought have original intentionality, whereas the representations in natural languages (like English and Spanish) have only derived intentionality. One central question in the philosophy of mind concerns how these “words” in the language of thought get their meaning.

We have seen above that these representations can’t just be mental images, and there’s a further reason why mental images don’t work for the computer metaphor of the mind: mental images don’t have syntax the way language does. You can’t create meaningful sentences by putting together a series of pictures, because there are no rules for how those pictures create a holistic meaning out of the parts. For example, how could a picture (or series of pictures) represent the thought Leslie wants to go out in the rain but not without an umbrella? How do I represent someone’s desire with a picture? Or how do I represent the negation of something with only a picture? True, there are devices that we can use within pictures, such as the “no” symbol on no-smoking signs. But such symbols are no longer functioning as purely pictorial representations, which represent in virtue of their similarity to what they depict. There is no pictorial similarity between the purely logical notion “not” and any picture we could draw. So whatever the words of the language of thought (that is, mental representations) are, their meaning cannot derive from a pictorial similarity to what they represent. We need some other account.

Philosophers have given many such accounts, but most of them attempt to understand mental representation in terms of a causal relationship between objects in the world and representations. That is, whatever types of objects cause (or would cause) certain brain states to “light up,” so to speak, are what those brain states represent. So if there’s a particular brain state that lights up any time I see (or think about) a dog, then dogs are what that brain state represents. Delving into the nuances of contemporary theories of representation is beyond the scope of this chapter, but the important point is that the language of thought idea these theories support is supposed to be compatible with physicalism as well as with the computer analogy for explaining the mind. On this account, the “words” of the language of thought have original intentionality, and thinking is just the manipulation of these “words” using certain syntactic rules (the “program”) that are wired into the brain (either innately or through learning) and that are akin to the grammar of a natural language.
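
To see how structured symbols can do what pictures cannot, here is a minimal Python sketch of the compositional idea behind the language of thought. Everything in it is a toy of my own devising; the operators Want, Not, and And are illustrative, not taken from any actual cognitive-science model. The point is only that desire and negation live in the structure of the expression, and the meaning of the whole is built by rule from the meanings of its parts.

```python
# A minimal sketch of compositional, language-like representation:
# atomic symbols combine by syntactic rules into structured "thoughts."

from dataclasses import dataclass

@dataclass
class Sym:
    """An atomic symbol, a 'word' in the toy language of thought."""
    name: str
    def render(self): return self.name

@dataclass
class Not:
    """Negation: a logical operator with no pictorial counterpart."""
    operand: object
    def render(self): return f"NOT({self.operand.render()})"

@dataclass
class And:
    left: object
    right: object
    def render(self): return f"AND({self.left.render()}, {self.right.render()})"

@dataclass
class Want:
    """An attitude operator: represents an agent's desire toward a content."""
    agent: object
    content: object
    def render(self): return f"WANT({self.agent.render()}, {self.content.render()})"

# "Leslie wants to go out in the rain but not without an umbrella."
thought = Want(
    Sym("Leslie"),
    And(Sym("go-out-in-rain"), Not(Sym("go-out-without-umbrella"))),
)
print(thought.render())
# WANT(Leslie, AND(go-out-in-rain, NOT(go-out-without-umbrella)))
```

The structure, not any resemblance to rain or umbrellas, is what carries the desire and the negation; that is exactly the syntactic resource the paragraph above says pictures lack.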
