A. M. Turing's "Computing Machinery and Intelligence" (Mind, 1950) has become something of a classic in the computer-metaphor/cognitive-science tradition. It begins:
I propose to consider the question "Can machines think?" This should begin with definitions of the meaning of the terms 'machine' and 'think'. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words 'machine' and 'think' are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd.
Although challenging, and at times pretty damn frustrating, I would like to suggest that it may be neither dangerous nor absurd to pay heed to "the normal use of words" when addressing questions such as "Can machines think?". That's because, taking my cue from Freud and Jung, it is at least possible that we know more than we realize -- that we know without knowing that we know, if you like -- or at least some of us do. But if so, then it is also conceivable that the meaning conveyed by some of our words might well reveal the cumulative wisdom of untold numbers of bright people, whose subconscious refinements in how those words were used were seized upon (also subconsciously, more often than not) by their fellow humans.
In other words, I am suggesting that, in much the manner in which improvements in physical fit are transmitted by DNA, so too, it is possible that improvements in cognitive fit are transmitted by language. Call it a belief in semantic evolution, if you like. But the point here is that much of the shaping, supplanting and transmission of our ideas may have transpired at subconscious levels where cognitive fit has been able to mature uncontaminated by the whims of fickle human concerns. That would be important because, if true, then there is reason to suspect that language may well contain invaluable information which can only be retrieved by paying especially close attention (including statistical surveys and the like) to the semantic implications of "the normal use of words".
At the risk of sounding reactionary, I would also like to suggest that we might refer to this activity as "philosophizing", but only to the extent that we are careful to distinguish it from other activity by the same name in which, taking its cue from the likes of Turing, no doubt, the meanings of words (e.g., mind, psychology, intelligence, rationality, meaning, empirical, etc.) are frequently tailored to fit the theories rather than the other way around. But, irrespective of what we decide to call it, since we will no longer be able to assume we have a license to tamper with our semantic ecology, our words will have to be chosen with care. Accordingly, I propose we replace Turing's question with another, which is closely related to it but which I believe is more likely to take us where we want to go: "What is rationality?"
The answer, I believe, is simple: one or more of our common sense impressions about rationality must be mistaken. Apparently, not all of our memes (Dawkins) have been selected for their representational merit alone. But if some of these intuitions are not to be trusted, which ones? Why, those we have most reason to view with suspicion, of course. And so the major common sense intuition I have chosen to discount in my own approach is one which I presume we would have irrespective of its cognitive or epistemic credentials. Accordingly, I will abandon the assumption that striving to fulfill a natural inclination is synonymous with striving to attain a rational objective. And that includes the most central natural inclinations of all: personal well-being, personal survival, and the like. In short, I will abandon the assumption that rationality necessarily entails self-interest.
While some might wonder how such a position can be taken seriously, it turns out that, although deeply entrenched, the self-interest assumption is far from universal. For example, what might be referred to as moralist-like points of view can be found in the ethical theories of Bentham, Mill, Kant, and others. While it is true that these are theories about morality, they nonetheless usually carry the implication that there are occasions when it is rational to maximize the well-being of others at one's own expense.
There are also a surprising number of defectors among the lay public. For example, in my own experiments with friends and acquaintances, I have found that roughly 50% come down on the side of a moralist-like point of view when contemplating a dilemma in which one must choose between saving oneself and, let us say, two defenseless children. This is downright astonishing when you consider our evolutionary heritage and the effect it should have had on our thinking about the rationality of looking out for ourselves:
...Special concern for one's own future would be selected by evolution. Animals without such concern would be more likely to die before passing on their genes. Such concern would remain as a natural fact even if we decided that it was not justified. By thinking hard about the arguments, we might be able briefly to stun this natural concern. But it would soon revive. ...The fact that we have this attitude cannot therefore be a reason for thinking it justified. Whether it is justified [e.g., rational?] is an open question, waiting to be answered (Parfit, p. 308).
As for the remaining skeptics, I suspect someone who has begun to have reservations about the self-interest assumption might respond to them with something like this:
The problem, my dear Brutus, is not in our rationality, but in ourselves. Since we are fairly egotistical creatures, we have mistakenly presumed ourselves to be far more rational than we really are. We have set ourselves up as some sort of standard of rationality (the so-called rational animal) rather than seeing ourselves as guideposts in the direction rationality is heading. And, although our intuition that a rational creature must be self-interested is very strong, it consists predominantly of an emotional component. It is the result of millions of years of psychical shaping to make us efficient at perpetuating our genetic blueprints, which may or may not be a rational objective in any clearly understood sense of the term. In short, when it comes to the rationality of self-interest, you probably shouldn't trust the untutored opinion of a naturally selected organism any further than you can throw him.
Consider next the efficiency assumption. Nathanson, for example, has pointed out (1) that "the means/end conception is probably the most widely held view of rationality among philosophers and social scientists" (p. 81) and (2) that "according to [this conception], the essence of rationality is efficiency" (p. 91). Notice that this is not the weaker thesis that efficiency is a manifestation of rationality, and can therefore be employed as something of a barometer of its presence -- a thesis which, even in this weaker form, would probably be subject to exceptions. Rather, efficiency is the very essence of rationality. Rationality therefore has little to do with minds, consciousness, or even a capacity to reason, as we all so naively assumed. Taken literally (and combined with the self-interest assumption), I would imagine it implies that, should insects one day inherit the earth, it will not have been that inefficient survivor, man, who was the rational animal, but rather the termites and ladybugs.
Fortunately, our collective intuitions (if not academic good sense) seem to see through this sort of foolishness. For example, in my experiments with friends and acquaintances, I have yet to find a single individual who thinks that 'being rational' constitutes something of a synonym for 'being efficient' (which, it seems to me, is what efficiency's being the essence of rationality would entail). In fact, most were downright surprised to learn that that is what most professional students of the subject actually believe.
Nor have I been able to find any philosophical arguments capable of convincing me that the ordinary meaning we have come to attach to this term is simply wrong. In my review of Hume, whose passage about the subservient role of reason to passion is often cited in support of the means/end notion (e.g., Nathanson, pp. 29, 81), I was astonished to find, not a justification, but a refutation. Hume assigned reason a subservient role not because he identified rationality with efficiency, but because he saw reason as a strictly cognitive concept. In Hume's own words, "actions may be laudable or blamable; but they cannot be reasonable" (Treatise, III.1.1).
Whether or not one happens to agree is irrelevant. What is relevant is that the fount of wisdom often cited in support of an otherwise implausible notion would have dismissed it as a meaningless concept. And so, in the absence of any support from either ordinary intuition or philosophical argument, my proposed methodology leaves me with no choice but to abandon the efficiency assumption -- in Hume's words, "to commit it to the flames: for it can contain nothing but sophistry and illusion".
Nowadays the logic assumption -- the assumption that 'being rational' is essentially a matter of 'being logical' -- remains alive and well as the implied foundation of the various computational models of mind which are currently all the rage. It is also at least tacitly adopted by many of those working in strategic logic (e.g., normative decision theory, game theory, etc.) who frequently construe themselves as engaged in the study of paradoxes in rationality (Newcomb's problem, the prisoner's dilemma, etc.; see the sketch below). However, to the extent that ordinary intuitions do, indeed, convey relevant information, there is reason to assume that the endeavor to reduce rationality to logic will not succeed. This is because, in everyday usage, we seem to employ rationality terminology and logic terminology in somewhat different contexts. Although these notions do appear to be related in some manner, they by no means appear to be interchangeable or synonymous.
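To make the "paradox" talk concrete, here is a minimal sketch of the prisoner's dilemma in Python (my own illustration; the payoff numbers are made-up assumptions, not anything from the original text). Each player's dominant strategy is to defect, yet two such players end up worse off than if both had cooperated -- which is precisely why those who equate rationality with strategic logic find themselves studying "paradoxes in rationality":

```python
# Prisoner's dilemma payoffs as (my_years, your_years); lower is better.
# The specific numbers are illustrative assumptions, not canonical values.
PAYOFFS = {
    ("cooperate", "cooperate"): (1, 1),
    ("cooperate", "defect"):    (3, 0),
    ("defect",    "cooperate"): (0, 3),
    ("defect",    "defect"):    (2, 2),
}

def best_reply(opponent_move):
    """Choose the move minimizing my sentence against a fixed opponent move."""
    return min(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# Defection is the best reply to either move -- a dominant strategy...
assert best_reply("cooperate") == "defect"
assert best_reply("defect") == "defect"

# ...yet mutual defection (2, 2) is worse for both players than mutual
# cooperation (1, 1). "Rational" play produces the inferior outcome.
print(PAYOFFS[("defect", "defect")], "vs.", PAYOFFS[("cooperate", "cooperate")])
```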
Nor is ordinary intuition the only basis for skepticism on the issue, at least not if you assume that 'being rational' has something to do with reasoning. Among the various specialists expressing reservations about any strong linkage between logic and reasoning we find Russell, Schiller, M. R. Cohen, Nagel, Henle, Harman, and Brown, to name a few. There is also clinical data suggestive of a similar conclusion (e.g., Kahneman & Tversky; Nisbett & Borgida; Slovic, Fischhoff & Lichtenstein), in which ordinary test subjects have repeatedly demonstrated an appalling obliviousness to established rules of inference when engaged in problem-solving experiments. Interestingly enough, the clinicians conducting these experiments have reported the results as having bleak implications for human rationality (as reported by L. J. Cohen) rather than for computational models of mind, which gives you some idea of just how deeply entrenched the computational hysteria has become. In any event, it seems to me that the tendency to construe 'being rational' as just another way of talking about 'being logical' is another one of those assumptions that a rationologist need not take very seriously.
To me, this Humean proposal -- that reasoning is, at bottom, nothing more than comparing -- looks promising. Not only does it have the merit of reducing reasoning to a single type of operation, but it is also compatible with an evolutionary scenario. It doesn't take a great deal of imagination to see how simple conditioning (enumerative induction), which I would define as the cognition of obvious similarity and difference (paradigmatically, the cognition of a consistently recurring event sequence), could have evolved into reasoning, which I would define as the cognition of abstruse similarity and difference in remote regions of experience. If reasoning is simply a matter of comparing, then conditioning and reasoning can be viewed as similar operations distinguished mostly in terms of degree, and it becomes fairly easy to understand how one could have evolved from the other.
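A toy sketch (my own construction, not the author's) of how a single comparing operation might cover both cases: measured crudely as feature overlap, the similarity cognized in conditioning is obvious and near-total, while the similarity reasoning trades in is faint and drawn between remote domains -- the same operation, differing only in degree:

```python
def similarity(a: set, b: set) -> float:
    """Crude comparing operation: feature overlap between two sets (Jaccard)."""
    return len(a & b) / len(a | b)

# Conditioning: cognizing obvious similarity in a recurring event sequence.
episode_1 = {"bell", "food"}
episode_2 = {"bell", "food"}
print(similarity(episode_1, episode_2))   # 1.0 -- obvious, total overlap

# Reasoning: cognizing abstruse similarity across remote regions of
# experience, e.g. the classic analogy between water flow and electricity.
water       = {"flow", "pressure", "pipe", "wet"}
electricity = {"flow", "pressure", "wire", "charge"}
print(similarity(water, electricity))     # ~0.33 -- faint, but real
```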
In spite of these virtues, the Humean view seems to have gone largely ignored. While this is probably due, in part, to Hume's own lack of consistency (Norton), I suspect it is more because of its incompatibility with the physical reductionism (physical romanticism is more like it) which seems to have taken psychology and philosophy by storm. Obviously, if your primary objective is to reduce mind to matter via the computational thesis, you're hardly going to stay up nights trying to get Hume's simple notion to fly. This is because it is only with respect to certain so-called "types" of reasoning (e.g., deductive and probabilistic) that logic seems to have made much headway. And so you'll want to preserve the sanctity of these various "types" of reasoning, and interpret them in a far more literal sense than ordinary intuition requires. For example, you're going to want to convince yourself that the reasoning is actually under way while the logical operations are being performed. And, of course, you're also going to want to blur the distinction between the notion of reasoning and such notions as argument, implication, entailment, etc. (see Thagard for clarification).
In contrast, if Hume is to be taken seriously, then deductive and probabilistic reasoning are just reasoning about deduction and probability, with the distinction between them and, let us say, geography reasoning, or tennis reasoning, being simply a matter of the generality of the cognitions involved. Of course, this means that Holmes' brilliant deductions (according to Watson) will have to be reinterpreted as brilliant reasoning (occurring during so-called Aha! experiences, I presume) with respect to the discovery or inventive application of deductive rules. And the actual application of those rules will have to be interpreted as a manifestation of reasoning which has already transpired (e.g., the given in a syllogism). This would be analogous to the behavior of a conditioned organism in which, let us say, the learned completion of a maze is not the conditioning itself, but rather a manifestation of conditioning which has already taken place. As such, we would then construe the application of a deductive or probabilistic rule as more akin to remembering than to reasoning. And it seems to me that this might also account for the widely held opinion, going all the way back to Plato's Meno, that deduction only serves to clarify or strengthen what is already known.
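On this Humean reading, applying an already-given rule is mechanical -- closer to remembering than to reasoning. A small sketch (my own illustration; the lookup-table framing is an assumption, not the author's) of what that contrast looks like:

```python
# The rule base is the residue of reasoning already done: someone once
# cognized the relevant similarity and deposited it here as a rule.
RULES = {
    ("All men are mortal", "Socrates is a man"): "Socrates is mortal",
}

def apply_rule(premises: tuple) -> str:
    """Applying a deductive rule: no fresh insight, just retrieval --
    akin to a conditioned organism running an already-learned maze."""
    return RULES.get(premises, "no stored rule applies")

print(apply_rule(("All men are mortal", "Socrates is a man")))
```

The reasoning proper, on this view, happened earlier, at the Aha! moment when the rule was discovered or inventively applied; running the syllogism merely manifests it.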
For me, the bottom line is probably that Hume was inconsiderate enough to suggest that reasoning might be something no one wants it to be. Since the ability to cognize similarity and difference (reasoning, in my opinion) makes us particularly adept in the cognition of order taking the form of rules, principles and the like (logic), it should come as no surprise that in the stampede to understand reasoning itself, many, including the computationalists, have allowed themselves the luxury of assuming that it can be comprehended in cognitively convenient terms. But the simple elegance we find in nature is rarely so vulgar and immediately obvious.
In contrast, an operation which might be variously described as comparing, cognizing abstruse similarity and difference, or drawing analogy suggests a procedure more intuitive than discursive in nature, which perhaps explains why we have it on fairly good authority that there is neither an inductive logic nor a logic of similarity (Harman and Quine, contra Carnap and Jeffrey). Nor, as an old semantic evolutionist, am I likely to let it go unnoticed that, if Hume was right, then we have reason to suppose that the heart and soul of reasoning -- and, I would presume, of a rational system -- is likely to be ana-logical in nature.
If taken seriously, this ocular metaphor -- 'being rational' as a kind of 'seeing' -- suggests that 'being rational' is not so much about doing something as it is about being something (e.g., conscious, perceptive, lucid, aware, etc.) -- that it refers more to a psychical state of affairs than to an operation of some sort. For example, although a trifle overstated, no doubt, I suspect that most folks would say that it has something to do with 'being in touch with reality'. And since I have already abandoned the assumption that 'being rational' entails an appropriate strategy or sequence of operations, I might as well go all the way and abandon the assumption that it entails that one must be engaged in reasoning. Accordingly, and in keeping with my own intuitions resulting from decades of employing the expression and observing how others employ it, I will assume that it simply entails 'being in an "appropriate" state of mind'.
While I regard this as a central point -- that 'being rational' is a matter of degree -- it is not surprising that it is not a view maintained with any consistency by other theorists, since absolutist ascriptions are the norm in everyday discourse (e.g., Matilda is being irrational). It also means that I will have no choice but to assume that either (a) absolutist ascriptions sanctioned by normal usage are in error, or (b) they are a form of shorthand for referring to the extent to which someone is or is not measuring up to the norm. This is somewhat similar to the position taken by other theorists (Nathanson, Brown, etc.), although perhaps a little less patronizing.
Since my proposed methodology is based upon treating "the normal use of words" as more or less sacrosanct, my reinterpretation of this facet of normal usage to suit my own purposes is not something I take lightly. However, in all fairness, I should point out that my most basic contention is that, among our common sense impressions, those pertaining to estimations of the quantity and quality of human rationality are among the least trustworthy (The Self-Interest Assumption). And so my questioning of the validity of absolutist ascriptions when taken literally can be construed as simply an extension of one of my most primitive assumptions.
This also allows for a fairly straightforward means of drawing a distinction between computation and re-cognition (another way of talking about being able to "see") as the difference between the capacity to run a program (e.g., a television in freeze frame) and the capacity to "see" a program (e.g., a whole screenful of dots conceived as a picture). Or, if you prefer, we could think of it as the difference between a phenomenally fast (ten million dots/sec) but phenomenally stupid (limited to differentiating between on and off) switch, such as an electron gun or a microprocessor, and a compresent switch, such as your typical couch potato -- a contrast the sketch below tries to capture.
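A rough sketch of that contrast (my framing and toy data, not the author's): the fast, stupid switch can only visit dots one at a time and differentiate on from off, while "seeing" takes the frame whole and assigns it a meaning:

```python
frame = [
    "0110",
    "1001",
    "1001",
    "0110",
]

def scan(frame):
    """Computation: serially differentiate on from off, one dot at a time."""
    return sum(dot == "1" for row in frame for dot in row)

def see(frame):
    """Re-cognition: take the whole screenful at once as a single gestalt."""
    gestalts = {("0110", "1001", "1001", "0110"): "a ring"}
    return gestalts.get(tuple(frame), "just dots")

print(scan(frame))  # 8 -- the electron gun's view: a count of on-dots
print(see(frame))   # 'a ring' -- the couch potato's view: a picture
```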
Nor would you have to be a genius to figure out that, while the former would be determined by its program, the latter might become capable of evaluating and determining some of its programs (e.g., by switching the channel), at least once it "sees" them. And since this might include some of the programs which determine one's own behavior (e.g., fear, anger, sex, etc.), the analogy suggests an inverse correlation between 'being rational' and 'being determined' and therefore, one would presume, between 'being rational' and 'being a machine'. (Oops! HAL won't be a computer?)
It's also interesting to note that what I appear to be talking about with respect to "seeing" a whole screenful of dots is just another way of talking about "seeing" what the whole screenful of dots "means". This suggests that a holistic theory of rationality might well become a theory of semantics at its higher levels of speculation. And if we were to expand the periphery to include all the screenfuls of dots occurring in several hours of time, some of that meaning might also include 'being able to "see" what is going on' with respect to the artful mastery of a David Lean or a Stanley Kubrick, or perhaps the exotic wit of a Monty Python. If so, then we might also expect ourselves to stumble on various types of rationality (e.g., valuative, aesthetic, humouric, etc.) more involved with content than with form.
Summarizing:
I have abandoned the assumption that 'being rational' entails an appropriate strategy or sequence of operations, including:
- the self-interest assumption (that rationality necessarily entails self-interest);
- the efficiency assumption (that efficiency is the very essence of rationality);
- the logic assumption (that 'being rational' is just another way of talking about 'being logical');
- the assumption that 'being rational' entails being engaged in reasoning.
I have adopted the assumption that 'being rational' is involved with holism, which I have taken to include:
- 'being in an "appropriate" state of mind' -- a psychical state of affairs rather than an operation;
- 'being in touch with reality', i.e., 'being able to "see"' (re-cognition as opposed to computation);
- being a matter of degree, with absolutist ascriptions read as shorthand for the extent to which someone measures up to the norm;
- the prospect of types of rationality (e.g., valuative, aesthetic, humouric) more involved with content than with form.