From: "Phil Roberts, Jr."Subject: Re: Q. on Penrose arg: why new *physics*? Date: Wed, 06 Aug 1997 14:59:19 -0400 Message: 33e8c987 JRStern wrote: > > On Tue, 05 Aug 1997 22:27:40 -0400, "Phil Roberts, Jr." > wrote: > >JRStern wrote: > >> ... A truckload of > >> compatible (and all controversial) theories do not constitute the > >> empirical evidence you claimed, that got us going. > > > >Ahmen. But then most of us are able to distinguish between > >evidence (e.g., feelings of worthlessness) and theory (e.g., > >the contention that they are "caused" by too much rationality). > > The alternative theory being that they are caused by just enough > rationality, of a possibly mistaken nature. > Just enough to result in suicide being the second leading cause of death among teenagers who have not yet reached their reproductive prime. What's your idea of "just enough"? On the other hand, you are at least hinting at something which I have actually worked out in detail, i.e., that nature has selected for one type of rationality (i.e., cognitive) which is adaptive but, unfortunately, has begun to pick up _too much_ of another type (valuative) which, in excessive amounts (beyond what is necessary for prudence) is maladaptive. In other words, morality and emotional instability (both maladaptive) are construed as valuative by products of the evolution of "strategic" rationality. Or, in slightly more detail, here's that theory of rationality I was talking about, and which I know you are just dying to see. (actually not, since it is obvious you have been hoping against hope to lightly dismiss me as a crackpot). The Moralization Mechanism Why We Turned Out Like Captain Kirk Instead of Mr. Spock Phil Roberts, Jr. One of the slipperiest terms in the philosophical lexicon, 'rationality' is many things to many people (Plantinga). In !The Autonomy of Rationality!, I have suggested that, with a little reading between the lines, Lucas (1961) can reasonably be construed as arguing that rationality can not be mechanized and, conversely, that rational creatures are not machines. I have then endeavored to substantiate the Lucas thesis by pointing to a number of anomalies in human nature which, I believe, can best be explained in terms of the Lucas implied potential for a rational species to transcend naturally selected formalisms. In so doing, I have relied on two theories, Peter Singer's explanation of anomalous altruism, and my own account of the anomality of emotional need and disorder as presented in another of my papers, !Rational Negativism!. Assuming that both Singer and myself are on the right track, and assuming that both theories can indeed be construed as supporting Lucas, a reassessment of the notion of rationality would seem to be in order. Accordingly, taking my cue from Lucas' heavy reliance on the metaphors of vision and "standing outside the system", I will begin by assuming that rationality is holistic and will therefore appear as forever more encompassing, i.e., open-ended. This also implies that, whatever rationality turns out to be, it probably won't be conceptually constrainable within the context of that which itself is not holistic, such as concrete rules, systems, objectives, definitions, etc. Constrainable or not, I will have to arrive at enough of a definition to provide some means of identifying what it is I assume I will be reassessing, even if its unconstrainable nature will eventually render that definition obsolete. (Far out! 
I'm about to propose a theory which predicts its own demise.)  And, in this regard, I believe the definition itself should conform to at least one constraint -- it should, as much as possible, reflect the collective wisdom of our common sense impressions as conveyed by the ordinary use of language.  Of course, trying to balance this constraint with the holistic assumption is not without its problems and, indeed, there are two notable departures which I will do my best to justify.

!The Self-Interest Assumption!:  In point of fact, the meaning of the term 'rationality' has actually received a considerable amount of attention, but with little in the way of a satisfactory consensus to show for it.  And the reason for this, I believe, is fairly simple.  One of our common sense impressions about rationality conveyed by the everyday use of language !must be mistaken!, i.e., the widely held impression that rationality necessarily entails self-concern or self-interest, and can therefore be constrained within the context of mother nature's fixed objective.

While my abandonment of the self-interest assumption may seem a bit too drastic for some, it turns out that, although deeply entrenched, this assumption is far from universal.  For example, more altruistically oriented views of rationality can be found in the ethical theories of Bentham, Mill, Kant, etc.  While it is true that these are theories about morality, they nonetheless generally carry !the implication! that it is !rational! to maximize the well-being of others at one's own expense.  There is also a surprising number of defectors among the lay public.  For example, in my own experiments with friends and acquaintances, I have found that roughly 50% come down on the side of self-sacrifice when contemplating various moral dilemmas.!1!  This is downright astonishing when you consider our evolutionary heritage and the effect it should have had on our thinking about the rationality of looking out for numero uno.

    ...Special concern for one's own future would be selected by
    evolution.  Animals without such concern would be more likely to
    die before passing on their genes.  Such concern would remain as a
    natural fact even if we decided that it was not justified.  By
    thinking hard about the arguments, we might be able briefly to
    stun this natural concern.  But it would soon revive.  ...The fact
    that we have this attitude cannot therefore be a reason for
    thinking it justified.  Whether it is justified [e.g., rational?]
    is an open question, waiting to be answered (Parfit).

In short, when it comes to the rationality of self-interest, !you probably shouldn't trust the untutored opinion of a naturally selected organism any further than you can throw him!.

!The Irrationality Assumption!:  In everyday discourse, it is not uncommon to question another person's rationality by referring to him or her as irrational.  This, in turn, implies that those who are not irrational are, in fact, rational.  However, if, as I have read Lucas as implying, rationality is holistic, and therefore open-ended, then any reference to someone's rationality must always be in relative terms, e.g., X is more or less rational than Y.!2!  Accordingly, I have no choice but to assume that our everyday practice of referring to rationality in absolutist terminology is either in error, or must be reinterpreted as shorthand for the claim that such an individual's rationality is or is not sufficient to be considered roughly normal.
!Rationality!:  As for the remainder of our common sense impressions, I believe that any serious definition, holistic or otherwise, should (a) depict rationality as exhibiting some fairly direct relationship to reasoning (from the Latin 'ratio') and (b) convey some sense of the qualitative feature which is at least subconsciously implied (if not always consciously understood) in our everyday assessments, absolutist or otherwise.

With respect to (a):  I believe it can be adequately accomplished simply by treating the term 'rationality' as referring to 'the product of reasoning', particularly if taken to include the reasoning of others, and therefore taken to include the ideas, points of view, etc. of one's culture.  However, since air pollution and lava lamps are also products of reasoning, this aspect of my definition will have to be narrowed a bit, in which case 'the psychical product of reasoning' might better conform with our collective impressions.  As for reasoning itself, you probably can't do much better than Hume.  For those unfamiliar, it was his opinion that all reasoning is simply comparing (e.g., !Treatise!, I.3.2 and !Enquiry!, I.3.2).

With respect to (b):  Rather than trying to hit a moving target with a concrete bullet, I will once again defer to Lucas, and his frequent references to the visual metaphor.  I will therefore assume that rationality correlates with the extent to which it enables one to "see" (figuratively, of course).  Tying both (a) and (b) together, the evaluational assessment of someone's rationality would then be construed as referring to 'the extent to which the psychical product of reasoning serves to maximize one's mental ocularity'.  I assume this also implies that 'being rational' (relatively) is simply a matter of 'being able to "see" what is going on' (relatively speaking) or, less metaphorically, 'being (relatively) objective'.

!The Map!:  Since I am assuming that rationality is a !psychical! product, I will want to have it comprised of the same stuff minds are made of, namely, beliefs and values.  And since I will be referring to something with holistic properties, I will be heavily dependent on a visualizable representation, in this case a follow-the-dots diagram (or a rough approximation thereof) in which the lines will represent beliefs and the darkness of the lines will represent value.  With this conceptual tool in hand, the two facets of rationality (beliefs and values) will be represented as follows:

Cognitive component:

1. Represented by the extent to which the lines have been correctly connected and therefore !correspond! with a correctly completed diagram having no determinable boundaries.

2. Represents the extent to which one's beliefs serve to maximize one's mental ocularity, or the extent to which one's beliefs correspond with reality (whatever that means), or the extent to which one's beliefs constitute knowledge.!3!

3. In the idiom of 'being rational', this component correlates with the extent to which it endows one with the potential for being cognitively objective or 'being smart'.

For simplicity's sake, the examples of cognitive maps (below) will be artificially restricted to representations of categorical equivalents, i.e., members of a class.

Valuative component:

1. Represented by the extent to which the lines (which are representations of categorical equivalencies) are equal in darkness.
2. Represents the extent to which one's values serve to maximize the scope or focus of one's mental ocularity, or the extent to which one values equivalent "objects" equivalently.

3. In the idiom of 'being rational', this component correlates with the extent to which it endows one with the potential for being valuatively objective or 'being good'.!4!

!The Representation of Self-Interest!:  Diagram A (below) corresponds with a relatively correct and complete diagram and is intended to represent someone's relatively correct and complete understanding of the nature of a typical human being.  Diagram B is intended to represent someone's less correct and complete set of beliefs on the same subject.  As such, in relation to B, Diagram A represents a greater potential for 'being able to "see" what is going on' on those occasions when this region of the map has to be consulted.  It therefore represents an increase in rationality relative to the rationality represented in B.

Diagram C is comprised of a number of configurations like the one in Diagram A, but which have been connected to each other by dots, in this case asterisks, intended to represent associative junctures.  In this regard, Diagram C is intended to represent someone's relatively correct and complete understanding of human beings in general.  The configuration in the middle, labeled with an 'X', represents the individual's understanding of his or her own interests and concerns (taken to include those of immediate kin), and the surrounding configurations the individual's understanding of the interests and concerns of others.  To the extent we are the product of natural selection, Diagram C is also a representation of the sort of cognitive profile we would expect to find in ourselves.  That is because it is reasonable to assume that smart organisms survive better than dumb ones, and therefore that nature has been selecting to maximize the potential for 'being smart'.!5!

[Diagrams A, B and C: ASCII follow-the-dots figures, garbled in transmission.  Diagram A shows a small, correctly connected configuration of dots and lines; Diagram B shows the same configuration with fewer correct connections; Diagram C shows a grid of A-like configurations joined to one another by asterisks (associative junctures), with the configuration labeled 'X' at its center.]

On the valuative side, both common sense and the theory of kin selection lead us to expect organisms which are ruthlessly selfish (Hamilton, Dawkins, Campbell).  That is to say, we should expect naturally selected organisms to place paramount importance on their own concerns and interests, and none whatsoever on the interests and concerns of others.!6!  However, it is important to understand that, with respect to this valuative profile, I am not suggesting it is the one we actually have, nor am I suggesting that it is one we "ought" to have as a matter of general principle, but merely that it is the one we "ought" to have if evolutionary theory is correct, as I have already explained in !The Autonomy of Rationality!.  Since value is represented by the darkness of the lines, this would be represented in Diagram C by having configuration 'X' comprised of lines which are as dark as possible, and the other configurations comprised of lines which are as light as possible.  In vernacular terms, this amounts to the conclusion that nature has been selecting to maximize the potential for 'being bad'.
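For the programmatically inclined, the two components can be given a toy formalization along the following lines.  This is strictly an illustration of the representational scheme just described; the particular data structures, scoring rules and numbers are assumptions of the illustration, not anything argued for in the paper.

    # Toy sketch of the follow-the-dots model (illustrative only).  Beliefs are
    # edges in a diagram; 'darkness' is a per-configuration value weight in [0, 1].

    def cognitive_rationality(drawn_edges, true_edges):
        # Fraction of the correctly completed diagram actually drawn correctly:
        # the 'being smart' component.
        if not true_edges:
            return 0.0
        return len(drawn_edges & true_edges) / len(true_edges)

    def valuative_rationality(darkness):
        # How equally categorical equivalents are valued: the 'being good'
        # component.  1.0 = all configurations valued alike; 0.0 = value
        # concentrated almost entirely in one configuration.
        mean = sum(darkness.values()) / len(darkness)
        if mean == 0:
            return 0.0
        spread = sum(abs(v - mean) for v in darkness.values()) / len(darkness)
        return max(0.0, 1.0 - spread / mean)

    # The profile natural selection 'ought' to produce on the present account:
    # a nearly complete map (smart) with virtually all value in 'X' (bad).
    true_edges  = {("self", "kin"), ("self", "others"), ("kin", "others")}
    drawn_edges = {("self", "kin"), ("self", "others"), ("kin", "others")}
    darkness    = {"X": 1.0, "other_1": 0.01, "other_2": 0.01}

    print(cognitive_rationality(drawn_edges, true_edges))  # 1.0 ('smart')
    print(valuative_rationality(darkness))                 # 0.0 ('bad')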
!Strategic Rationality!:  Combining the conclusions and representations for both components, we would conclude that nature has been selecting for a maximal rationality with respect to the cognitive component, and a minimal rationality, or perhaps more correctly a maximal !ir!rationality, with respect to the valuative component.  In vernacular terms, nature has been selecting for organisms which are smart and bad.  When combined with yet another conclusion, namely, that before you can have a value you have to have a belief, and therefore that value resides on belief, you arrive at the conclusion that nature has been selecting for a maximal valuative irrationality (of sorts) superimposed on a maximal cognitive rationality -- a pattern one might reasonably expect on any occasion in which rationality is constrained within the context of a fixed objective, and therefore in which success is measured not in terms of 'being able to "see"', but rather in terms of 'being efficient' (e.g., the means-end theory, natural selection, etc.).

!The Moralization Mechanism! or !On Why We Turned Out Like Captain Kirk Instead of Mr. Spock!:  While I have endeavored to employ the model to represent both the valuative and cognitive profiles we might expect to see if evolutionary theory is correct, it should be apparent that there is a considerable disparity between reality and prediction.  Not only are we more altruistic than our theory predicts (Mother Teresa, Albert Schweitzer, war heroes), but we are also a species racked with emotional instability, most of which I believe can reasonably be construed as resulting from a deficiency in self-worth.  As such, we can represent these two anomalies by simply assuming there is less value in the 'X' configuration than in the predicted profile, and more value in the peripheral configurations than in the predicted profile.  In terms of our representation, this means that configuration 'X' will have lines which are no longer as dark as possible, and the peripheral configurations will have lines that are no longer as light as possible.  Since, in the holistic theory I have proposed, valuative rationality is correlated with the extent to which categorical equivalencies are valued equally, and since the proposed adjustments to the predicted profile amount to adjustments in the direction of greater valuative equality or valuative objectivity, the two anomalies, when taken together, can be construed as amounting to an increase in valuative rationality.

As for how this unpredicted increase came about, rather than a physical explanation I prefer a psychodynamic one, in which we simply assume that the selected increase in cognitive rationality has resulted in a leakage or a linkage, and produced a rationalizing effect on the valuative !ir!rationality being selected for.  In terms of the model, we would think of this as a psychodynamic mechanism in which a massive increase in associative junctures (the dots) results in a reduced resistance to valuative flow from regions of high concentration to regions of low concentration, in this case in an outward direction from configuration 'X' toward the outlying configurations.  In real terms, I am simply referring to the fact that an increase in cognitive rationality results in a more holistic or comprehensive platform from which valuative assessments can be made or, if you prefer, an increase in cognitive objectivity causes an increase in valuative objectivity beyond what is being designed and constrained for.
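The psychodynamic mechanism just described can likewise be caricatured in code, continuing the toy model above.  Here the increase in associative junctures is collapsed into a single 'juncture density' parameter governing how readily value leaks outward from 'X'; every name and number below is an assumption of the illustration, not a claim of the theory.

    # Toy 'valuative flow': value leaks from configuration 'X' toward the
    # peripheral configurations, with the leak rate tied to the density of
    # associative junctures (a crude stand-in for increased cognitive
    # rationality lowering the 'resistance' to outward flow).

    def moralization_step(darkness, juncture_density, rate=0.1):
        # One step of outward flow; higher juncture_density means less
        # resistance, hence a larger leak from 'X' per step.
        periphery = [k for k in darkness if k != "X"]
        leak = juncture_density * rate * darkness["X"]
        updated = dict(darkness)
        updated["X"] -= leak
        for k in periphery:
            updated[k] += leak / len(periphery)   # spread evenly outward
        return updated

    darkness = {"X": 1.0, "other_1": 0.01, "other_2": 0.01}
    for _ in range(20):
        darkness = moralization_step(darkness, juncture_density=0.8)

    # After repeated steps the profile departs from the predicted one in both
    # of the anomalous directions at once: more value in the periphery
    # ('morality') and less value in 'X' (a deficiency in self-worth).
    print({k: round(v, 2) for k, v in darkness.items()})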
!Epistemic Virtues!:

1. The synthesis of natural science and the humanities, in that it is possible to "explain" most previously anomalous human behavior (self-worth related) in naturalistic terms (as the by-product of the evolution of rationality).

2. The synthesis of the twin anomalies of morality and emotional instability into a single "thing", i.e., valuative rationality.

3. Introduction to the crude beginnings of an honest-to-god _science_ of the mind, at least to the extent Kuhn has gotten some of it right, in that the theory addresses a _psychical_ anomaly (feelings of worthlessness).

4. The absence of rationality paradoxes, such as Newcomb's Problem and the prisoner's dilemma, in that strategic rationalities are construed as hybrids of rationality and irrationality.

5. The ability to eliminate rational irrationality (a la Parfit, etc.) for the same reason.  In other words, with the single exception of the theory above, _all_ theories of rationality on the books are self-defeating.

6. Resolution of the 2000-year-old dilemma of justifying morality, i.e., by bringing the mountain to Mohammed, i.e., morality = rationality.

---------- Footnotes ----------

!1!  One of my favorites: Imagine yourself on the deck of the Titanic just as it's about to take the big plunge, and that there are two little kids to whom you bear no particular relation other than as fellow members of the same species.  And imagine, if you will, that there is room in the last lifeboat for either one adult, in this case yourself, or the two little kids.  Which would be rational: saving yourself, or saving the two children?

!2!  This would also explain why asking a question such as !"Can Human Irrationality Be Experimentally Demonstrated?"! (Cohen, 1981, !The Behavioral and Brain Sciences!) has resulted in controversy, in that it presupposes the possibility of an absolute answer to that which can only be addressed in relative terms.

!3!  My main influence in epistemology is A. R. White, !The Nature of Knowledge!, 1982, in which knowledge is viewed not as a matter of justified true belief, but merely as a matter of right representation.  However, I am also of the opinion that coherence plays a crucial role, representable in the model by assuming that cohering lines which correspond with a correctly completed diagram count for more than noncohering ones.

!4!  Actually, 'being good' is only half of the picture, in that a reduced self-value would also constitute an increase in valuative objectivity, and therefore in valuative rationality as I am construing it.  As such, the counterpart to 'being good' in the valuative rationality profile is 'being emotionally unstable'.

!5!  Although I find it hard to take seriously, this contention has actually been challenged, for example by Stich, !"Could Man Be An Irrational Animal?"!, !Synthese!, Vol. 64.  But most of these challenges are based on pointing to specific instances in which cognitive !in!competence is survivalistically advantageous (or the converse), and then extrapolating to a general conclusion about the entire affair.  Let me put it this way: faced with betting on a competition between two individuals with equal physical attributes, one who is smart and one who is stupid, who would you want to put your money on?

!6!  It's easy to get off the track on this by assuming that concern for others might maximize self-interest through cooperation, reciprocal altruism, etc.  However, cooperation merely involves a more elaborate strategy (the cognitive component), not a change in values.
This matter is just the kin selection issue, which has been exhaustively explored by others and which I have already addressed in !The Autonomy of Rationality!.

---------- Circulated References ----------

Donald Campbell, !On the Conflicts Between Biological and Social Evolution and Between Psychology and Moral Tradition!, !American Psychologist!, Dec. 1975.

Richard Dawkins, !The Selfish Gene!, 1976.

W. D. Hamilton, !The Genetical Evolution of Social Behaviour!, !Journal of Theoretical Biology!, 7, 1964.

J. R. Lucas, !Minds, Machines and Godel!, !Philosophy!, Vol. XXXVI (1961).  Reprinted in Anderson's !Minds and Machines!, and engagingly explored in Hofstadter's Pulitzer Prize winner, !Godel, Escher, Bach: An Eternal Golden Braid!.

Derek Parfit, !Reasons and Persons!, 1984.

Alvin Plantinga, !Warrant: The Current Debate!, 1993.

Peter Singer, !The Expanding Circle!, 1981.

---------- Uncirculated References ----------

Phil Roberts, Jr., !The Autonomy of Rationality!, unpublished manuscript submitted to the Society for Philosophy and Psychology (the SPP) for consideration for their 22nd annual meeting, 1996.

Phil Roberts, Jr., !Rational Negativism: A Divergent Theory of Emotional Disorder!, unpublished manuscript submitted to the Society for Philosophy and Psychology for consideration for their 7th and 8th annual meetings, 1981, 1982.

--
Phil Roberts, Jr.

Feelings of Worthlessness from the Perspective
of So-Called Cognitive Science
http://www.geocities.com/Athens/5476