From: Owleye

"Phil Roberts, Jr." wrote:

> A Sketch of a Divergent Theory of Emotional Instability
>
> Objective: To account for self-worth related emotion (i.e., needs for
> love, acceptance, moral integrity, recognition, achievement, purpose,
> meaning, etc.) and emotional disorder (e.g., depression, suicide,
> etc.) within the context of an evolutionary scenario; i.e., to
> synthesize natural science and the humanities; i.e., to answer the
> question: 'Why is there a species of naturally selected organism
> expending huge quantities of effort and energy on the survivalistically
> bizarre non-physical objective of maximizing self-worth?'

This is all very presumptuous. I would hope that the above
characterizations, particularly the part hidden in the "etc.", are
sufficiently understood to be able to approach the problem of synthesis
that you take as your challenge.

> Observation: The species in which rationality is most developed is
> also the one in which individuals have the greatest difficulty in
> maintaining an adequate sense of self-worth, often going to
> extraordinary lengths in doing so (e.g., Evel Knievel, celibate monks,
> self-endangering Greenpeacers, etc.).

It is also the species in which language, creativity, technology,
spirituality, and a great many other things are most developed. This
business of making use of "a sense of self-worth" is entirely too
psychological for my purposes, but I'll try to stay awake. Perhaps some
day you will define for me what this means. If "sense" means something
like a feeling, I might imagine a pharmacological solution.

> Hypothesis: Rationality is antagonistic to psychocentric stability
> (i.e., maintaining an adequate sense of self-worth).

This seems counterintuitive, but again I'll try to stay with you.

> Synopsis: In much the manner reasoning allows for the subordination
> of lower emotional concerns and values (pain, fear, anger, sex, etc.)
> to more global concerns (concern for the self as a whole), so too,
> these more global concerns and values can themselves become
> reevaluated and subordinated to other more global, more objective
> considerations. And if this is so, and assuming that emotional
> disorder emanates from a deficiency in self-worth resulting from
> precisely this sort of experientially based reevaluation, then it can
> reasonably be construed as a natural malfunction resulting from
> one's rational faculties functioning a tad too well.

Well, as Sartre says, "We are condemned to be free!" I guess you have a
point. However, I have the feeling that it depends on whether the
rationality is being used virtuously or viciously. When we have done our
duty we presumably preserve our dignity in doing so, thereby making
virtue its own reward. But a medical doctor can, for example, use her
skill rationally to preserve life or to do harm to it.

> Normalcy and Disorder: Assuming this is correct, then some
> explanation for the relative "normalcy" of most individuals would
> seem necessary. This is accomplished simply by postulating
> different levels or degrees of consciousness. From this perspective,
> emotional disorder would then be construed as a valuative affliction
> resulting from an increase in semantic content in the engram indexed
> by the linguistic expression, "I am insignificant", which all persons of
> common sense "know" to be true, but which the "emotionally
> disturbed" have come to "realize", through abstract thought,
> devaluing experience, etc.

Whoa! You are going way too fast here. There is way too much unconnected
speculation, all of which could be challenged. I realize that space
considerations prevent the kind of argument you would need here, but
perhaps we can take it one step at a time. What is normal? What is
disorder?
> Implications: So-called "free will" and the incessant activity presumed
> to emanate from it is simply the insatiable appetite we all have for
> self-significating experience which, in turn, is simply nature's way of
> attempting to counter the objectifying influences of our rational
> faculties. This also implies that the engine in the first "free-thinking"
> artifact is probably going to be a diesel.

Now you are getting absurd. It makes no sense whatsoever. If I say I am
acting on the basis of my own free will (and I surely do say this), I
mean by it that I am not being compelled to act. It is the psychological
condition of being compelled or driven to act that tells me I'm not
acting freely. How does what you say relate to this?

> "Another simile would be an atomic pile of less than critical size: an
> injected idea is to correspond to a neutron entering the pile from
> without. Each such neutron will cause a certain disturbance which
> eventually dies away. If, however, the size of the pile is sufficiently
> increased, the disturbance caused by such an incoming neutron will
> very likely go on and on increasing until the whole pile is destroyed.
> Is there a corresponding phenomenon for minds?" (A. M. Turing).

It sounds as if Turing is presenting a case for the emergence of some
phenomena from the structurally based interactions of smaller
constituents. I don't have a problem with emergence.

> Additional Implications: Since the explanation I have proposed
> amounts to the contention that the most rational species
> (presumably) is beginning to exhibit signs of transcending the
> formalism of nature's fixed objective (accomplished in man via
> intentional self-concern, i.e., the prudence program) it can reasonably
> be construed as providing evidence and argumentation in support of
> Lucas (1961) and Penrose (1989, 1994).
> Not only does this imply that the aforementioned artifact probably
> won't be a computer, but it would also explain why a question such
> as "Can Human Irrationality Be Experimentally Demonstrated?"
> (Cohen, 1981) has led to controversy, in that it presupposes the
> possibility of a discrete (formalizable) answer to a question which
> can only be addressed in comparative (non-formalizable) terms (e.g.,
> X is more rational than Y, the norm, etc.). Along these same lines,
> the theory can also be construed as an endorsement or
> metajustification for comparative approaches in epistemology
> (explanationism, plausibilism, etc.).

Well, I don't see how this follows from your theses.

> "The short answer [to Lucas/Godel and more recently, Penrose]
> is that, although it is established that there are limitations to the
> powers of any particular machine, it has only been stated, without
> any sort of proof, that no such limitations apply to human intellect"
> (A. M. Turing).

This is not particularly saying anything. Indeed, Goedel's proof of the
incompleteness of arithmetic seems ample evidence that humans are not
limited by the same things that computers are. Notwithstanding this,
consciousness is far from being explained, and I think we would need to
accomplish this before we can approach the problem.

> "So even if mathematicians are superb cognizers of mathematical
> truth, and even if there is no algorithm, practical or otherwise,
> for cognizing mathematical truth, it does not follow that the power
> of mathematicians to cognize mathematical truth is not entirely
> explicable in terms of their brain's executing an algorithm. Not
> an algorithm for intuiting mathematical truth -- we can suppose that
> Penrose [via Godel] has proved that there could be no such thing.
> What would the algorithm be for, then? Most plausibly it would be an
> algorithm -- one of very many -- for trying to stay alive ..." (D. C.
> Dennett).

Well, that's Dennett for you.
I think he's wrong.

owleye