The introduction to this post appeared on NSB yesterday under the title, “Scenes of Coming Attractions…”
I apologize for being tardy in posting comments over the past day or so. Other parts of life have intervened and made it such that my access to the internet is only intermittent, and will probably remain so through much of the weekend.
Thank you for your patience, as they say.
******************************
Chapter 2
Searching for Bedrock:
What Makes Something Good?
These days I enjoy the privilege of talking with a great number of people about the moral challenges we face. When I argue that knowing what’s right and wrong is a difficult challenge, I encounter two main groups who disagree with me, one on each side of the chasm of our current culture war. To one –we might call them the moral traditionalists– morality is easy because there is a clear-cut answer, one that has been handed down to us. To the other –the counterculture moral relativists– morality is easy because the answer is whatever you want it to be.
From one point of view, the two groups are diametrically opposite: one answer vs. many answers, an answer from external authority vs. an answer from internal preference. But from my point of view, the two groups have something quite fundamental in common. Neither group has successfully met the challenge of carrying their thinking down to what I would call “moral bedrock.”
Here’s what I mean by bedrock. If someone says that something is “morally good,” the question can always be asked, “What makes that good?” And then some answer is given, “It is good because…” And then the question can again be asked, “But what is good about that?” And again an answer, “It’s good because…” And so on. Where can that process of digging stop? The only way it can stop, it would seem, is for the answer to be “It just is.” The challenge is to find a place at which “It just is” is not arbitrary, to find something whose goodness can stand on its own without needing further explanation and justification. Without such a “bedrock,” it seems to me impossible to speak meaningfully about morality.
A first condition of a satisfactory moral philosophy, therefore, must be that it is not arbitrary, that it rests upon a bedrock of something whose goodness does not need to be explained. It just is.
Neither the traditionalists nor the relativists, in my experience, have thought down to bedrock.
God said it.
I do talk radio where I live, in the Shenandoah Valley of Virginia. For four hours each month for the past three years, I’ve discussed with a listening audience of several tens of thousands of people the moral and cultural issues that we face in our own lives and as a nation. This is a conservative area of a conservative state, with a large and vocal component of fundamentalist Christians. As you might imagine, the worldview of the fellow with the microphone, i.e. me –who used to be the young man who tuned in, turned on and dropped out in Berkeley twenty-some years ago– and that of the people calling in to talk with him on the air are not always identical. Nonetheless, we enjoy our conversations together.
My fundamentalist interlocutors would certainly contest my assertion that their moral position lacks bedrock. Bedrock, they think, is their specialty. If we are discussing homosexuality, for example, their position is clear and stated in a tone in which you can almost hear their argument go “clunk” as it strikes bedrock. God says homosexuality is an abomination, therefore –clunk!– it is wrong, and it is our moral responsibility to condemn it. (Some go so far as to endorse the penalty God is quoted as giving for sodomites– to stone them to death.) The clunk of supposed bedrock is captured well by the bumper sticker I’ve seen that reads, “God said it, I believe it, and that settles it.”
God said the husband is to be the lord of his wife, and clunk, that settles it. Why is that a good way for roles to be apportioned? Because the Almighty God, who created the whole set-up, has so decreed it. To many of my callers, if God’s word isn’t bedrock, if the commandment from the One in charge isn’t good enough for “It just is,” what possibly could be? That it settles the question seems to them self-evident.
We all had a wonderful conversation in the summer of 1994 about the question of how one can feel so sure just what God said –especially in view of the fact that the adherents of different traditions are equally sure about God’s having said different things– and about whether we should assume that what God said at one time was necessarily His last word on the subject. But for the present issue, let us set those concerns aside –however important they may be– and assume that we do know what God has commanded.
Would the fact that God said we should do something be sufficient to establish that it would be morally good for us to do it? When I asked that question in an episode of my program I called “Knowing What’s Right and Wrong is not a No-Brainer,” my fundamentalist callers seemed genuinely puzzled that I could even ask such a thing. “Well,” I’d venture further, “are you saying that if there were some All-Powerful Being that created the universe, no matter what He might be like, even one with what we would call an evil nature, whatever He might say was good would be good by definition?” It seemed hard for them to see that there was an issue: “Our God is a completely good one,” they would contend, “so if He tells us ‘This is good,’ we know it’s so.”
Wait a minute. There’s a problem here. This argument isn’t resting on bedrock. It’s floating in mid-air. For people who have spent their entire lives putting the idea –the absolutely unquestioned idea– of an all-good God at the center of their worldview, it might be difficult to imagine a God who was not good. But a God who is not good is not logically impossible. Indeed, the idea of an Evil Creator has been believed by some religious groups in history. What some people have deeply believed we should at least be able to imagine.
So, if we were to imagine the Cosmos as ruled by some Being with a character like Saddam Hussein, would we then say that obedience to such a God was morally good? I can imagine saying that it is prudent –just as I can imagine keeping my head down if I lived in Saddam’s Iraq– but if you asked me what is “good” about upholding a murderous regime, say, or ratting on my neighbors, or seizing what belongs to someone else, all because I am ordered by a tyrant to do so, I could not find any acceptable answer. No “clunk” of bedrock just because the Guy with the power commands it. To say otherwise would be to say that might makes right, which is to say that we are not talking about right at all.
This notion of God-as-Saddam is irrelevant, the fundamentalist will argue, because that’s not what we’ve got– we’ve got a good God. But this position has two problems. The less important is the empirical one; the more important the logical one.
The empirical problem is that the record of the Bible provides plenty of troubling evidence about God’s moral character. I am willing to grant that the great preponderance of the moral law as handed down through the Scriptures of the Judeo-Christian tradition –from the Ten Commandments on through the Golden Rule– has a quality that is largely consistent with my sense of what is morally right. Don’t murder, don’t steal, treat other people as you would like to be treated– I’ve tried to inculcate such moral precepts in my own three children. But if we are talking about moral bedrock, “most of it” isn’t good enough. If God commanded us to do what is morally wrong even once, “Because God said so” would be disqualified as a sufficient guide for doing what’s right.
It is not difficult to find places where God’s moral stance is in conflict with the moral sense that most of us apply in every other situation. To cite just one dramatic example, as the children of Israel are wandering in the wilderness after the Exodus from Egypt, God commands them to do what nowadays would be called genocide, prosecutable as crimes against humanity. [Quote from Exodus] We are compelled to conclude either 1) that genocide is good, or 2) that God commanded people to do what is morally wrong, or 3) that we cannot trust the record about what God commands. To me, the first notion is completely untenable; it would make a mockery of the idea of “goodness.” Between the other two possibilities, the third is eminently plausible and creates no difficulties of any kind for me. A man in West Virginia not long ago murdered his son, claiming (in all evident sincerity) that God told him to do it, and I don’t implicate the Almighty in that crime either.
But for some of my callers, the fact that it says in the Good Book that God said it means that he did. For them, either of the second two options –God commanding evil, or the Bible misreporting God’s Truth– fatally undermines their fundamentalist belief in their possession of moral bedrock, so they feel compelled to make the case for the first option. So callers have explained to me why it was right for the children of Israel to seize a city and to put to the sword every single man, woman and child. “They were idolaters, you see. And God wanted his Chosen People to be free of their pernicious influences.” If this is the way an all-good being would operate, I say, what’s so good about goodness?
But the empirical problem with God’s goodness goes considerably deeper. Jack Miles has written an extraordinary book, God: A Biography, which systematically explores the character of God as it unfolds through the Old Testament. He does it in a straightforward and low-key way, so undramatically that I’ve read favorable reviews of the book in conventionally religious publications. But if one reads the book with an open mind, unfettered by our preconceived notions of how wonderful and just and righteous we’ve always been told the Almighty is, the book has a startling and revolutionary impact. When we take an honest look at God’s portrait in the Bible, apparently, His character is surprisingly like that of an unjust, capricious and even terrifying tyrant. God-as-Saddam is not so off-the-wall for a clear-eyed Christian or Jew as our cultural indoctrination would lead us to believe.
But the empirical question about the moral nature of the particular God that people cite as moral bedrock is really secondary. Even if the Biblical portrayal of God were as morally untroubling as one could imagine, even if His own conduct were beyond reproach and His every commandment consistent with our own deepest moral sense, a logical problem would remain in the way of “God said it” being an adequate bedrock for a moral philosophy.
Even if we set the morally problematic character of the specific God-of-the-Bible aside, the logical problem with “God said it” that remains is this: to decide that this God, or any God, is good requires an independent basis for assessing goodness. Either God is good by definition –which requires us to say that even a monster like Saddam could define goodness if He created the universe, which is to define goodness away– or we require an independent definition of the Good. Thus the fundamentalist’s declaration that we should obey the commandments of an all-good God requires that he have some other criterion for defining goodness besides “God said it.” And so we are back to where we began, with the question: what is it that makes something morally good?
The same is inescapably true for recourse to any other external authority. To allow an external authority to define goodness for us is inevitably an evasion of the moral question. If we say, “To obey the law is good,” are we saying that the law defines the good, or are we declaring that there are no bad laws? But history is full of terrible laws by any humane standard of morality –such as laws requiring citizens to be complicitous in murder, laws allowing torture, laws calling for punishing people who express beliefs most of us now regard as good and true– and if we allow all law to define the good we are left with nothing worth calling a morality. No bedrock there.
Similarly, saying that “Adherence to traditional moral values is good” is either to say that there can be no such thing as a morally questionable tradition or to say that we have submitted to a moral test those particular traditions that we choose to follow, and that they have passed that test. The first alternative seems indefensible in the face of facts –such as traditions that require the genital mutilation of little girls, or the blood sacrifice of first-born children, or giving the lord of the manor first rights with the brides of his serfs– and thus becomes equivalent to abdicating the moral question. To adhere blindly to tradition is to take the position, “I’m not going to worry about what is morally good. I will simply obey authority, whether it’s good or bad.” To evaluate authority, however, requires that one employ independent moral criteria. And this independence means that one’s morality is no longer founded on authority.
But if no authority can be one’s moral bedrock –not God or law or tradition– then what can?
Pick your own.
In the history of civilization, authority has been a force of great power– sufficient power indeed that the capacity even to think of questioning the rightness of authority has emerged only in some times and places. Ours is certainly one of them. In Western civilization, in the wake of the Enlightenment, the right of the individual to employ his or her own reason to challenge the dictates of authority has been well established for some centuries now, even if there remains a powerful force of traditionalist backlash as epitomized by religious fundamentalism.
If “God said it” is the arbitrary pseudo-bedrock of the religious traditionalists, “You don’t have any right to lay your moral trips on me” is the arbitrary and self-contradictory abdication of the whole idea of moral bedrock of a large swath of the counter-culture.
At Prescott College in Arizona, in the mid-1970s, I encountered this kind of moral relativism among my students. This was an era of anti-authoritarian social rebellion, something with which I had considerable sympathy. But I was uncomfortable with the philosophical sleight of mind by which “Thou shalt not” had been transmuted into “Do your own thing.”
“Hey, that’s a value judgment,” a student would say if I were to articulate a moral critique of some aspect of our social or political scene. If I offered some judgment of some group’s life-choices, I might hear, “That’s not for you to decide for them. Everybody’s got to decide what’s right for them.” Let’s examine that proposition, I would suggest. And then we did. Morality, these students would explain, is a subjective matter. “It is not something out there. It’s just your feeling. So no one can judge anyone else’s morality. If it’s right for you, then it’s right.” I’d ask: “You mean that any moral position is just as valid as any other?” That’s right.
“Then how do you choose one moral position as right for you, and not any of a number of others?” I’d ask. “And if you really think that your moral choices are completely arbitrary, just what does it mean to say that you believe them?” Well, some beliefs just feel right, the students would say, so you just hold onto them. “For that matter,” I’d continue, sensing that while my questions posed challenges they could not meet, it was not clear they were greatly discomfited by that fact, “why choose to have any moral beliefs at all? If they are not in any way true, if there is nothing out there that obligates you to anything, and if, as is inherently the case, moral beliefs tend to get in the way of your just doing what you want for your own pleasure and advantage, why not just choose to have no morality at all?” If I ever got a satisfactory answer to such questions, I was unable to recognize it.
These were nice kids– not the kind of kids around whom you’d keep your hand on your wallet. At Prescott College, the faculty would go off on wilderness trips with the students. Asleep in the wilderness, under the stars far from any protective authorities, ensconced in a sleeping bag, in that vulnerable position that was Jack Nicholson’s undoing in Easy Rider, I never had occasion to worry that the students –who had argued in class that any person’s morality was just as valid as any other’s– would practice the moral anarchy their beliefs would support by pouncing on me or on anyone else to kill or rape or rob. But there seemed nothing in their moral philosophy to stop them.
Back in the classroom, the exploration would continue. “Is there nothing that you are prepared to say is morally wrong, even if the people who are doing it believe otherwise? What about the Nazis at Auschwitz, murdering hundreds of thousands of innocent human beings, gassing whole families who had done nothing against them, trying to extinguish a whole people? Was that right to do?” “If the Nazis thought it was right,” I recall a student saying one day, “it was right for them.”
In recent months, I have discovered that this attachment to morality as merely subjective and therefore arbitrary, and beyond the reach of meaningful interpersonal exploration, is still alive and widespread in the circles of those who have carried forward the counter-cultural thinking of that earlier era. I read the conversations of several forums on the Internet, where people discuss issues of life and philosophy and our humanity. Heaven help –or maybe, the Force help– whoever lays his moral “trip” on someone else, by making any kind of a judgment, by suggesting that some way of being or acting is somehow not as “good” as some other. Paradoxically, it is only around “not laying your trip” on another that moral wrath seems to be permissible.
My interaction with this group brought me recently to turn to the forum for counsel. I was going to be doing a nationally broadcast radio interview for a mostly conservative audience. My topic was going to be “The ‘Sixties’ Had a Moral Vision that is Still Worth Remembering.” I hoped to speak on that radio program about the vision of human possibilities that the counter-culture worked to develop– “Be all that you can be!” as the Army’s advertisement in the seventies expressed it, co-opting that rising cultural belief in the value of realizing one’s “human potential.” I wanted to convey to my conservative listeners that it is important for us as a culture to work to integrate those values of the human potential with the “iron virtues” –duty, responsibility, sacrifice, discipline– that have become ascendant in America along with the rising tide of conservative power and traditionalism. Now, I asked the people of this New Age, counter-cultural group: How might I convey the moral vision of the human potential so that I might best touch and move a reasonably open-minded conservative listener?
My question elicited expressions of fear and opposition toward the moralism of the traditionalists. “In the conservative model,” one person complained, “evil is real.” This belief allows them to feel they have the right to struggle against what some other people do with their lives. “It’s all about judgment,” was another complaint. “They think they are in a position to make judgments about whether other people are doing the right thing. How can anyone else know what’s right for someone else?” Any morality that represented itself as other than arbitrary was regarded by this group as a weapon that could only be used wrongly and unjustly against other people.
From reading various other things they had posted, I saw that these folks, like my college students two decades earlier, were apparently “good people” in the usual moral terms. Their concern for their fellow human beings runs deep. Many of them work in the world to nurture other people in ways that the followers of Christ would recognize as doing good works. But their philosophy provides them no bedrock on which to rest an argument about why one should choose good instead of evil. If the idea that “evil is real,” or that “good is real,” is taken as an illusion, on what basis does one feel entitled to oppose the killing fields of Cambodia or the torture chambers of regimes around the world? If “judgment” is itself never justified, are we not disabled from speaking effectively to a young man who goes around fathering children without ever concerning himself with the fates of the human beings he helps bring into the world, or of the young women whose lives he may be devastating?
This absence of bedrock can contribute to the sufferings of the world from both sides, from the direction of those who erect rigid structures upon arbitrary and therefore false moral certainties and from that of those who maintain an ungrounded fluidity in the moral realm. The theologian from the fundamentalist university –and a very sweet man he is– can proceed without a glimmer of doubt on the certainty that it is right to treat homosexuals as sinners. The New Age relativist can issue loving permission into the world in a form that can also encourage the spread of moral anarchy.
Of course, fundamentalism and moral relativism are not the only forms of moral thought we find in our culture today. There are other philosophic approaches to the search for moral bedrock, some of which will be considered in the following chapter. But these two serve well to frame the struggle for a valid and sufficient foundation for a viable moral vision. And they also act as signposts for a deep-seated difficulty embedded in the ways in which questions of value figure in how we understand our world. It is a difficulty Western civilization has been struggling with now for several centuries.
The chasm uncovered by Western reason.
The idea that authority cannot be questioned –even if intellectually indefensible– at least serves the purpose of eliminating all uncertainty and ambiguity. God said it. The king is sovereign. That’s the way it has always been done. No need to think for oneself.
If we overthrow authority, to where then can we turn for certain knowledge? In the West of the Enlightenment the place to turn has been to Reason. We are endowed by nature or by our Creator with a rational capacity, and from our reason we can generate reliable truth. Any discerning reader will surely know by now that in this faith in reason I, your (trusty?) guide, am a child of the Enlightenment.
But reason cannot create truth ex nihilo. Only if certain things are granted as true can reason build further truth. Even mathematics begins with axioms. (What seems self-evident in a Euclidean geometry can be rendered untrue in a geometry with other suppositions at its base, a geometry in which, say, parallel lines will meet.) Even scientific truth depends on logic, and depends also on the accuracy of observations. Without some premise, where is logic to begin? And if the premise is a given, then where is the bedrock from which we can derive certainty? Descartes’ famous “I think, therefore I am” expresses one effort to find some kind of unquestionable bedrock. It is not, therefore, only in the realm of moral judgments –what is good? what is the basis for morally right action?– that the problem of bedrock occurs.
But with the idea of moral knowledge in particular, the Western use of reason has seemed to discover an impenetrable obstacle to finding bedrock. With respect to the larger epistemological challenge –can we know with certainty that one plus one makes two, or that there is in fact a rock under my foot?– I have nothing to contribute. But with respect to the search for some kind of moral bedrock, I do believe that I have found a way through that ostensible obstacle.
First, let me try to describe that obstacle uncovered by the reason of the Enlightenment.
Let’s say that I declare, “It is not a good thing to murder someone.” You then ask, “Why not?” And then I answer, thinking of Clint Eastwood’s remark in Unforgiven, “It takes away from another human being everything he has and everything he would have had in the future.” You can then respond, “Yes, I know that’s true. But there is nothing in that to tell me why it’s not good to do that.” And I can go on and talk about all kinds of things that you will grant –that the person murdered found satisfaction in living, that his death causes grief to his friends and family– and you’ll be able to say, “Yes, but how does that lead to your proposition that it’s not good?” And ultimately –at least so it appears– I will have to concede that none of my statements about what is factually true allow me to prove anything to another person about what is morally good.
In the words of an Enlightenment philosopher of the eighteenth century, David Hume, it seems to be impossible to derive “ought” from “is.” Statements about what “is” are matters of objective reality, whereas moral statements appear to be merely subjective. If you don’t feel about the grief of the murdered man’s family the way I do, then my feeling that it is bad would seem to be just my own subjective reaction.
My efforts to persuade through reason are unavailing, according to this rational understanding, because I am attempting to put into the conclusion an element (moral truth) that is absent in the elements (statements of factual reality) from which I am trying to derive it. The Reason of the Enlightenment says that this cannot be done, that no amount of understanding of what “is” will allow me to say anything about what “ought” to be. Evaluative statements are, therefore, inescapably subjective: they express only the value that we place on what is true, but they are not true or false in themselves.
The perception of this seemingly unbridgeable chasm between objective truth and subjective evaluation led a major strand of modern philosophy –known as logical positivism– to brand as “meaningless” all moral statements. Whatever can be neither verified nor refuted empirically, the logical positivists asserted, has no real meaning. The assertion that “Murder is bad” tells you only about the speaker’s merely subjective attitude about such killing; as a statement about murder itself, apart from the subjectivity of the speaker, it tells you nothing whatever. The value a person places on things is only subjective, and has nothing to do with reality.
One can readily see how the moral relativism of my college students and my counter-culture Internet interlocutors fits together with such philosophical reasoning. If everything a person thinks about what is good and right is “merely subjective” in the way the positivists deduced, then there is no bedrock, no basis for asserting that any one moral judgment is better or truer than another, that any of my moral beliefs can have any standing stronger than “true for me.”
That’s the challenge in the search for bedrock. How do we get past the is-ought barrier? How can our conclusions contain a moral dimension that is missing in our premises? How can the evaluative dimension –having to do inescapably with how we feel about things– be anything other than “merely subjective”? How can a statement about what is good be true, resting on real bedrock, and not simply a private feeling masquerading as a general proposition?
The young man we left in his Berkeley apartment, his moral foundations shaken, wrestled with these questions. And I think he found an answer. It’s still “true” for me, but by true I mean a lot more than true just for me.
The emergence of value.
I didn’t fight in Vietnam, but during those years I was involved in a war nonetheless. What that war produced was a book, eventually published as The Parable of the Tribes: The Problem of Power in Social Evolution. That work represents the fruit of the profound alienation I felt from civilization at that era of my life. (Like most wars, it was a war for a young man to fight. When I think now of all that I took upon myself to think through and work out to write The Parable of the Tribes, it feels as over-ambitious to my fifty-year-old body as the way, when I was in my teens, I used to jump over picnic tables and fences just because my legs felt the impulse.) And one of the key pieces I felt called to articulate was a fundamental flaw I discerned in the positivist rejection of “values” as meaningless and mere subjective idiosyncrasy.
At the time, my thinking was deeply entwined in an evolutionary perspective. How things develop seemed to offer the key to understanding. I was aware of myself, and of my fellow human beings, as being borne along by two evolutionary currents operating simultaneously, but not harmoniously. One was the current of biological evolution, which had created us; the other the current of the evolution of civilization, which as I then saw it, was spinning destructively out of control. My allegiance was clearly with the biologically evolved systems that created us, that designed our inherent needs and nature.
The specific task that impelled me to break through the chasm of is-vs.-ought, fact-vs.-value, had to do with buttressing the standing of the theory of the evolution of civilization I was then developing. On the one hand, I was offering an explanation of what had happened to our species. On the other hand, I was also determined to subject the social evolutionary dynamic I was describing to a powerful moral critique. The validity of that moral critique was impressed as deeply on my soul as the truth of the more “objective” explanation. But for that moral critique to be valid in the way I thought it was, there had to be some bedrock in which the difference between the good and the not-good is real.
It was in contemplating the unfolding of cosmic evolution that I finally saw it. In the scientific account, in the beginning there was a lifeless universe. This is not, of course, the only way of looking at cosmic evolution. In addition to the view in which there is a Creator behind the whole process, and whose presence makes the universe not lifeless, there are those who believe that everything that exists has a degree of life and consciousness of some sort. But the scientific perspective is useful here, as its very coldness helps to illuminate how “ought” emerges from “is.”
In this presumed lifeless universe, over billions of years, something new emerges in the cosmos: life, and not just life but eventually creatures that experience their existence with feelings, with joys and sufferings and needs and terrors. Of especial interest to me in the context of the argument in The Parable of the Tribes was the way that the evolution of life had structured those feelings and motives in the service of life’s perpetuation, that the most fundamental choice designed into biological systems and creatures was that of life over death. Each creature is built to seek the perpetuation of its life, or more comprehensively, the life of its kind, over death.
Even in a purely mechanical Darwinian view of life’s development, “therefore choose life” (Deuteronomy 30:19) is the inherent bias of the process. What survives is chosen by evolution to contribute to the design of the future. If the inherent bias of the evolutionary system is life, so also are the creatures designed to make the choices that are conducive to survival. “Choices” may begin as purely mechanical reactions, virtually at the level of chemistry. But soon, an experiential dimension enters into the choosing process, and therefore into the equipment for survival.
As sentience emerges in the evolutionary process, creatures are designed to place value of a positive or negative valence on various experiences according to whether such experiences have evolutionarily been helpful or hurtful to the chances for survival. Eating nourishing food is satisfying because this has served life. Sex gives pleasure because it has perpetuated life. Fear feels distressing, and it serves life by driving us to protect ourselves, either by fleeing or by disabling what might harm us, thus eliminating the unpleasant feeling. Cutting our flesh causes pain, which we’d rather avoid. And so on.
In a lifeless universe, in which there is nothing to experience one thing as being in any way preferable to any other, there is no question of value. No outcome is “better” than another. Whether a star explodes or not, whether a comet crashes into a lifeless planet or not, makes no difference. To whom would it make a difference?
Life changes that. If all living creatures were without experience, without feeling, indifferent to one eventuality as opposed to any other, then the universe-with-life would be as indifferent as was the lifeless universe. But that is not the way life has developed. Value has been designed into sentient creatures –such as human beings, but obviously not only us. Indifference is gone. From the point of view of creatures that experience, some things –experiences, outcomes, courses of events– are good and some are bad. Value is necessary for creatures that behave, for value is the basis of choice, and a creature that does not experience value will not know how to choose: do I eat the fruit or the rock? am I attracted to the female of my species or to a tree trunk? would I rather be stroked or bashed on the head? And if a creature does not know how to choose well, its design will not survive to help construct life’s future.
Into the indifferent universe that science describes in the early stages of cosmic evolution have come creatures who, by their very nature, are not indifferent. They –we– are built to experience things in terms of positive or negative value. “This is good, this is not good” are vital qualities of felt experience. Those qualities are as real as life; their reality, indeed, is essential to the life of sentient creatures, to the meeting of their needs.
This kind of valuing –at the level of preferring to eat fruit and not rocks, and choosing to be stroked rather than bashed– is a couple of steps from yielding moral judgments (those steps will begin the next chapter). But we are not looking for morality yet. We’ve been seeking to know what is demonstrably good in itself, that on which judgments of goodness can ultimately find a resting place. Our search has been for that bedrock, that foundation without which any morality will seem ultimately arbitrary.
The question was: what can we say is good without having to go further to answer, “What’s good about it?” And the answer, I propose, is now at hand. Goodness (along with its opposite) is an inherent quality of the experience of sentient creatures. The positive or negative quality of their experience matters in the only way anything can matter. Whether a dog is frolicking with delight in a mountain stream or whimpering with a mangled paw is a matter of better or worse because there is a creature who experiences them as better or worse. What is good? Experience that feels positive to a sentient creature (which tends to be experience that the design of that creature leads it to want, which tends to be experience that meets the needs of that living being). What is good about that feeling-good experience? It just is!
It just is. This is the one place, as I see it, that this answer is not arbitrary. It is in the realm of the quality of felt experience, I am proposing, that all meaningful statements about value must find their bedrock. In saying this, I am making two claims– both arms of the “if and only if” formulation. First, that the fact of felt experience having a quality of welcomeness or unwelcomeness to the creature experiencing it does confer a bedrock goodness or not-goodness to that experience. And second, that no declaration that “this is good” that does not have ramifications ultimately in the realm of the experience of sentient creatures can be meaningfully valid. Both these points will be revisited.
Locating the bedrock of goodness in the quality of felt experience is not necessarily to adopt a simple hedonistic measure of goodness, where good is equated with pleasure and evil with pain. That may or may not be adequate to describe the way value is experienced by mice (and those concepts may or may not apply meaningfully to protozoa). But it seems to me that the complexities of human nature are such that to speak of the quality of felt experience requires one to go far beyond the simple concepts of pleasure and pain. So central in the nature of human life are such matters as the quality of our relationships with one another and of our relationship with ourselves that it would seem that our felt experience as human beings is governed less by such “pleasures” as gourmet meals and soft sheets than by such components of the satisfying life as being loved and feeling worthy and finding meaning.
If the experience of goodness has been elaborated and deepened in this way in our species, it is not just happenstance. Rather, like the earlier development of the animal capacity for physical pleasure and pain, the crucial human components of the quality of felt experience have grown out of the evolved deepening of animal life into human culture, with its more complex ways of regulating behavior in the service of the perpetuation of life. Even the transcending of hedonism’s simple calculus in the human experience of value, therefore, has been crafted into our very nature.
With the unfolding of sentient life, then, something new –value– has come into existence. The indifference of the lifeless and value-free universe postulated by rationalist science has been overcome by the emergence of creatures who are not indifferent. The dimension of “goodness” has emerged as part of the structure of a kind of entity that had not existed before.
And it is here –the place where we can say “It just is”– that we begin to discover the illusion that made it seem that there is a chasm between “is” and “ought,” between reality and value. Premises without value were thought incapable of yielding conclusions containing evaluation. By a kindred logic, one might argue that a lifeless universe could not develop into one that is alive, a universe in which all is indifferent could not unfold into one where things matter. But that, of course, is precisely what positivist science says has happened. One speaks of the evolutionary concept of emergence, in which some new category of reality that was missing in an earlier stage has appeared in a later stage. The pertinent new category here is consciousness, or more particularly, experience with a felt evaluative dimension.
The evaluative dimension disappeared from our rational accounts of reality, and thus became regarded as not really real, because of the particular understanding of reason that modern Western civilization developed. Modern rationality is defined in terms of objectivity, and objectivity looks at things from a vantage point “out there.” But what emerged in the evolution of life is an experiential space “in here,” and it is from the realm of internal experience that evaluation inherently and inevitably derives.
No wonder our reason concluded that evaluative statements can have no basis in reality! If the quality of experience from the inside is defined apart from what really “is,” then it would follow that “is” can tell us nothing about “ought” or anything else evaluative. Should I hit my thumb with a hammer? Objective rationality can tell me what will happen to bone and blood vessels and thumbnails, but it cannot tell me whether it’s a good idea. Ultimately, it is the quality of my experience that tells me that it’s not. I would argue that this quality of experience, the nature of our feelings, is an important and natural part of the “is” that a rational Reason would regard as real and take into account in its assessment of validity.
But instead, in our modern rational worldview, our feelings are regarded as “merely subjective,” meaning ours alone, idiosyncratic, incapable of weighing persuasively in any interpersonal discussion of what is true. But to regard our subjective realm as merely idiosyncratic is to misunderstand the crafting of our natures. I have hit my thumb with a hammer before, and the experience has taught me that it’s something I ought to avoid doing. But there’s nothing idiosyncratic about the quality of my experience. It’s pretty much the same for anyone with a human thumb. At its root, the evaluative structure of our feelings is as much a property that we share with one another as our anatomy, and for the same reason: the evolution that crafted us to have hearts and livers –and thumbs that hammers can smash– crafted us also to have the same evaluative tendencies. But Western rationality has treated anatomy (something dissected from the outside) differently from feelings (something experienced from the inside).
Of course, it is not only because our feelings are experienced from the inside that we see them as more idiosyncratic than our anatomy. Our feelings are significantly molded by our experience, and experience varies from life to life and from culture to culture. There is more idiosyncrasy in the domain of the evaluations we place on things than in the anatomical structures of which our bodies are built. But the role of learning in shaping the structure of our evaluative feelings does not eliminate the reality of a significant inborn component to that structure, a meaningful component that we have in common.
The importance of learning does not eliminate the reality of shared human nature in other areas. Consider the way we move along the ground. We walk. Of course, one person’s walk is not the same as another’s. You can often see a little boy unconsciously learning to walk like his father. And I’ve observed that one can distinguish the walk of Americans from that of Europeans. But none of these idiosyncrasies –from individual to individual or from culture to culture– alter the basic underlying reality that we as a species are by nature bi-pedal walkers. So also our sense of what is good, although it can be greatly molded by individual and cultural learning, is not completely idiosyncratic.
It is widely known that “Beauty is in the eye of the beholder,” by which it is generally implied that it is “only” in the eye of the beholder. What is not so well recognized is that each beholder’s eye has been crafted by an evolutionary process with an inherent allegiance to life-serving values, and that the experience of beauty is part of the shared truth of having a human eye, just like the retina and the cornea. In recent years, a variety of cross-cultural studies have disclosed that all across the planet, when members of widely different cultures view pictures of faces of people of very different racial types, there is substantial agreement about which faces, within each of the racial groups, are the more beautiful.
As with what is beautiful, so also with what is good. “Goodness,” we might say, “is in the heart of the beholder.” But it is not “only” there. We know in our hearts that the good-feeling of sentient creatures, and thus also the meeting of their needs, which engenders those feelings, is good. Within that wide boundary, there is plenty of room for us to disagree and dispute, based on our learned ways of thinking and feeling. Just which of our needs as human beings are more fundamental? Which good feelings should be given greater weight? How should the needs of different people, when they come into conflict, be adjudicated? How important is the experience of non-human living creatures in relation to that of our own kind? All these are questions of great importance. But for the present discussion, those questions are set aside to underscore the breakthrough to bedrock: that we do know something about what is good, and that it is part of the “is-ness” of being a human being to know it.
As with the pictures of the faces, let us do a thought experiment with mental pictures. Picture first a world in which families are loving, people deal with one another with compassion and honesty, they are healthy and well-fed, they richly develop their full human capacities to think and feel and create, and they lovingly tend the rest of the living system. Next imagine a world filled with violence and misery, in which people are tortured, they lie and steal from each other, children are beaten and neglected, pestilence and famine are ubiquitous, and the biosphere is gasping from the careless exploitation of natural systems. I would wager that all kinds of people –Biblical fundamentalist, New Age spiritualist, secular humanist, corporate mogul and street sweeper, Muslim and animist, Taoist and Hindu– all of them would agree that the first of those pictures is a better state of affairs than the second. And I would say that there is nothing “arbitrary” about that judgment. There is real goodness represented in the first picture that is missing in the second.
What if one person, or even a whole group of people, disagree? Does that lack of unanimity about what constitutes goodness send us back to square one? I’ve said that the good-feeling of a sentient creature is good. Why? Because “It just is!” Can the existence of the unpersuaded prove that my bedrock is only arbitrary after all?
I don’t think so. Work I’ve done in the mental health field certainly made me aware that people’s relation to what I have called our “natural system of values” is not necessarily intact. Most people would agree that it’s not a good idea to smash one’s thumb with a hammer, but sometimes people will get into a frame of mind in which they will perform self-destructive acts on their own bodies and inflict real pain. But that does not disprove the existence of a natural value system in the organism that leads in the other direction.
What it proves is that experience can twist us, turn us against ourselves, drive us to seek not what is good but what is evil. People can worship Satan, can derive pleasure from pain, can find the picture of hell-on-earth “better” than that of heaven-on-earth. People can also bind a young girl’s feet so tightly for so many years that when she is a woman she cannot be the bi-pedal walker she was designed by nature to be, and she will suffer great agony from the thwarted and distorted growth of her feet. That does not change the reality of the design, and the further reality that it required something not-good to happen to the growing organism to create the distortion. If we looked deeply into the development of anyone who would choose that picture of hell-on-earth, I would propose, we would find some powerful forces that had bound and twisted the human spirit into a distortion of its natural self.
Perhaps all of us have had our spiritual “feet” bound to at least some extent, and thus we may all have some ways in which we have learned to turn away from what is good. But just as most of us are healthy enough to walk, so are we healthy enough to be able to discern good from not-good if the contrast is dramatic enough, as in the pictures I presented. We recognize the goodness of having sentient creatures happy and thriving, even if we may have been disabled from appreciating some particular aspects of the flowering of a human being whose needs are deeply met.
That our knowledge of goodness is incomplete, that we may not agree on the details of what matters most — these are less important than that single bedrock breakthrough: we know something about goodness that is true.
With that knowledge, we are prepared now to turn to the question of moral conduct.
This article is one of the most astounding I have read about self and the struggle to maintain–Good or Evil–how is one to know at this stage? You explained it very well. It seemed to fit the ongoing struggle I am facing, and it is one of the most poignant I have read.
Patricia Baldwin
Andy, I suggest that you check into the website http://richarddawkins.net/ for the Darwinian atheist view of life. Science holds the answers, but for those who are still hung up on the “God Delusion,” read Dawkins’s book of that title.
I spent many months in my younger days with the fundamentalists of Kentucky, snake handlers and strychnine ingesters, truly folks with deep fundamental beliefs but with little education to see the other alternatives that are abroad in the mental life of man. I felt great empathy with those folks, but there was no way to encroach on their beliefs or to change their view of reality. They of course felt that their task was to change my view of reality. I was open, but not available to that level of world view.
I personally believe that Education is the only salvation that is available for the world we inhabit. From the suicide bombers, with their distorted perception of religion leading to their religious suicidal fanaticism, to the Appalachian fundamentalist view of the world, with its unenlightened, uneducated, tragic perception of spiritual reality, the world as we currently inhabit it is in some fairly deep trouble.
Fear of death seems to be the basic foundation of most religious institutions, and all attempt to provide answers for that most human quandary. Personally, the answer for me came when I attempted to answer the question: What do I personally recall or remember from the time before I was born? Well, in my case I would have to admit I remember nothing. That is what it will be like after my life ends. There will be nothing. Is that a problem? Only if it was before I was born! That for me is a satisfactory answer. If it were only that simple for others, the world would indeed be a much more satisfactory place for our genes to move forward into the future.
Dear Andy, I spent the night reading your posted first chapter, and I am too tired to comment further at this time. (I am a very slow reader, and I need to process what I have learned as well, and file ‘different stuff’ both for myself, and for later commenting.)
But this is what I want to say now.
What you have written is so very great. Is this judgment based on my subjective experience? No, it’s not the experience of pleasure in reading what you write, but I know from experience, and based on experience, that what you wrote is good/great, and the truth, as I can only fully learn what I already know from experience, or can otherwise build on based on prior knowledge and experience.
I can learn from anything and anyone, and from good and from bad, but what you wrote I cannot learn from just anyone, and it is greater than what I knew or had words for; it is organized in a way I cannot do, and communicated in a manner which is a gift to the world.
Anyone can teach but that does not mean the student will learn.
Very grateful to you; until later. Katrin
Bravo, Andy, for an incredible piece! I look forward to the next instalments.
I happen now to be reading your “Out Of Weakness: Healing the Wounds that Drive us to War.” Finally! And what a masterpiece it is. It is also helping me to understand better the above essay as well as many of the implications of “The Parable of the Tribes.” There is so much integrative thinking in your writings.
I could offer profound counterpoints to such as this, or I could question the competence of the psychiatrist treating the patient, but I will offer only the typical ‘conservative’ reaction.
We used to play a game years ago called Yahtzee: putting dice in a cup, shaking it, and playing what comes out.
What if we could cube up the human brain, put the cubes in whatever, shake it up, and then ‘play’ what comes out?!
Andrew,
First…thank you for your generosity…previous blogs and all…wow! The definition of right and wrong aside, to get humanity to a place where we do right by each other and have goodwill toward all, because it is what is right, and not because of the threat of Heaven or Hell…well, as a great philosopher once said…imagine!
Second…I have been thinking similar thoughts…that humanity has reached a point in our evolution where we need to have this very discussion on a global scale. The same old paradigms are no longer valid.