Aryeh: David, old friend. It has been too long since we've met. I
really miss our discussions, and we should try harder to find time for them.
D. I miss them too, but I don't think we can try any harder. Our
effort, like everything else about us, is completely predetermined.
A. Once upon a time, I'd have taken your bait - you know how
strongly I believe in free will - but not since our last discussion. Now I
know that you don't believe in determinism any more than I do.
D. What do you mean? Of course I believe in it.
A. Maybe we're miscommunicating here. When I say you don't
believe in it, what I mean is that you don't believe it's true.
D. Of course I believe it's true.
A. Then you've either changed your mind or else you're
contradicting yourself.
D. Or else you didn't understand my point last time.
A. I suppose that's also a possibility, but I don't really believe
it. I thought you were being perfectly clear.
D. Well, humor me for a minute so I can satisfy myself, okay?
A. Sure.
D. When you said that you didn't believe you had misunderstood me,
did that also mean that you believed you had understood me?
A. It's not a necessary implication, but in this case it is a true
one.
D. So for our purposes you believe you understood my point last
time?
A. Yes.
D. And when you say that, you mean that you believe the statement
"I understood your point last time" to be true?
A. Yes - but I see where you're going, and I really think this is
just going to confuse the issue.
D. Maybe, but let me finish before we deal with your objections.
Now we've agreed that you believe the statement "I understood your point last
time" to be true. But you also said that it's possible that you misunderstood
me, right?
A. Yes.
D. And when you said that, you meant that you considered it
possible that the statement "I understood your point last time" was false,
didn't you?
A. Yes, but you're conflating psychological ambivalence with
logical contradiction. But never mind - let's assume you've analyzed my
statements correctly. I assume you'll agree that the contradiction as we've
defined the terms is absolute, and that the only real solution to my
self-contradiction is to change our definition of belief.
D. Why?
A. Well, you just pointed out that I said the same statement was
both true and maybe not true.
D. Yes, but the maybe not true is on a different level. If you
didn't get that, then I believe you really didn't understand me last time.
A. Maybe, but let's leave that alone - I don't think you
understood me this time, by the way, but let's concentrate on understanding you
this time. What do you mean by "a different level"?
D. All the evidence led you to conclude (incorrectly) that you
understood me. The only thing was, as a reasonable human being you recognized
that your interpretation of the evidence might be wrong, so you acknowledged
the possibility that your conclusion was wrong.
A. If I thought my analysis might be wrong, then I didn't really
believe I understood you - I just thought it probable.
D. Not at all - your belief in your conclusion was absolute. It's
just that your belief in your belief isn't absolute, as is only reasonable -
you must know that some of your past beliefs have been erroneous, and that
other people have equally reasonable beliefs that contradict yours.
A. You're making two different points and they're problematic for
different reasons. The first point seems to undercut any possibility of real
conviction, since every belief is subject to metadoubt. I don't understand how
separating levels solves this problem - in the end, you could only respond to
the true-false question with a probability qualification.
The second point you make is even more disturbing, since it
implies that there's no basis ever for principled disagreement. Anytime
someone holds an opposing belief as strongly as I hold mine, metadoubt kicks in
and tells us both that we have no epistemological basis for choosing our own
opinion. Clearly you can't allow epistemology a place in the discussion.
D. That's exactly my point. You can't allow epistemology a place
at the same table, but it would be dishonest to banish it completely.
A. Let's accept that for the moment, and see what its implications
are. Doesn't your argument really apply to all decisions? After all,
universally held beliefs have also been demonstrated incorrect over time.
D. True.
A. So then how do you make any decisions?
D. By ignoring epistemology, of course.
A. So then you behave as if you were completely convinced that
your beliefs are true, right? So all this fancy philosophic discussion is
completely irrelevant.
D. Not exactly. I behave as if my beliefs are true except when
they conflict with other people's right to behave as if their beliefs were
true.
A. What's the basis for your distinction?
D. Obviously, the same point I made earlier. I believe that my
moral system is correct, so I behave in accordance with it. But when my moral
system is confronted with another, when my choice will prevent someone else's
choice, I find I don't believe my system more correct than his or hers.
A. I don't understand. If you don't believe that your system is
more correct than, say, Pol Pot's, on what basis do you ever act in accordance
with it?
D. My turn to put in a caveat - you're obviously trying to lead me
into the "What would you have told a German soldier in WW II" question, and I
don't think it's fair to slip it in tangentially. We'll deal with it in due
course, but for now let's discuss what happens if my ethics clash with Albert
Schweitzer's.
A. Okay. Discuss away.
D. Let's grant that there's no epistemological basis for
preferring one set of ethics subjectively arrived at over another.
A. Granted.
D. Let's further grant that all human beings pursue what they
perceive as the greatest good.
A. That one's not as simple. You're assuming that there's a
unitary self which makes the decisions relative to pursuing, and that everyone
recognizes that the good is worth pursuing. In other words, you're assuming
that people who pursue personal pleasure rather than doing what you perceive as
good disagree with you about the definition of good rather than about whether
the good is desirable. I'm not sure the distinction is purely semantic.
D. I'm not sure either, but let's work with the assumption that it
is.
A. Okay.
D. Let's further grant that it seems inherent in the human
condition that different people will have differing conceptions of the greatest
good.
A. Granted.
D. The question then arises, how can one best reconcile these
differing conceptions?
A. I'm not at all clear why any question of reconciliation
arises. It seems to me that the question which arises is "how am I to relate
to these people who don't share my moral system?".
D. But then you're likely going to kill them as obstacles to the
triumph of right!
A. Not necessarily. My moral system may place a very high value
on individual human lives.
D. Yes, but what will you tell someone whose value system doesn't?
A. I thought we agreed to postpone that discussion.
D. You're right, we did. Let me try to make my argument
differently. What I want is a system that will allow everyone the maximal
opportunity to pursue their conception of ultimate good.
A. Why not a system that will allow them maximal opportunity to
pursue your conception of ultimate good?
D. Because I have no right to impose my conception of right on
them.
A. Why not?
D. Because they don't see it as true, and I don't believe that my
conception is more likely right than theirs.
A. Then why do you follow your conception when it doesn't conflict
with others' ability to pursue theirs?
D. Because I believe it's true. I know, you don't understand how
I can say that. Let me try two new ways of presenting the argument. I'm not
sure I agree with them fully or accept all their implications, but I think
they'll advance the discussion.
I think one possible way is to start with the question of why I
think it's worthwhile to act in accordance with my conception of the good, why
I ever thought it was useful to investigate the notion of the good. It seems
likely that I have a metavalue that pursuing the good is worthwhile, and it
seems furthermore that this metavalue is universal. Accordingly, it provides
the basis for an objective ethic.
A second possibility is to say that even if my opinion of the
right is correct, it's correct only because I got lucky - there's nothing
inherent in me which enabled me to reach this conclusion while so many people
got it wrong. It wouldn't be fair if I could compel other people simply
because they were unlucky. Any ethic, then, must be based on what I would be
willing to live with even if I weren't sure I was lucky (although it happens
that I am).
A. Both those arguments are interesting, although I share your
suspicion that you won't like some of their implications. Let's take them one
at a time, though, because they deserve independent treatment.
D. Okay.
A. Your first argument, as I understand it, assumes that we will
regard any universally shared ethic as sound. I'm willing to grant that,
although of course universality is likely to ignore a few exceptions, and such
an argument is of course definitionally useless in convincing somebody who
doesn't already agree with you. The next step is to argue that the value of
pursuing the good is universally shared, i.e., no one ever acknowledges that
pursuing evil is better than pursuing good. This is a bit trickier - as you
know, Socrates devoted much of his philosophic effort to demonstrating it, but
never succeeded, and modern Satanists seem to have some objections of their
own. I think the only way to really defend this assumption is by making it
tautological, by claiming that the subjective good is that which we think we ought to
pursue. But even this is questionable, and furthermore has very dangerous
ramifications.
D. Let's discuss the questions first, then the ramifications.
A. Fine. What do you do with someone who denies the whole concept
of ought?
D. That is a problem. I suppose I'd have to argue that no such
people really exist no matter how much they protest, but that's really your
type of argument - I don't like it at all. Also, I'm not really sure that a
value is universal if people even believe they don't believe it. But let's go
on anyway and do the argument comprehensively - maybe this problem will be
worth tolerating in a broader perspective. Or maybe not - you were saying
about ramifications?
A. Well, if you define the good as whatever people think they
should do, you're really being a total relativist.
D. Not at all. I'm just saying that everybody agrees that they
should pursue the good, not that their concept of the good has any validity.
A. Let's leave that for now and go back to assumptions. The third
assumption your argument makes is that since we all agree on this metavalue, it
takes precedence over all the values we don't agree on. I'm not convinced -
maybe the agreed upon value is the lowest common denominator, and really
insignificant.
D. You're right - this argument doesn't work on its own. Let's go
on to my other suggestion.
A. This one is really tricky. What you're arguing is that there's
a metavalue called fairness which prevents me from imposing my other values on
anyone else, i.e., which prevents me from behaving at all times as if my other
values are true. This is very different from your earlier uncertainty
claim.
It seems to me that here you're assuming, as always, the determinist
viewpoint. In other words, you're assuming that circumstances of birth and
education absolutely determine someone's capacity to accurately perceive the
good. If, on the other hand, you grant free will and an innate, although far
from all-powerful, capacity for such perception, then everyone has a chance -
indeed, meeting you might very well be their best chance. I think I can
satisfy your fairness criterion so long as everyone has a chance, even if not
an equal chance - in a worst-case scenario I could argue for a benevolent god
who matches reward with degree of difficulty.
Secondly, I'm not at all clear on why fairness, even if a
universally accepted metavalue, is necessarily more significant than other
values. Perhaps it's simply a lowest common moral denominator of little overall
significance.
D. It does seem as if everything does ultimately depend on the
"What would you say to Pol Pot?" case. I think it's time we dealt with it
directly. I'll start by defining what I see as the issue between us.
It seems clear that we would give very different responses in
that case. You would tell Pol Pot that he was wrong and should recognize the
evil of his ways, while I would tell him that he can't be sure he's right. I
think that these differences reflect different goals. I want to convince Pol
Pot to stop, and so save people's lives, while you want to be able to convince
yourself (and others) that Pol Pot should stop, and thus save your soul.
A. That is a neat way of putting it, although I think a little too
neat. For instance, you assume that your argument is more likely than mine to
convince the Pol Pot types. We'll debate that later, but I'd like to start by
asking you whether you really don't care about what you define as my interest.
Have you really no concern for your soul?
D. That's a hyperbolic, and I think unfair, way of putting the
issue. I could ask you just as easily whether you have no concern for human
life. Obviously, the question is not about whether the other value has any
significance, but rather about which has greater significance. And I think my
position, that other lives are more important than my soul, is more tenable
than the alternative.
A. You're right. At the same time, though, I still feel that my
impression of absolute rather than relative concern was well founded - you
really don't care much about your soul.
D. Or maybe I'm just not worried about my soul.
A. You mean you're confident that it's safe?
D. Sure.
A. I suppose that's because you're convinced of the supreme
importance of fairness.
D. Or at least that G-d will be fair, and fairness was a plausible
value to base my life on in the absence of personal revelation.
A. Personal revelation?