Some Speculations about Practical Implications of the Limits of Reason
The Limits of Reason in Everyday Life
Mathematics as a (contingent) Science?
- We have described the great power of formal reason, but also
identified a number of limits:
- Godel's incompleteness theorems: for systems of
reasoning of sufficient strength, there are true but
unprovable statements.
- The Halting Problem: there is no effective
procedure to find all the effective procedures (reason
cannot find the precise limits of reason).
- Complexity Incompleteness: generally, a theory
of sufficient strength cannot decide questions of
complexity substantially exceeding the complexity of
the theory itself.
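The Halting Problem limit above can be illustrated with a short sketch of the classic diagonalization. This is only an illustration: the names `halts` and `diagonal` are invented here, and `halts` is a hypothetical oracle which, by the very reasoning in the comments, cannot actually be written.

```python
# Sketch of the diagonalization behind the Halting Problem.
# `halts` is a HYPOTHETICAL oracle (an assumption for the argument);
# no total, correct version of it can exist.

def halts(program, argument):
    """Pretend oracle: returns True iff program(argument) halts."""
    raise NotImplementedError("No such total, correct procedure exists.")

def diagonal(program):
    # If the oracle says program(program) halts, loop forever;
    # otherwise halt immediately.
    if halts(program, program):
        while True:
            pass
    return "halted"

# Now ask: does diagonal(diagonal) halt?
# - If halts(diagonal, diagonal) is True, diagonal loops forever:
#   the oracle was wrong.
# - If it is False, diagonal halts immediately: the oracle was wrong.
# Either way the assumed oracle errs, so it cannot exist: reason
# cannot effectively find the precise limits of effective procedures.
```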
- Our final question is: Do these limits ever have
practical implications? Do we, or will we, ever run up
against them?
- One caveat that needs to be considered is that it is not
clear that we will notice when we hit such a limit: we may be
blind to some limits because we see problems in terms of what
we can do.
- But, we can plausibly describe some potential limits that
could arise. Here are some speculative questions; these are not
meant to be definite queries so much as illustrations of how we might
start considering how the limits of reason could have
practical implications.
- Have we blurred some distinctions between math
and empirical science?
- Does complexity tell us anything about consciousness?
- Do the limits of syntactic/formal methods reveal
a flaw in the computational theory of mind?
- Must ethical rules be comprehensible?
- What does this mean for reason, and for the special case of
reason par excellence, mathematics?
- One way we can interpret the limitations that we've seen is
that we will have to add information to our theories whenever
we are trying to reason about things more complex than our theory.
- If you cannot prove P, but P looks likely or at least very
useful, you may just need to assume P and work with it (perhaps
abandoning P later if it turns out to lead to contradictions).
- But this now blurs the lines between pure reason and some
of the fallible aspects of empirical science! Mathematics so
practiced has strong analogies with science: we
assume some things and work with them because of their power
to explain, but are open to revision in the future! Our theories
become pragmatic and fallible!
Applications to the Philosophy of Mind I: Consciousness and Complexity
Anyone interested in a slightly more rigorous version of the
argument I gave (or will give) in class might want to read
Physical Theory and the Complexity of Phenomenal Experience.
Applications to the Philosophy of Mind II: The Lucas Argument
- The philosopher Lucas first proposed an argument that has
recently become widely known (in a different form) because of
Roger Penrose. Below is something like Penrose's argument in
The Emperor's New Mind.
- Recall Godel's First Incompleteness theorem.
- Godel showed us that, in arithmetic with
multiplication, we can create a well-formed expression
that can be interpreted to say "I am not provable."
Call this G.
- Such an expression is either provable or not.
- If G is provable, then the system is inconsistent.
- We typically like to assume arithmetic with
multiplication is consistent. Hence, assuming that,
G is unprovable.
- If G is unprovable, G is true.
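The steps above can be summarized schematically. This is an informal rendering of the standard first-incompleteness reasoning, with T standing for a theory like arithmetic with multiplication:

```latex
% G is constructed (by diagonalization) so that:
%   G  <->  "G is not provable in T"
%
% 1. If T proves G, then T proves a sentence asserting its own
%    unprovability, and T turns out inconsistent.
% 2. So, assuming T is consistent, T does not prove G.
% 3. But "G is not provable in T" is exactly what G says,
%    so G is true (in the standard interpretation).
\[
  \mathrm{Con}(T) \;\Longrightarrow\; T \nvdash G
  \quad\text{and yet}\quad G \text{ is true.}
\]
```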
- Penrose makes a distinction between "formalist" truth and
truth (note: a formalist may deny this
characterization). Formalist truth is what we would have in
mathematics if we adopted one kind of "formalist" view: "true"
would mean "provable" (in some system T).
- But G is true but not provable. If we adopt the formalist
truth notion, then G is not true. But, Penrose observes, it's
obviously true. So, we must abandon the formalist truth notion.
The result is that truth is something beyond the reach of mere
formal provability.
- Penrose claims we see in this case a mental capability
that is beyond the reach of formal methods. If he is right,
then the claim that we could make a thinking Turing-equivalent
machine (or the claim that our minds are Turing-equivalent)
is false.
- This extra ability, he says, is already evident in, for
example, how we first go about setting up a mathematical system:
we start with axioms that are obviously true -- not merely
true in some other formal system.
- Penrose also suggests that the difference here is one
between syntax and meaning. He thinks there may be some analogy
with Searle's Chinese Room argument.
Ethical Rules and Comprehensible Justice
- From Kafka's The Trial:
"We are humble subordinates who can scarcely find our way
through a legal document and have nothing to do with your case
except to stand guard over you for ten hours a day and draw
our pay for it. That's all we are, but we're quite capable of
grasping the fact that the high authorities we serve, before
they would order such an arrest as this, must be quite well
informed about the reasons for the arrest and the person of
the prisoner. There can be no mistake about that. Our
officials, so far as I know them, and I know only the lowest
grades among them, never go hunting for crime in the populace,
but, as the Law decrees, are drawn toward the guilty and must
then send out us warders. This is the Law. How could there
be a mistake in that?"
"I don't know this Law," said K.
"All the worse for you," replied the warder....
"You'll come up against it yet."
Franz interrupted: "See, Willem, he admits that he doesn't
know the Law and yet he claims he's innocent."
"You're quite right, but you'll never make a man like that see
reason," replied the other.
- Must we be able to understand ethical rules, or the criteria
that are used to make an ethical evaluation, in order for these
to be just?
- Two plausible kinds of real-world cases:
- Connectionist networks to recognize patterns in
credit risk.
- Connectionist networks to recognize potential
security threats ("Total Information Awareness").
- It is important to understand that connectionist networks are:
a. Trained, not directly programmed.
b. Best understood using multidimensional state
spaces.
Because of these features, the internal "rule" that a connectionist
network utilizes might be irreducibly complex.
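The "trained, not directly programmed" point can be illustrated with a minimal sketch. The data here is invented for the toy example, and a single sigmoid unit stands in for a full connectionist network; the point is only that the learned "rule" ends up as an opaque list of numbers, not a readable policy:

```python
# Toy sketch: a single trained unit standing in for a connectionist
# network. Nobody writes down a decision rule; the "rule" emerges
# from training and is just a vector of learned weights.
import math
import random

random.seed(0)

# Invented toy data: (income, debt) -> 1 if "good risk", 0 otherwise.
data = [((3.0, 1.0), 1), ((2.5, 0.5), 1), ((0.5, 2.0), 0), ((1.0, 3.0), 0)]

w = [0.0, 0.0]  # learned weights
b = 0.0         # learned bias

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# Training: repeatedly nudge the weights toward the observed labels
# (simple per-sample gradient descent on logistic loss).
for _ in range(2000):
    for x, y in data:
        err = predict(x) - y
        w[0] -= 0.1 * err * x[0]
        w[1] -= 0.1 * err * x[1]
        b    -= 0.1 * err

# The unit now classifies the data, but its internal "rule" is
# only these opaque numbers -- no criterion was directly programmed.
print(w, b)
print([round(predict(x)) for x, _ in data])
```

Scaled up to thousands of units and weights, the learned rule may admit no description simpler than the network itself, which is exactly the irreducible-complexity worry raised above.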
- Suppose now we have an accurate tool to recognize
credit-risky or security-risky individuals, but that it is
irreducibly complex. We know what the inputs are, but we cannot
explain how it decides in any simpler way than just describing
the entire network.
a. Could it ever be just to deny someone credit, or a space
on a plane, using such a rule? How accurate would such a
rule need to be before we accept it?
b. Should we allow rules that (because they are very
Kolmogorov-Complex) we cannot understand, and to which we
cannot mount any objections? (Consider Kafka's The
Trial -- could that be a just situation?)
- Some preliminary thoughts
a. If we are realists about ethics, and think that
there really are such things independent of our
understanding as good, evil, and so on, then perhaps
we must allow that a situation like that described
above could be both incomprehensible and morally
good.
b. If we decide we must restrict ethical decisions to
what can be comprehended, this would be a very new
kind of restriction on ethics. Many tasks arise as a
result. For example, we need to define what can be
comprehended (no small task!) and we need to explain
how incomprehensible rules can be used, if at all.