PHL310 Valid Reasoning II, past assignments

Past Assignments
27 January
I'd like to see what system you learned, and where you are at. Five problems, just for me to see. The first two are just translations into some kind of formal logic. For problem 1, use a propositional logic (the smallest things in your language will be sentences). For problem 2, use a quantified logic with predicates. For 3-5, generate a proof (a syntactic proof, not a truth table). Use whatever rules or proof methods you remember.
  1. If both Nemo and Jaws are fish, then neither Patrick nor Mr. Krab are.
  2. All men are mortal.
  3. Premises: (P → Q), (R → Q), (P v R). Conclusion: Q.
  4. Conclusion: ((P → Q) → (¬Q → ¬P))
  5. Premises: ∀x(Fx ↔ Gx), ∃xFx. Conclusion: ∃xGx

3 February
Here's a homework with two tracks, since some of us are catching up and some remember the material well. If you remember the propositional logic, do (or at least try) track 2. Otherwise, do track 1.

1. Read chapters 1 through 5 of A Concise Introduction to Logic, and answer the following questions from the book:
  1. Chapter 2 question 5.
  2. Chapter 3 question 1a and question 2a.
  3. Chapter 4 question 1b, d, f, h.
  4. Chapter 5 questions 1 and 2d.
2. We can make use of a theorem -- or of a special sentence called an "axiom" -- in the following way. If you replace every occurrence of a sentence letter in a theorem or axiom with some sentence (making the same replacement at each occurrence), you also get a theorem or axiom. So, if ((P → Q) → (¬Q → ¬P)) is a theorem, you can replace P with R, and Q with (S v T), and get the following theorem: ((R → (S v T)) → (¬(S v T) → ¬R)).

It turns out that we can do away with indirect derivation if we allow ourselves certain sentences as axioms (an axiom is a sentence that we assume--it's like a very special premise; but we treat it the way we treat theorems in that we'll allow versions or instances of it).

Prove the following:
Premises: (R → S), (¬S v ¬R).
Conclusion: ¬R.
Prove it without using indirect derivation: use a direct derivation instead, together with (((P → Q) ^ (P → ¬Q)) → ¬P) as your axiom. (If you feel stuck, first prove it using indirect derivation, and then look at your proof and think about how you can use that axiom....) The way to use the axiom is: any time you like, you can write down either (((P → Q) ^ (P → ¬Q)) → ¬P) itself, or a sentence that is made from it by replacing each P with some sentence, and each Q with some sentence. Your justification on the right is "axiom".

Try this one if you feel ambitious. Using that axiom, can you prove without indirect derivation:
Premises: (P → R), (Q → R), (P v Q).
Conclusion: R.
You asked for hints. They're not that hard, actually. Consider first whether you can get the conclusion just from an instance of the axiom and modus ponens. The instance of the axiom you'd want for the first problem is:
(((R → φ) ^ (R → ¬φ)) → ¬R)
And for the second it would be:
(((¬R → ψ) ^ (¬R → ¬ψ)) → ¬¬R)
In each case, what will you use for φ, and what will you use for ψ?
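For example, here is one way the pieces might fit together for the first proof (a sketch only; it assumes you have some rule of adjunction for forming conjunctions, and some rule for trading the disjunction in the second premise for a conditional -- use whatever versions of those rules you remember):
1. (R → S) ...premise
2. (¬S v ¬R) ...premise
3. (R → ¬S) ...from 2, trading the disjunction for a conditional
4. ((R → S) ^ (R → ¬S)) ...adjunction 1, 3
5. (((R → S) ^ (R → ¬S)) → ¬R) ...axiom, with S in place of φ
6. ¬R ...modus ponens 4, 5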

12 February
Last two-track homework! Do problems 1 and 3, or problems 2 and 3.

1. Read chapters 11-16 of A Concise Introduction to Logic. Do problems:
  • Chapter 11, problems 1a, 1b, 1c, 2b, 2c.
  • Chapter 12, problem 2
  • Chapter 13, problems 1a, 1e
  • Chapter 14, problems 1a, 1b

2. It turns out that we can say everything that can be said in the propositional logic using just two connectives: ¬ and →. Prove this by finding expressions equivalent to (P v Q), (P ^ Q), and (P ↔ Q) that use only negation and conditionals. Make truth tables to show that your expressions have the same meaning as the disjunction, conjunction, or biconditional, respectively. (An optional mechanical way to check a candidate expression is sketched at the end of this assignment.)

3. Prove the following theorems.
  • (∃x∀yFxy → ∀y∃xFxy)
  • (∀x∃y(Fx → ∃z(Gz → Hy)) → ((∃xFx ^ ∀xGx) → ∃xHx))
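Regarding problem 2: any way of building the truth tables is fine, but if you want a quick mechanical check of a candidate expression, a few lines of Python (entirely optional, not part of the assignment) will do it. Here the candidate (¬P → Q) is compared with (P v Q); the candidates for the conjunction and the biconditional are left to you.
import itertools

def implies(a, b):
    # truth table of the material conditional: false only when a is true and b is false
    return (not a) or b

for p, q in itertools.product([True, False], repeat=2):
    lhs = p or q                 # (P v Q)
    rhs = implies(not p, q)      # candidate built from ¬ and → only: (¬P → Q)
    print(p, q, lhs, rhs, lhs == rhs)
If the last column prints True on all four rows, the two expressions have the same truth table.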
24 February
Prove the following. You can do these informally. Dean rightly asked: hey, how can we know when it is OK to skip steps? I would say: if something does not seem obvious to you, then don't skip it. (A sample informal proof of the first one is given after problem 4.)
  1. If A ⊆ B and B ⊆ A, then A = B
  2. If A ⊆ B and B ⊆ C, then A ⊆ C
  3. A ⊆ B if and only if A ∩ B = A
  4. A ⊆ B if and only if A ∪ B = B
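To give a sense of the level of detail I have in mind, here is roughly how the first one might go (just a sample; your own wording is fine). Suppose A ⊆ B and B ⊆ A. Take any x. If x ∈ A, then since A ⊆ B, x ∈ B; and if x ∈ B, then since B ⊆ A, x ∈ A. So x ∈ A if and only if x ∈ B. Since x was arbitrary, A and B have the same elements, and so A = B.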
4 March
Problems due at the beginning of class. Try to prove 1-3 using just our axioms and modus ponens and any earlier theorem we proved (such as (P → P) or the earlier problems).
  1. (A → (B → C)), (A → B) |-L (A → C)
  2. |-L ((A → B) → ((C → A) → (C → B)))
  3. |-L (B → (¬B → C))
  4. prove that B ∩ ∅ = ∅
  5. prove that B ∪ ∅ = B
Remember that "|-L" just means provable in our system L. The stuff on the left is premises; the stuff on the right is your conclusion.

I'll allow you the chain rule to make things simpler. For problem 3, I'll allow the deduction theorem (this means that you can do the equivalent of a conditional derivation). If you get hopelessly stuck, use the deduction theorem on each one, and I'll give partial credit. Remember, I'm calling this the chain rule:
(A → B)
(B → C)
(A → C)

For the problems in set theory, prove them using the natural deduction system (that is, formally). Be sure to remember that the definition of the empty set is either ¬∃x x ∈ {} or ∀x ¬x ∈ {}. Use whichever is convenient. And remember that ∅ just means {}.

Hints: I was asked how many steps problem 2 took me. If you use the chain rule, it can be done in three lines! For problem 3, I think it's long without the deduction theorem, so use that; then you should be able to prove it in under 12 lines. Remember that the deduction theorem is just like conditional derivation. So the proof will have the shape:
1. B ...Hypothesis
2. ¬B ... Hypothesis
3. work work... ...Axioms and stuff
4. C ...modus ponens using earlier lines
5. |-- (¬B → C) ...deduction theorem 2-4
6. |-- (B → (¬B → C)) ...deduction theorem 1-5
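And as a model for problems 1-3, here is one way problem 1 might go, assuming our second axiom has the usual form ((A → (B → C)) → ((A → B) → (A → C))):
1. (A → (B → C)) ...premise
2. (A → B) ...premise
3. ((A → (B → C)) → ((A → B) → (A → C))) ...axiom
4. ((A → B) → (A → C)) ...modus ponens 1, 3
5. (A → C) ...modus ponens 2, 4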
7 March
The last chapter of my intro logic book includes the axioms, the proof of the deduction theorem, and other relevant material. It's at
9 March
In class, we'll review the deduction theorem. I'll make you prove it! So review it before class. Then we'll prove the consistency of PL. We will begin the proof of the completeness of PL.
11 March
In class, we'll finish the completeness proof of PL.

Some practice, based on our discussion of the last homework. Please bring your answers to class.
  1. |-- (¬¬ φ → φ)
  2. |-- ((¬ψ → ¬φ) → (φ → ψ))
  3. Prove ((A ∩ (B ∪ C)) = ((A ∩ B) ∪ (A ∩ C)))
For problem 1: use the deduction theorem. Hint: consider the following instance of A3: ((¬φ → ¬¬φ) → ((¬φ → ¬φ) → φ)). Also: remember that (¬φ → ¬φ) is an instance of a theorem that we proved, so you can assert it at any time.
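If you want to check your work, here is one shape the proof of problem 1 can take, assuming A1 has the usual form (P → (Q → P)):
1. ¬¬φ ...hypothesis
2. (¬¬φ → (¬φ → ¬¬φ)) ...A1
3. (¬φ → ¬¬φ) ...modus ponens 1, 2
4. ((¬φ → ¬¬φ) → ((¬φ → ¬φ) → φ)) ...A3
5. ((¬φ → ¬φ) → φ) ...modus ponens 3, 4
6. (¬φ → ¬φ) ...theorem proved earlier
7. φ ...modus ponens 6, 5
8. |-- (¬¬φ → φ) ...deduction theorem 1-7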
For problem 2: deduction theorem and/or chain rule if you like. Consider the following instance of A3: ((¬ψ → ¬φ) → ((¬ψ → φ) → ψ)).
For problem 3: you are proving ∀x((x ∈ A ∧ (x ∈ B v x ∈ C)) ↔ ((x ∈ A ∧ x ∈ B) v (x ∈ A ∧ x ∈ C))). A rule that I don't teach, but one that might be helpful in making this proof shorter, is constructive dilemma. Constructive dilemma lets you infer the last line below from the three lines above it:
(P v Q)
(P → R)
(Q → R)
R
14 March
16 March
Midterm. Natural set theory. Using the three axioms of standard propositional logic. Proving the Deduction Theorem. Proving completeness of the Propositional Logic if you already have Kalmar's lemma. What is completeness? What is consistency?
14 March
Axiomatic first order logic.

Regarding the test: the range was 4-50, with an average of and a standard deviation of .
1 April
Our brief notes about models.

Practice: Consider a model where the domain is the natural numbers. For each of the following predicates (each given either in the usual math notation or stated in English), give an example of a relation that satisfies the predicate. The relation will be a set containing ordered n-tuples for a predicate of arity n, written between "< >", which we take to mean an ordered set. You need to give only one or two examples of those n-tuples. So, for example, {...<3, 6>...} would be an example for the predicate "___ is half the size of ___."
  1. >
  2. =
  3. a prime less than or equal to....
  4. ... is a number between ... and ....
5 April
Dean rightly pointed out that my proof illustrating how to use induction was somewhat trivial given that I took as a rule the indiscernibility of identicals. Here then is a version that doesn't do that. Recall that we let φ(x) be x = 0+x. To keep the proof short, I cheated: I assume the theorem (which takes about 20 lines to prove) that (x = y → (y = z → x = z)). Call this theorem 1. And I assume that (x = y → y = x); call this theorem 2; this one takes fifteen steps.
    Base case
  1. 0+0=0 A5
    induction axiom
  2. (φ(0) → (∀x(φ(x) → φ(x')) → ∀x φ(x))) A9
  3. (∀x(φ(x) → φ(x')) → ∀x φ(x)) MP 1, 2
    induction hypothesis
  4. x = 0 + x hyp
  5. (x = 0+x → x' = (0+x)') A1
  6. x' = (0+x)' MP 4, 5
  7. 0+x' = (0+x)' A6
  8. (0+x' = (0+x)' → (0+x)' = 0+x') Theorem 2
  9. (0+x)' = 0+x' MP 7, 8
  10. (x' = (0+x)' → ((0+x)' = 0+x' → x' = 0+x')) Theorem 1
  11. ((0+x)' = 0+x' → x' = 0+x') MP 6, 10
  12. x' = 0+x' MP 9, 11
  13. (x = 0 + x → x' = 0 + x') DT 4-12
  14. ∀x(x = 0 + x → x' = 0 + x') Gen
  15. ∀x φ(x) MP 3, 14
4 April
Let's play with arithmetic to get a feel for it. Using just the axioms of S (page 150 in the latest edition of our book) prove that:
  1. 3+2 = 5 (In the almost-object language, this means you'll prove that 0''+0'''=0'''''.)
  2. 3 x 2 = 3 + 3 (In the almost-object language, this means you'll prove that 0''*0'''=0'''+0'''.)(Hint: look at S8; the dot is multiply.)
Let's introduce the following principle: we won't care about the order of the arguments to + or *. That's a bit of a cheat, but we know they give the same answer either way, so let's make our lives easier. Thus, we'll say x+0 can be freely substituted for 0+x, and so on. Call these "theorem +" and "theorem *", just to make your proofs shorter.
Theorem +: x + y = y + x
Theorem *: x * y = y * x
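To give a feel for problem 1, the computation behind it looks like this (assuming the recursion axioms for + have the form x + 0 = x and x + y' = (x + y)', as in our induction example; in the formal proof, each step is an application of one of those axioms together with the substitution of equals):
0'' + 0''' = (0'' + 0'')' ...by x + y' = (x + y)'
           = ((0'' + 0')')' ...same axiom again
           = (((0'' + 0)')')' ...same axiom again
           = (((0'')')')' ...by x + 0 = x
           = 0'''''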
15 April
Practice due at the beginning of class. Try these early and then let me know if you need hints. You can use any rules of propositional logic, but no axioms or corollaries of modal logic other than the axioms of the system specified in the question. Remember that all our systems contain M, so all of them include m1 and m2 as axioms.

Remember that for these questions it is helpful to use contraposition and also the fancy new version of DN.
  1. Prove in S4 (that is, using only m1, m2, m4, necessitation): ([]A ↔ [][]A)
  2. Prove in S4 (that is, using only m1, m2, m4, necessitation): (<>A ↔ <><>A)
  3. Prove in S5 (that is, using only m1, m2, m5, necessitation): ([]A ↔ <>[]A)
  4. Prove in S5 (that is, using only m1, m2, m5, necessitation): (<>A ↔ []<>A)
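For instance, problem 1 splits into two conditionals, and (assuming m1 is ([]A → A) and m4 is ([]A → [][]A)) each direction is very short:
1. ([]A → [][]A) ...m4
2. ([][]A → []A) ...instance of m1
3. ([]A ↔ [][]A) ...biconditional introduction 1, 2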
20 April
We'll review our homework, and then discuss essentialism. Also worth mentioning: Chomsky's recent views on object permanence.
22 April
  1. Show S5 is equivalent to the combination of Brouwer and S4. How will you do this? Show that from the axioms of Brouwer (m1, m2, m3) and S4 (m4) you can derive the remaining axiom of S5 (m5); and then show that from the axioms of S5 (m1, m2, m5) you can derive the remaining axioms of Brouwer and S4 (m3 and m4).
  2. You can redo any of the last problems, and I'll give you most of the credit.
I was asked for hints. You're proving three things. First, that you can derive m5 using only m1, m2, m3, and m4. Second, that you can derive m3 using only m1, m2, and m5. Third, that you can derive m4 using only m1, m2, and m5.

The second one is perhaps easiest. Prove (φ → <>φ), and consider the chain rule.
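Spelled out a little more, the second one might go like this (using the handy equivalence between ¬[]¬ and <>):
1. ([]¬φ → ¬φ) ...instance of m1
2. (φ → ¬[]¬φ) ...contraposition and DN, 1
3. (φ → <>φ) ...from 2, by the handy equivalence
4. (<>φ → []<>φ) ...m5
5. (φ → []<>φ) ...chain rule 3, 4 (and this is m3)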

The first is perhaps next easiest. Consider that with m4 you can prove (<><>φ → <>φ). Now with necessitation you get [](<><>φ → <>φ). With m2, what can you derive from that? Finally, consider the following instance of m3: (<>φ → []<><>φ). Invoke the power of the chain rule!

What about the third one? I think that's toughest. Note that we proved above that in S5 we can prove (φ → []<>φ). An instance of this theorem is ([]φ → []<>[]φ). Keep in mind the handy equivalence (¬<>¬φ ↔ []φ). With the following instance of m5, (<>¬φ → []<>¬φ), and a lot of contraposition and messing with DN and using that handy equivalence, you can get (<>[]φ → []φ). With necessitation and m2, you can get ([]<>[]φ → [][]φ). Chain rule!

By the way: primitive reference, at 2:00 minutes.
25-29 April
I will be in Arizona at the big Consciousness Conference. However! We will still have class online--we'll log into blackboard, I'll record a lecture, and we can discuss in a usergroup. So don't plan to use our class hour for anything else!
25 April
Two problems for you. They are rather conceptual/philosophical.
  1. What if we let α mean always, and σ mean sometimes (at least once). Suppose ¬α¬φ is the same as σφ. What should our axioms for such a logic of time be? At the very least, consider each of m1-m5; for each in turn, does it make sense (where you replace box with α and diamond with σ)? Does it seem to capture what you know about time? (Also, assume that your sentences are not time-stamped; thus, to say "P" means that P is true now. Now takes the place that the actual world takes in modal logic.)
  2. Given Kripke's semantics (and his claims about the required accessibility relation), is S5 a good logic of time (given our usual assumptions and experience of time)?
Email me your answers before the end of Tuesday.
27 April
Three questions.
  1. What if we let Οφ mean that it ought to be so that φ? Let ¬Ο¬φ be Ρφ; interpret this to mean that it is permissible that φ. What should the axioms of a logic of ethics be? Consider m1-m5, replacing box with Ο and diamond with Ρ. Which of these seem appropriate? Why? Would you propose any axioms of your own?
  2. Many think that (given the rewrite I describe above) the system M is probably not a good logical system to use as our deontic logic. Why do you think so? Explain.
  3. One of the first deontic logics was proposed by Mally. It had the following axioms. "u" is a constant that means some unconditionally obligatory fact.
    • DM1: (((S → ΟQ) ^ (Q → R)) → (S → ΟR))
    • DM2: (((S → ΟQ) ^ (S → ΟR)) → (S → Ο(Q ^ R)))
    • DM3: ((S → ΟQ) ↔ Ο(S → Q))
    • DM4: Οu
    • DM5: ¬(u → Ο¬u)
    • DM6: ΟS if and only if for every Q, (Q → ΟS)
    DM6 is expressed in the metalanguage because we don't want to go second order. Many philosophers considered this system problematic because of some of the theorems of the system. For example, ((ΟS v ΟQ) → Ο(S v Q)) is a theorem. Many find that objectionable. Another more obvious problem is related to problems you probably already discussed above: it turns out you can derive (ΟQ ↔ Q) in the system. What about such a theorem would seem to contradict what most of us assume about ethics?
Send me your thoughts before the end of Thursday.
29 April
Some thoughts about the last homework:

A few of you did not object to axiom m1; you're crazy! Do you really believe that everything that ought to be true, is true? That seems inconsistent with our experience of the world.

Some of your answers to your own axioms for deontic logic:
  • Jared proposed as an axiom (Οφ → ΟΡφ), which seems reasonable.
  • Ian and Dean proposed (Οφ → Ρφ), which is a good idea; if we ditch axiom m1 I think we will need it.
  • Here's a very interesting one: Patrick proposed Ο(Ρφ → ΟΡφ).
Here's an extra-credit problem. Years ago, a woman I knew who was getting her MBA came to me to tell me about an argument her instructor had made. It bothered her, because she thought it invalid. Do you think it is invalid, and if so, why? If you think it is invalid, can you identify what it seems to presume that a deontic logic might reject? If you think it is valid, do you think it is sound (that is, do you accept all the premises as true)?
  1. You should be excellent.
  2. If you live in our society, then if you are excellent then you make much money.
  3. You live in our society.
  4. Conclusion: you should make much money.
Assume that "you should" and "it ought to be the case" are the same. So "You should be excellent" and "it ought to be the case that you are excellent" would both be ΟE if E meant "You are excellent."

(Hint: here's something that bothered me, and I told my friend about it. Compare the following two hypothetical duties: Do you have a duty to get your clothes wet? Do you have a duty to save the life of a child if you can do so without undue danger? I suspect you answered no and yes. Now, suppose there is a child drowning in a shallow pond, and you can save the child without undue danger. Note that if you save the child then you will get your clothes wet.)
4 May
Last homework! Answer the following questions. Use all your propositional logic skills and theorems and rules.
  1. Using just system M, prove the following theorem: (([]A ^ <>B) → <>(A ^ B)).
  2. Using just system M, prove that if we have as a premise that []A, then we can prove [](B → A).
  3. Let the basic deontic logic be the first three of these axioms:
    DB1 (ΟQ → ΡQ)
    DB2 (Ο(Q ^ R) ↔ (ΟQ ^ ΟR))
    DB3 Ο(Q v ¬Q)
    DB4 (ΟQ → ((Q → ΟS) → ΟS))
    DB5 Ο(ΟQ → Q)
    Prior proposed the fourth formula as an additional axiom. Many philosophers add the fifth. (I'm using Omicron and Rho in those formulas; sorry that it just looks like O and P.) In this system, with all five axioms and any propositional logic rules (but there is no rule like necessitation), prove the following argument is valid, so that our robot programmers can know they saved the day. Shakey the robot reasons as follows:
    One ought to save Tommy. One ought not to endanger Timmy. If one is to save Tommy, one ought to guard the cliff. If one is not to endanger Timmy, one ought to hide the knives. So, one ought to both guard the cliff and hide the knives.
Some hints. For #1, use a conditional derivation, and then an indirect derivation. Also, consider that (A^B) is really just (or, is replaceable with) ¬(A → ¬B). Here's a novel suggestion for #2: use good old axiom 1 (which, if you like, you can just call a theorem). Look at m2, while you're at it. Let me know if you want other hints.
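For #2, for example, the proof might take this shape (assuming axiom 1 has the usual form (A → (B → A)) and m2 is the distribution axiom ([](A → B) → ([]A → []B))):
1. []A ...premise
2. (A → (B → A)) ...axiom 1
3. [](A → (B → A)) ...necessitation 2
4. ([](A → (B → A)) → ([]A → [](B → A))) ...instance of m2
5. ([]A → [](B → A)) ...modus ponens 3, 4
6. [](B → A) ...modus ponens 1, 5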
6 May
Let's review our homework and any other questions you may have. Here are the overheads of our recent discussions.

Please complete today the online evaluation of my professoring. You should have received a link in email. I don't see your evaluations until after final grades are turned in.
9 May
Final exam 10:30 am - 12:30 pm.
Questions will concern:
  • proofs in FOL with just the axioms;
  • Cantor's theorem;
  • simple proofs in arithmetic;
  • simple proofs in modal logic;
  • conceptual questions about modality;
  • essentialism, de dicto vs. de re.
Douglas asked me about a proof of Cantor's Theorem. I have a version at: