**Natural Deduction**

**I. Four rules**

Having learned from truth tables how to identify simple valid argument patterns, we can now use a set of those patterns as rules or models. That is, we can be confident that whenever we encounter one of those valid patterns, even though the content is different, we really are looking at an argument whose conclusion follows from its premises.

Let’s make this clearer with some examples. Possibly the most obviously valid argument form, intuitively, is the **Disjunctive Syllogism**. The form, which we’ll begin abbreviating now as **DS**, is this:

**p v q / ~p // q**

Either p or q, but not p, so q.

This is the form of the following argument:

Either apples and bananas are both vegetables or else they are both fruit. But it’s false that apples and bananas are vegetables, so they must be fruit.

In symbols:

(A ∙ B) v (F ∙ R)

A= Apples are vegetables

B= Bananas are vegetables

F= Apples are fruit

R= Bananas are fruit

(A ∙ B) v (F ∙ R) / ~(A ∙ B) // F ∙ R

For that matter, this one has the same form:

Either apples, bananas and pears are all vegetables or else they are all fruit. But it’s not the case that apples, bananas and pears are all vegetables, so they must all be fruit.

((A ∙ B) ∙ P) v ((F ∙ R) ∙ U) / ~((A ∙ B) ∙ P) // (F ∙ R) ∙ U

It’s very obvious that this is a valid argument, but if we were to try to show it by a truth table, it would take 64 lines! We would probably spend half an hour setting up and filling in the values, and we can tell that that would be a waste of our time. (The previous version would have required 16 lines, since it had four simple statements.) We can tell that there is no way that table is going to show us anything other than validity. It is obvious that we have here two cases of the same form: in one case the variable “p” was replaced by the conjunction “A ∙ B” and the variable “q” was replaced by the conjunction “F ∙ R”; in the other case “p” was replaced by the conjunction of conjunctions “(A ∙ B) ∙ P” and “q” by the conjunction of conjunctions “(F ∙ R) ∙ U.”

So we are going to streamline things a bit and say that DS is a valid argument form or pattern, and that whenever we spot it, no matter how many simple statements may be involved, we are going to approve the conclusion as following from the premises.
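In fact, the whole 64-row check can be done mechanically in an instant. If you know a little Python, here is a small sketch that brute-forces the truth table (the helper name `valid`, and the encoding of the dot, wedge, and tilde as Python’s `and`, `or`, and `not`, are my own choices for illustration, not notation from this chapter):

```python
from itertools import product

def valid(premises, conclusion, n_vars):
    """Brute-force validity check: in every truth-table row where all
    premises come out true, the conclusion must come out true as well."""
    for row in product([True, False], repeat=n_vars):
        if all(prem(*row) for prem in premises) and not conclusion(*row):
            return False  # counterexample row: premises true, conclusion false
    return True

# DS itself: p v q / ~p // q  (only 4 rows)
print(valid([lambda p, q: p or q, lambda p, q: not p],
            lambda p, q: q, 2))  # True

# The 64-row instance: ((A.B).P) v ((F.R).U) / ~((A.B).P) // (F.R).U
prem1 = lambda a, b, p, f, r, u: (a and b and p) or (f and r and u)
prem2 = lambda a, b, p, f, r, u: not (a and b and p)
concl = lambda a, b, p, f, r, u: f and r and u
print(valid([prem1, prem2], concl, 6))  # True
```

The second call quietly grinds through all 64 rows, which is exactly why we are happy to let the pattern, rather than the table, do the work from now on.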

That is, whenever we see

[(A ∙ C) ⊃ V] v (L ≡ F)

~[(A ∙ C) ⊃ V]

we are going to say the following is the result, because those are just the premises of a Disjunctive Syllogism:

L ≡ F

Doing this is treating DS as a model or rule that justifies us in drawing inferences from premises that match the form of its premises. (The key to saying that the premises of an argument match the form of DS is that the main operators of the premises are the same as the main operators of the DS form’s premises.) We’ll call it a **Rule of Inference**. It’s the first of eight very simple ones that we will introduce in this chapter. This shift lets us move away from the utterly mechanical work of truth tables to the more creative and insightful work of constructing chains of inferences from given premises.

To keep your truth table skills sharp, you might want to build yourself a table for all or some of these, in order to see that they truly are valid forms.

**2) p ⊃ q / p // q**

is known as **MP (Modus Ponens)**

**3) p ⊃ q / ~q // ~p**

is known as **MT (Modus Tollens)**

Before we go any further, it’s important to know what is being said in these argument forms; the best way to get a handle on that is to describe what each premise is relative to the other premise. In Modus Ponens (which can be summed up as “If p then q; p, so q”), then, we would say: we have a conditional statement, and the affirmation of its antecedent; this always justifies inferring the consequent.

In Modus Tollens (“if p, then q; but not q, so not p”), we have a conditional statement and the negation of its consequent; this always justifies inferring the negation of the antecedent.
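The descriptions above can be confirmed mechanically. Here is a small Python sketch (the helper names `valid` and `implies` are my own; `implies` encodes the horseshoe as a material conditional) that checks MP and MT over all four truth-table rows, and, for contrast, shows that the look-alike fallacy of affirming the consequent fails:

```python
from itertools import product

def valid(premises, conclusion):
    """True iff no assignment to p, q makes all premises true
    while making the conclusion false."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

implies = lambda a, b: (not a) or b  # the horseshoe, p ⊃ q

# MP: p ⊃ q / p // q
print(valid([lambda p, q: implies(p, q), lambda p, q: p],
            lambda p, q: q))        # True
# MT: p ⊃ q / ~q // ~p
print(valid([lambda p, q: implies(p, q), lambda p, q: not q],
            lambda p, q: not p))    # True
# Affirming the consequent (NOT a rule): p ⊃ q / q // p
print(valid([lambda p, q: implies(p, q), lambda p, q: q],
            lambda p, q: p))        # False
```

The third check fails on the row where p is false and q is true: both premises hold there, but the conclusion does not. That one row is the difference between a rule of inference and a fallacy.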

So if we were trying to build an argument, with the conclusion “Doug’s not here today,” and we already had the premise “If Doug were here today, Joe would have seen him,” we’d be able to say what missing premise would supply the link. The missing premise would be “But Joe didn’t see him.”

Up to this point in the course, we’ve used “Hypothetical Syllogism” as a name for *any* sentential argument that had at least one conditional statement in it. But we have now differentiated between two such arguments and given them specific names. So from this point on, we will reserve “Hypothetical Syllogism,” **HS**, for an argument made up of *only* conditional statements, such that the consequent of one premise is the antecedent of the other, and the conclusion is the conditional that results from the connection so made:

**4) p ⊃ q / q ⊃ r // p ⊃ r** **HS**

**II. Three More Rules**

The next three are extremely simple and obvious:

**5) p ∙ q // p**

is known as **Simplification (SM)**, and just says that if two statements are both true, then one of them is true. But note that the rule is very specific: it says that if the two are true, then the one on the left is true. Of course, it is also the case that the one on the right is true, but the rule is written in such a way as to let us infer only that the one on the left is true. In a few minutes we’ll introduce a common-sense rule that says you can switch the order of statements occurring in a conjunction. Be patient. Follow the rules.

**6) p / q // p ∙ q**

is known as **Conjunction (CN),** and just says that if any two statements are given as true, then the conjunction built out of them is also true.

**7) p // p v q**

is known as **Addition (AD)**

Addition calls for a comment, and you will probably need some encouragement to begin using it, because even though it is obviously valid by a consideration of the truth table for the wedge (as long as one statement in a disjunction is true, the disjunction is true), this move will seem like cheating when you do it on paper:

*Eileen is standing on the south rim of the Grand Canyon, so either she is standing on the south rim of the Grand Canyon or she is standing on the north rim.*

You might well say “where did this sentence about the north rim come from?” And the answer would be “nowhere, we just pulled it out of thin air.” It’s the one time you can really have whatever you want, *for nothing* – when you **add** a statement onto a true statement by using a wedge. Make sure you use a **wedge** for this, not a dot: this sense of “adding” is *not* “one **and** one makes two.” The result of Addition is always a disjunction.
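All three of these rules can be checked the same brute-force way as before. The sketch below (again using my own helper `valid`, with the dot and wedge rendered as Python’s `and` and `or`) confirms SM, CN, and AD, and also confirms the warning just given: “adding” with a dot instead of a wedge is invalid.

```python
from itertools import product

def valid(premises, conclusion):
    """True iff every row making all premises true also makes
    the conclusion true."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# SM: p ∙ q // p
print(valid([lambda p, q: p and q], lambda p, q: p))   # True
# CN: p / q // p ∙ q
print(valid([lambda p, q: p, lambda p, q: q],
            lambda p, q: p and q))                     # True
# AD: p // p v q, where q is "pulled out of thin air", yet valid
print(valid([lambda p, q: p], lambda p, q: p or q))    # True
# "Adding" with a dot is NOT valid: p // p ∙ q
print(valid([lambda p, q: p], lambda p, q: p and q))   # False
```

The last line fails on the row where p is true and q is false: the premise is true there, but the conjunction is not. That is why Addition must use the wedge.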

**III. Three Simple Equivalences**

Since we have just introduced rules that involve dots and wedges, I want to mention three trivial points that always come up. As you know from glancing at a truth table, the conditions under which a conjunction or a disjunction is true or false do not depend on the order in which the component statements occur. That’s different in the case of the horseshoe; in conditional statements, the order in which the truth-values occur matters very much.

So it makes no difference, from the logical point of view, whether we say “p ∙ q” or “q ∙ p.”

And it makes no difference whether we say “p v q” or “q v p.”

When the order of the statements can be altered without change of truth-value, we say that the altered statement is the logically equivalent **Commutation** of the original. And obviously, if you commute the commutation, you are back at the original.

Because Commutation is a valid operation for dots and wedges, we might want to say that the following also exemplifies SM (Simplification):

(A ⊃ B) ∙ (C ≡ R)

// C ≡ R

In drawing this inference, it would be best to justify it by mentioning two rules: SM and CM.

Spelling this out exhaustively now:

1. (A ⊃ B) ∙ (C ≡ R)

2. (C ≡ R) ∙ (A ⊃ B) CM, 1

3. C ≡ R SM, 2

This shows the intermediate step of Commutation, spelled out. But we can abbreviate this to one step and list both justifications together.

Rules that state equivalences are a little different from rules of inference, because they allow us to move in either direction, i.e., they justify replacing a statement with an equivalent form, regardless of which is the form first encountered.

This will be represented by the use of four dots, ::, set between statement forms that are being identified as equivalent. Those dots indicate that we are making a claim about the statements themselves, rather than a claim about the things the statements name. So for instance,

P ≡ C might say “You are President if and only if you are Commander-in-Chief”,

but

(p ≡ q) :: [(p ⊃ q) ∙ (q ⊃ p)] says

“the statement (p ≡ q) is equivalent to the statement [(p ⊃ q) ∙ (q ⊃ p)]”

**8) Commutation (CM)**

(p ∙ q) :: (q ∙ p)

(p v q) :: (q v p)

It is an equally trivial matter to point out that the negation of a negated statement is equivalent to the original statement. Whenever we want to (or need to) it will always be allowable to write two tildes in front of an expression that had none before, or to take two off, if there were two to start. The rule that states this equivalence is called Double Negation.

**9) Double Negation (DN)**

**p :: ~ ~ p**

It is not quite as trivial as these –but quite close!– to point out that when we are dealing with three or more statements all linked by dots, or with three or more statements all linked by wedges, the grouping by parentheses is not going to affect the truth-value of the whole statement.

Why don’t you write up the truth table for Association (AS) yourself, to see that this is so?

**10) Association (AS)**

**((p ∙ q) ∙ r) :: (p ∙ (q ∙ r))**

Also: **((p v q) v r) :: (p v (q v r))**
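If you would rather not write out the eight-row table by hand, here is a Python loop that does the equivalent check (the encoding of the dot and wedge as Python’s `and` and `or` is my own shortcut): both groupings agree on every row, for conjunction and for disjunction alike.

```python
from itertools import product

# The truth table the text invites you to build: on all 8 rows,
# both groupings of a three-way conjunction agree, and likewise
# both groupings of a three-way disjunction.
for p, q, r in product([True, False], repeat=3):
    assert ((p and q) and r) == (p and (q and r))  # AS for the dot
    assert ((p or q) or r) == (p or (q or r))      # AS for the wedge
print("all 8 rows agree")
```

(Building the table by hand at least once is still worth doing; the loop just confirms what the table would show.)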

It should be obvious that Commutation and Association can be combined to regroup and reorder dot statements and wedge statements without changing their truth-values.

**IV. Constructive Dilemma**

The last rule in this set (the 11th overall) is the only one that really requires much effort to grasp. In fact, I think it makes better sense if you hear it described first, before you see how it looks. Try to visualize this as you read about it.

A **conjunction of conditional statements** is one premise.

The next premise is the **disjunction of the antecedents** of the conditionals.

The conclusion is the **disjunction of the consequents**.

It’s called a “**constructive dilemma**.” **CD**

**11) (p ⊃ q) ∙ (r ⊃ s) / p v r // q v s**

Here’s an example of the sort we might well encounter in real life on a television show: Imagine a woman comes home from work, and finds her husband despondent, sitting in the kitchen, and she says, “We can go out for dinner if you’re sick of cooking, and if you’re sick of staying home every night, we can go to a movie. I can tell from your face that you’re either sick of cooking or sick of staying home every night, so we’re either going out for dinner or for a movie.” (Given the weak sense of “or,” he might even get dinner *and* a movie.)

I hope the reversal of the antecedent and consequent in those statements didn’t confuse you. This is how that would look symbolized:

(C ⊃ D) ∙ (S ⊃ M) / C v S // D v M
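Since CD has four variables, its full truth table runs to sixteen rows, which makes a mechanical check especially welcome. This Python sketch (with the horseshoe again encoded as my own `implies` helper) walks every row and confirms that whenever both premises hold, the conclusion does too:

```python
from itertools import product

implies = lambda a, b: (not a) or b  # the horseshoe, p ⊃ q

# CD: (p ⊃ q) ∙ (r ⊃ s) / p v r // q v s, checked over all 16 rows
for p, q, r, s in product([True, False], repeat=4):
    if implies(p, q) and implies(r, s) and (p or r):
        assert q or s  # whenever both premises hold, so does the conclusion
print("CD is valid")
```

The reasoning inside the loop mirrors the rule itself: one of the antecedents must be true, so the matching conditional delivers its consequent, and either consequent is enough to make the concluding disjunction true.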

**V. All Together Now**

So here are these eight rules of inference:

**MP** p ⊃ q / p // q

**MT** p ⊃ q / ~q // ~p

**HS** p ⊃ q / q ⊃ r // p ⊃ r

**DS** p v q / ~p // q

**SM** p ∙ q // p

**CN** p / q // p ∙ q

**AD** p // p v q

**CD** (p ⊃ q) ∙ (r ⊃ s) / p v r // q v s

And then here are the other rules I mentioned, which are called “Rules of Equivalence (or replacement)”:

**CM** (p ∙ q) :: (q ∙ p) Also: (p v q) :: (q v p)

**AS** (p ∙ (q ∙ r)) :: ((p ∙ q) ∙ r) Also (p v ( q v r)) :: ((p v q) v r)

**DN** p :: ~ ~p

They are not too challenging to *understand*, but like much in Logic, the real work lies in mastering them. To begin to do that, we need to be able to spot them. Spotting them means being able to recognize them as the basic underlying patterns of arguments by focusing on the main operators of the premises and conclusions of arguments.

Before you even try to do these basic exercises, however, you would be smart to sit down and copy over on paper –five or six times each– each of these rules. As you do so, describe (out loud –yes, *out loud*, and *in complete sentences*) what you are writing, using words like “antecedent,” “consequent,” “negation,” “conjunct,” etc. The more you can name and describe what you are doing, the greater your intellectual command of it. If you are guessing, or trying to remember things by mere rote, you are wrecking your chances of learning the material.

In the next section you have two sets of exercises. First it is a matter of seeing which rule pattern is exemplified: focus on the main operators of the premises, and see which rule each one is a case of. This is a matter of recognizing the patterns that the rules are. Do not be concerned about the order in which you find the premises; that really makes no difference. Afterwards, you will have a “fill in the blank” exercise, where you will see one or more lines missing, and the task is to determine by what rule you can enter something into those blanks.

To learn these rules, sit down four or five times each day and write these eight rules out exactly as they are presented here (with lower-case letters –variables– not upper-case statement names), until you can write all of them from memory quickly. You should also make sure that you can describe what each of them does, by focusing on what the main operators of the premises are and what the conclusion is, relative to the premises. Doing this is crucial to using them effectively to write proofs. (Also, we have seven more Rules of Equivalence to learn, and having these eight Rules of Inference “down” will facilitate learning them and using them.)

A PowerPoint is attached that provides more examples with these eleven rules; exercises doing simple proofs with them yourself are provided in 9.3.


Another attached PowerPoint, called “First Rules at Work,” will walk through several inferences and proofs step by step.

A third attachment is a Word document called “Rules,” listing the 8 Rules of Inference and the 10 Rules of Equivalence or Replacement that we will be working with. You will want to print yourself a copy of it. Be advised: it is essential that you learn and commit to memory the 8 rules of inference. If you don’t, you will never gain facility at using them. You should also commit the rules of equivalence to memory as we learn them, but for purposes of tests, I will provide those ten for you to consult (to be sure you are applying them accurately). These 18 rules will be in play for the rest of the semester, even when we delve into Predicate Logic at the end.