Tag Archives: logic

Paradox of Logical Privilege

Let us assume that logic cleaves the world at its corners. Then everything can be divided into the logically privileged, that which makes up the corners, and the not logically privileged, that which makes up everything else.

Where then does the concept of logical privilege fall?

If logical privilege is logically privileged, then it sits at the corners, describing the division rather than the content. But then it cannot have described itself as something within the world. Hence it must not have been logically privileged in the first place.

If, on the other hand, logical privilege is not logically privileged, then it can not describe how the world is broken up into logical privilege and non logical privilege. This violates the initial assumption that the world can be so broken up. Hence logical privilege must be logically privileged.


I actually am rather certain there is something very wrong with the above, but I am testing out a new feature on the blog and needed a test post. So I dug this out of the drafts from May 2, 2014 as it is more interesting than saying ‘test post, please ignore’.

Posted in logic, philosophy.

Punny Logic

Update 12 Feb: This post has been expanded upon and, after submission, accepted for publication in Analysis, published by Oxford University Press. View the final version here.

[draft]

It is hard to explain puns to kleptomaniacs because they take things literally.

On the surface, this statement is a statement of logic, with a premise and conclusion.

Given the premise:

Kleptomaniacs take things literally.

We may deduce the conclusion:

It is hard to explain puns to kleptomaniacs.

Now, whether the conclusion strictly follows from the premise is beside the point: it is a pun, and meant to be funny. However, as a pun, it still has to make some logical sense. If it didn’t make any sense, it wouldn’t, and couldn’t, be funny either. While nonsense can be amusing, it isn’t punny.

What is the sense in which the conclusion logically follows from the premise then, and how does this relate to the pun?

Puns play off ambiguity in the meaning of a word or phrase. In this case the ambiguity has to do with the meaning of to take things literally. It can mean to steal, or it can mean to only use the simplest, most common definitions of terms.

In the first meaning, by definition, kleptomaniacs steal, i.e. they literally take things.

So, in this sense, “kleptomaniacs take things literally” is true.

In the second meaning, by deduction, since puns play off multiple meanings of things, it is hard to explain a pun to someone who only uses the single, most common definition of a term. That is, if they take things literally, they won’t recognize the multiple meanings required to understand a pun.

So if someone “takes things literally” it is true that it is hard to explain puns to them.

Therefore, between the two meanings, we can informally derive the statement: it is hard to explain puns to kleptomaniacs because they take things literally.

However, if we wanted to write this out in a formal logical language, then we would need a formal way to represent the two meanings of the single phrase.

Classically, there is no way to give a proposition multiple meanings. Whatever a proposition is defined as, it stays that way. A can’t be defined as B and then not defined as B: (A=B & A≠B) is a contradiction and to be avoided classically. But let’s start with a classical formulation:

Let:

TTL1 mean to Take Things Literally, in the 1st sense: to steal

TTL2 mean to Take Things Literally, in the 2nd sense: to use the most common definitions of terms.

Then

  1. ∀x [ Kx → TTL1x ]
    If anyone is a Kleptomaniac, then they take things literally (steal)
  2. ∀y[ TTL2y → Py ]
    If anyone takes things literally (definitionally), then it is hard to explain puns to them

What we want, however, is closer to:

  3. ∀z [[ Kz → TTLz] → Pz ]
    For anyone: if being a Kleptomaniac implies taking things literally, then it is hard to explain puns to them

with only one sense of TTL, but two meanings.

Since TTL1 ≠ TTL2, we can’t derive (3) from (1) and (2), as is. And if TTL1 = TTL2, then we would have (1) A→B, and (2) B→C, while trying to prove (3) A→B→C, which logically follows. However, there would no longer be a pun if there was only one meaning of TTL.
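As a quick sanity check, the classical point can be verified by brute force over truth assignments. This is a toy sketch in plain Python; the propositional encoding and all names are mine, not part of the original argument:

```python
from itertools import product

def implies(a, b):
    # Material conditional: a -> b
    return (not a) or b

def entails(premises, conclusion, n_atoms):
    # Premises entail the conclusion iff no assignment makes
    # all premises true and the conclusion false.
    for vals in product([False, True], repeat=n_atoms):
        if all(p(*vals) for p in premises) and not conclusion(*vals):
            return False
    return True

# With a single sense of TTL: K -> TTL and TTL -> P entail K -> P.
one_sense = entails(
    [lambda k, ttl, p: implies(k, ttl),
     lambda k, ttl, p: implies(ttl, p)],
    lambda k, ttl, p: implies(k, p),
    3)

# With two senses as distinct atoms: K -> TTL1 and TTL2 -> P
# do NOT entail K -> P.
two_senses = entails(
    [lambda k, t1, t2, p: implies(k, t1),
     lambda k, t1, t2, p: implies(t2, p)],
    lambda k, t1, t2, p: implies(k, p),
    4)

print(one_sense)   # True
print(two_senses)  # False
```

The brute-force check confirms the text: the derivation goes through only when the two senses of TTL collapse into one, and then the pun is gone.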

What is needed is to be able to recompose our understanding of ‘to take things literally’ in a situation-aware way. We need the right meaning of TTL to apply at the right time: Meaning 1 in the first part, and Meaning 2 in the latter.

Intuitively, we want something like this, with the scope corresponding to the situation:

  4. ∀z [ Kz → { TTLz ]1 → Pz }2

In this formula, let the square brackets [] have the first meaning of TTL apply, while the curly braces {} use the second meaning. Only the middle — TTL — does double duty with both meanings.

Achieving this customized scope can be done by using Independence Friendly logic. IF logic allows for fine-grained scope allocation.

So let:

S mean to steal.

D mean to take things definitionally.

Then:

  5. ∀x ∀y ∃u/∀x ∃v/∀y [ Kx → ( x=u & y=v & Su & Dv → TTLvu ) → Py ]
    If anyone is a kleptomaniac then there is someone who is identical to them who steals… and if there is someone who takes things definitionally then there is someone identical to them for whom it is hard to explain puns to… and the person who steals and the person who takes things definitionally then both Take Things Literally.

The scope gymnastics are being performed by the slash operators at the start and the equality symbols in the middle of the formula. They specify the correct meanings, the correct dependencies, to go with the correct senses: Stealing pairs with Kleptomania, and taking things Definitionally pairs with being bad at Puns, while both pairs also mean Taking Things Literally. With both pairs meaning TTL, and each pair composed independently, Equation (5) provides a formalization of the original pun.

Discussion

Finding new applications for existing logical systems provides a foundation for further research. As we expand the range of topics subject to logical analysis, cross-pollination between these subjects becomes possible.

For instance, using custom dependencies to associate multiple meanings to a single term is not only useful in describing puns. Scientific entities are often the subjects of competing hypotheses. The different hypotheses give different meanings, different properties, relations and dependencies, to the scientific objects under study. Logically parsing how the different hypotheses explain the world using the same terms can help us analyze the contradictions and incommensurabilities between theories.

On the other hand, while this article may have forever ruined the above pun for you (and me), it does potentially give insight into what humans find funny. Classically, risibility, having the ability to laugh, has been associated with humans and rationality. Analyzing this philosophical tradition with the new logical techniques will hopefully provide existential insight into the human condition.

Posted in independence friendly logic, logic.

Shaking the Tree

Life often results in situations such that no strategy suggests any further moves. We just don’t know what to do next. In a game of perfect information, where each player knows all the previous moves, this can signal stalemate. Take chess: given both sides know everything that has transpired and have no reason to believe that the opponent will make a mistake, there can come a time when both sides will realize that there are no winning strategies for either player. A draw is then agreed upon.

The situation is not as simple in games of incomplete information. Let’s assume some information is private, that some moves in the game are only known to a limited number of players. For instance, imagine you take over a game of chess in the middle of a match. The previous moves would be known to your opponent and the absent player, but not to you. Hence you do not know the strategies used to arrive at that point in the game, and your opponent knows that you do not know.

Assume we are in some such situation where we do not know all the previous moves and have no further strategic moves to make. This is to say we are waiting, idling, or otherwise biding our time until something of significance happens. Formally, we are at an equilibrium.

A strategy to get out of this equilibrium is to “shake the tree” to see what “falls out”. This involves making information public that was thought to be private. For instance, say you knew a damaging secret to someone in power and that person thought they had successfully hidden said secret. By making that person believe that the secret was public knowledge, this could cause them to act in a way they would not otherwise, breaking the equilibrium.

How, though, to represent this formally? The move made in shaking the tree is to make information public that was believed to be private. To represent this in logic we need a mechanism that represents public and private information. I will use the forward slash notation of Independence Friendly Logic, /, to mean ‘depends upon’ and the backslash, \, to mean ‘independent of.’

To represent private strategy Q, based on secret S, and not public to party Z we can say:

Secret Strategy) If, and only if, no one other than Y depends upon the Secret, then use Strategy Q
~(∃z/S) ~(Y = z) ⇔ Q

To initiate ‘shaking the tree’ is to introduce a new dependency:

Tree Shaking) there is someone other than Y that depends on S
(∃z/S) ~(Y = z)

Tree Shaking causes party Y to change away from Strategy Q, since Strategy Q was predicated upon no one other than Y knowing the secret, S. The change in strategy means that the players are no longer idling in equilibrium, which is the goal of shaking the tree.
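The conditional structure of the Secret Strategy can be sketched as a toy Python model. All of the names here (the strategy labels, the set of parties depending on S) are illustrative assumptions of mine, not part of the formalism:

```python
# Toy model: party Y plays secret strategy Q only while
# no one other than Y depends on the secret S.
def y_strategy(others_depending_on_S):
    # Secret Strategy: use Q iff no one else depends on S
    if not others_depending_on_S:
        return "Q"
    # Once the dependency is public, Q is abandoned.
    return "fallback"

# Equilibrium: the secret is private, Y idles with Q.
print(y_strategy(set()))    # Q

# Tree Shaking: a new dependency on S is introduced.
print(y_strategy({"Z"}))    # fallback
```

Introducing the dependency is the only move made, yet it is enough to change Y’s strategy and break the equilibrium.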

Posted in game theory, independence friendly logic, logic, philosophy.

An Introduction to the Game Theoretic Semantics view of Scientific Theory

What is a scientific theory?  In an abstract sense, a scientific theory is a group of statements about the world.  For instance the Special Theory of Relativity has, “The speed of light in a vacuum is invariant,” as a core statement, among others, about the world.  This statement is scientific because, in part, it is meant to hold in a ‘law-like’ fashion: it holds across time, space and observer.

The Popperian view is that we have scientific theories and we test those theories with experiments.  This means that given a scientific theory, a set of scientific statements about phenomena, we can deductively generate predictions.  These predictions are further statements about the world.  If our experiments yield results that run counter to what the theory predicts — the experiments generate statements that contradict the predictions, the theory did not hold across time, space or observer — then the theory eventually becomes falsified.  Else the theory may be considered ‘true’ (or at least not falsified) and it lives to fight another day.

The game theoretic semantics (GTS) view is that truth is the existence of a winning strategy in a game.  In terms of the philosophy of science, this means that our theories are strategic games (of imperfect information) played between ourselves and Nature.  Each statement of a theory is a description of a certain way the world is, or could be.  An experiment is a certain set of moves — a strategy for setting up the world in a certain way — that yields predicted situations according to the statements of the theory.  If our theory is true and an experiment is run, then this means that there is no way for Nature to do anything other than yield the predicted situation.  Said slightly differently: truth of a scientific theory is knowing a guaranteed strategy for obtaining a predicted Natural outcome by performing experiments.  If the strategy is executed and the predicted situations do not obtain, then this means that Nature has found a way around our theory, our strategy.  Hence there is no guaranteed strategy for obtaining those predictions and the theory is not true.

An example:

Take Galileo’s famous experiment of dropping masses off the Tower of Pisa.  Galileo’s theory was that objects of different mass fall at equal rates, opposing the older Aristotelian view that objects of greater mass fall faster.

According to the Popperian view Galileo inferred from his theory that if he dropped the two balls of different mass off the tower at the same time, they would hit the ground at the same time.  When he executed the experiment, the balls did hit the ground at the same time, falsifying the Aristotelian theory and lending support to his theory.

The GTS view is that dropping balls of unequal mass off a tower is a strategic game setup.  This experimental game setup is an instance of a strategy to force Nature to act in a certain way, namely to have the masses hit at the same time or not.  According to Galilean theory, when we are playing this game with Nature, Nature has no choice other than to force the two masses to hit the ground at the same time.  According to Aristotelian theory, when playing this game, Nature will force the more massive ball to hit the ground first.  History has shown that every time this game is played, the two masses hit the ground at the same time.  This means that there is a strategy to force Nature to act in the same way every time, that there is a ‘winning strategy’ for obtaining this outcome in this game with Nature.  Hence the Galilean theory is true: it got a win over the Aristotelian theory.
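The GTS reading of the tower experiment can be sketched as a small game in Python. This is only an illustration under my own assumptions: Nature is hard-coded to follow actual (idealized, airless) physics, and a theory is “true” when its prediction wins every play:

```python
# Toy GTS model: an experiment is a move in a game against Nature;
# a theory is "true" if its prediction wins every play of the game.
def nature_drop(mass_a, mass_b):
    # In this sketch Nature follows actual physics: fall time is
    # independent of mass (ignoring air resistance).
    return "same time"

def galilean_prediction(mass_a, mass_b):
    return "same time"

def aristotelian_prediction(mass_a, mass_b):
    return "heavier first" if mass_a != mass_b else "same time"

def has_winning_strategy(prediction, trials):
    # A winning strategy exists only if Nature yields the
    # predicted outcome on every play of the experiment game.
    return all(prediction(a, b) == nature_drop(a, b) for a, b in trials)

trials = [(1, 10), (2, 2), (5, 50)]
print(has_winning_strategy(galilean_prediction, trials))      # True
print(has_winning_strategy(aristotelian_prediction, trials))  # False
```

The Galilean theory has a winning strategy in this game; the Aristotelian theory does not, which is the GTS sense in which the former is true and the latter is not.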

Why you might want to consider doing things the GTS way:

GTS handles scientific practice in a relatively straightforward way.  Theories compete against Nature for results and against each other for explanatory power.  Everything is handled by the same underlying logic-game structure.

GTS is a powerful system.  It has application to  game theory, computer science, decision theory, communication and more.

If you are sympathetic to a Wittgensteinian language game view of the world, GTS is in the language game tradition.

More on GTS:

http://plato.stanford.edu/entries/logic-games/
https://en.wikipedia.org/wiki/Game_semantics

Posted in game theory, logic, philosophy, science.

Яandom Logic

If we try to represent tossing a coin or a die, or picking a card out of a deck at random, in logic, how should we do it?

Tossing a coin might look like:

Toss(coin) → (Heads or Tails)

Tossing a die might be:

Toss(die) → (1 or 2 or 3 or 4 or 5 or 6)

Picking a card:

Pick(52 card deck) → (1♣ or 2♣ or … or k♥)

This prompts the question: do these statements make sense? For instance, look what happens if we try to abstract:

∀x Toss(x)

such that ‘Toss’ represents a random selection of the given object.

But this is weird because Toss is a randomized function and x is not selected randomly in this formula. Perhaps if we added another variable, we could generate the right sort of function:

∀y ∃x Toss(yx)

Then x would be a function of y: we would select x with respect to y. But the problem remains that a toss involves randomness: treating x as a function of y is not random, because y is not random.

How can we represent randomness in logic?

As noted, functions alone will not work. Variables and interpreted objects cannot invoke randomness. Perhaps we can modify some part of our logic to accommodate randomness. The connectives for negation and conjunction haven’t anything to do with randomness either.

But, if we use the game theoretic interpretation of logic, then we can conceive of each quantifier as representing a player in a game. Players can be thought of as acting irrationally or randomly.

Therefore, let’s introduce a new quantifier: Я. Я is like the other quantifiers in that it instantiates a variable.

  1. Яx T(x)
  2. Tb

However, Я is out of our (or anyone’s) control. It instantiates variables when it is its turn (just like the other quantifiers), but it instantiates randomly. So we have three players: Abelard, Eloise and Random (or the Verifier, Falsifier and Randomizer).

But more is still needed. We need a random selection between specific options, be it between heads and tails, 1-6, cards, numbers, or anything else. One way of doing this would be to create a special domain just for the random choices. Я would only instantiate from this domain, and if there are multiple random selections, we will require multiple indexed domains.

Hence, given Di(Heads, Tails), Яix represents a coin flip, since Я randomly instantiates out of the domain containing only Heads and Tails.

(aside:
I prefer to use an artifact of Independence Friendly logic, the dependence indicator: a forward slash, /. The dependence indicator means that the quantifier only depends on those objects, variables, quantifiers or formulas specified. Hence

Яx/(Heads, Tails)

means that the variable x is randomly instantiated to Heads or Tails, since the only things that Яx is logically aware of are Heads and Tails. Therefore this too represents a coin flip, without having multiple domains.)
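The instantiation rule for Я can be sketched as a toy evaluator in Python. The function name and interface are my own illustration; the point is just that the random player’s choice is outside anyone’s control and restricted to the listed options:

```python
import random

# Sketch of the instantiation rule: Яx/(d1, ..., dn) instantiates
# x by a uniform choice among the listed options, made by the
# Randomizer rather than by Abelard or Eloise.
def ya(options, rng=random):
    return rng.choice(list(options))

# A coin flip: Яx/(Heads, Tails)
flip = ya(("Heads", "Tails"))
print(flip in ("Heads", "Tails"))  # True

# A die toss: Яx restricted to 1..6
toss = ya(range(1, 7))
print(1 <= toss <= 6)              # True
```

Each call corresponds to one move by the Randomizer in the semantic game: the outcome is guaranteed to lie in the restricted domain, but no player can influence which element is chosen.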

Now that we have an instantiation rule for Я we also need a negation rule for it. If some object is not selected at random, then it must have been individually selected. In this case the only other players that could have selected the object are ∀ and ∃. Hence the negation rule for Я is just like the negation rule for the other quantifiers: negating a quantifier means that a different player is responsible for instantiation of the variable. If neither player is responsible, it can be considered random: ¬Яx ↔ (∀x or ∃x). We can leave the basic negation rule for ∀ and ∃ the way it is.

Therefore, given the additions of the new quantifier and domain (or slash notation), we can represent randomness within logic.

———

See “Propositional Logics for Three” by Tulenheimo and Venema in Dialogues, Logics And Other Strange Things by Cedric Degremont (Editor) College Publications 2008, for a generalized framework for logics with 3 quantifiers. Since the above logic requires either indexed domains or dependence operators, Яandom Logic is a bit different, but it is a good discussion.

Posted in game theory, logic, science.

Rock Paper Scissors

Rock Paper Scissors is a game in which 2 players each choose one of three options: either rock, paper or scissors.  Then the players simultaneously reveal their choices.  Rock beats scissors but loses to paper (rock smashes scissors); Paper beats rock and loses to scissors (paper covers rock); Scissors beats paper but loses to rock (scissors cut paper).  This cyclical payoff scheme (Rock > Scissors, Scissors > Paper, Paper > Rock) can be represented by this rubric:

                     Child 2
             rock     paper    scissors
Child 1
   rock      0,0      -1,1     1,-1
   paper     1,-1     0,0      -1,1
   scissors  -1,1     1,-1     0,0
(ref: Shor, Mikhael, “Rock Paper Scissors,” Dictionary of Game Theory Terms, Game Theory .net,  <http://www.gametheory.net/dictionary/Games/RockPaperScissors.html>  Web accessed: 22 September 2010)

However, if we want to describe the game of Rock Paper Scissors – not just the payoff scheme – how are we to do it?

Ordinary logics have no mechanism for representing simultaneous play.  Therefore Rock Paper Scissors is problematic because there is no way to codify the simultaneous revelation of the players’ choices.

However, let’s treat the simultaneous revelation of the players’ choices as a device to prevent one player from knowing the choice of the other.  If one player were to know the choice of the other, then that player would always have a winning strategy by selecting the option that beats the opponent’s selection.  For example, if Player 1 knew (with absolute certainty) that Player 2 was going to play rock, then Player 1 would play paper, and similarly for the other options.  Since certain knowledge of the opponent’s play trivializes and ruins the game, it is this knowledge that must be prevented.

Knowledge – or lack thereof – of moves can be represented within certain logics.  Ordinarily all previous moves within logic are known, but if we declare certain moves to be independent from others, then those moves can be treated as unknown.  This can be done in Independence Friendly Logic, which allows for explicit dependence relations to be stated.

So, let’s assume our 2 players, Abelard (∀) and Eloise (∃) each decide which of the three options he or she will play out of the Domain {r, p, s} .  These decisions are made without knowledge of what the other has chosen, i.e. independently of each other.

∀x ∃y/∀x

This means that Abelard chooses a value for x first and then Eloise chooses a value for y.  The /∀x next to y means that the choice of y is made independently from, without knowledge of the value of, x.

R-P-S: ∀x ∃y/∀x (Vxy)

The decisions are then evaluated according to V, which is some encoding of the above rubric like this:

V: x=y → R-P-S &
x=r & y=s → T &
x=r & y=p → F &
x=p & y=r → T &
x=p & y=s → F &
x=s & y=p → T &
x=s & y=r → F

T means Abelard wins; F means Eloise wins.  R-P-S means play more Rock Paper Scissors!
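The evaluation rubric V can be sketched directly in Python. The encoding (a set of winning pairs) is my own; it implements the same conditions listed above:

```python
# Rubric V: evaluate Abelard's choice x against Eloise's choice y.
# "T" = Abelard wins, "F" = Eloise wins, "R-P-S" = play again.
BEATS = {("r", "s"), ("p", "r"), ("s", "p")}  # pairs where x beats y

def V(x, y):
    if x == y:
        return "R-P-S"   # tie: play more Rock Paper Scissors!
    return "T" if (x, y) in BEATS else "F"

print(V("r", "s"))  # T
print(V("r", "p"))  # F
print(V("p", "p"))  # R-P-S
```

Each clause of the rubric corresponds to one membership check against the cyclical payoff scheme.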

Johan van Benthem, Sujata Ghosh and Fenrong Liu put together a sophisticated and generalized logic for concurrent action:
http://www.illc.uva.nl/Publications/ResearchReports/PP-2007-26.text-Jun-2007.pdf

Posted in game theory, independence friendly logic, logic, philosophy.

Revision and Hypothesis Introduction

Say we have some theory that we represent with a formula of logic.  In part it looks like this:

[1] …(∃z) … Pz …

This says that at some point in the theory there is some object z that has property P.

After much hard work, we discover that the object z with property P can be described as the combination of two more fundamental objects w and v with properties R and S:

[2] …(∃z) … Pz … ⇒ …(∃w)(∃v) … (Rw & Sv)…

Now let’s say that in our theory, any object that had property P depended upon some other objects, x and y:

[3] …(∀x)(∀y)…(∃z) … Pz …

In our revised theory we know that objects w and v must somehow depend upon x and y, but there are many more possible dependence patterns that two different objects can have as compared to z alone.  Both w and v could depend upon x and y:

[4] …(∀x)(∀y)…(∃w)(∃v) … (Rw & Sv)…

However, let’s say that w depends on x but not y, and v depends on y but not x.  Depending on the rest of the formula, it may be possible to rejigger the order of the quantifiers to reflect this, but maybe not.  If we allow ourselves to declare dependencies and independencies, arbitrary patterns of dependence can be handled.  The forward slash means to ignore the dependency of the listed quantified variable:

[5] …(∀x)(∀y)…(∃w/∀y) (∃v/∀x) … (Rw & Sv)…
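One way to see the difference between [4] and [5] is via Skolem functions: classically, the witnesses for w and v may each depend on both x and y, whereas the slashes in [5] force w to be a function of x alone and v a function of y alone. A small Python illustration (the function bodies are arbitrary placeholders of mine, not part of the theory):

```python
# Skolem-function view of the two dependence patterns.

# Formula [4]: the witnesses for w and v may depend on both x and y.
def w4(x, y): return x + y
def v4(x, y): return x * y

# Formula [5]: the slashes restrict w to x alone and v to y alone;
# y is not even an argument of w's Skolem function.
def w5(x): return x + 1
def v5(y): return y * 2

# In [4] the witness for w can vary with y; in [5] it cannot.
print(w4(1, 2) != w4(1, 3))  # True
```

The slash notation is thus a way of writing down, inside the formula itself, which arguments each Skolem function is allowed to take.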

Besides the convenience and being able to represent arbitrary dependence structures, I think there is another benefit for this use of the slash notation:  theoretical continuity.  In formula [2] above, there is a double right arrow which I used to represent the change from z to w and v, and P to R and S.  However, I created this use of the double right arrow for this specific purpose;  there is no way within normal logic to represent such a change.  That is, there is no method to get from formula [3] to formula [4] or [5], even though there is supposed to be some sort of continuity between these formulas.

Insofar as the slash notation from Independence Friendly Logic allows us to drop in new quantified variables without restructuring the rest of the formula, we can use this process as a logical move like modus ponens (though, perhaps, not as truth preserving).  Tentatively I’ll call it ‘Hypothesis Introduction’:

[6]

  1. …(∀x)(∀y)…(∃z) … Pz …
  2. …(∀x)(∀y)…(∃w/∀y) (∃v/∀x) … (Rw & Sv)…      (HI [1])

The move from line one to line two changes the formula while providing a similar sort of continuity as used in deduction.

One potential application of this would be to Ramsey Sentences.  With the addition of Hypothesis Introduction, we can generalize the Ramsey Sentence into, if you will, a Ramsey Lineage, which would chart the changes of one Ramsey Sentence to another, one theory to another.

A second application, and what got me thinking about this in the first place, was to game theory.  When playing a game against an opponent, it is mostly best to assume that they are rational.  What happens when the opponent does something apparently irrational?  Either you can play as if they are irrational or you can ignore it and continue to play as if they hadn’t made such a move.  By using Hypothesis Introduction to introduce a revision into the game structure, however, you can create a scenario that might reflect an alternate game that your opponent might be playing.  In this way you can maintain your opponent’s rationality and explain the apparently irrational move as a rational move in a different game that is similar to the one you are playing.  This alternate game could be treated as a branch off the original.  The question would then be to discover who is playing the ‘real’ game – a question of information and research, not rationality.

Posted in game theory, independence friendly logic, logic, philosophy, science.

Monty Redux

In the Monty Hall Problem a contestant is given a choice between one of three doors, with a fabulous prize behind only one door. After the initial door is selected the host, Monty Hall, opens one of the other doors that does not reveal a prize. Then the contestant is given the option to switch his or her choice to the remaining door, or stick with the original selection. The question is whether it is better to stick or switch.

The answer is that it is better to switch because the probability of winning after switching is two out of three, whereas sticking with the original selection leaves the contestant with the original winning probability of one out of three. Why?

The trick to understanding why this occurs is to view the situation not from the contestant’s viewpoint, but from Monty Hall’s. At the outset, from Monty’s point of view, the contestant has a one out of three chance of guessing the correct door. In the likely situation (two out of three) that the contestant chose wrongly, Monty then has to know where the prize is among the two remaining doors in order to open a door that does not reveal the prize. So Monty opens a door not revealing the prize and asks the contestant whether he or she would like to switch or not.

However, the contestant knows that in the likely (two out of three) situation that the initial choice was wrong, Monty had to know where the prize was in order to open the door that did not contain the prize. Since the contestant knows that Monty has to know where the prize is to make the correct choice, the contestant can (in this likely case) place him or herself in Monty’s shoes. At this point Monty knows that the remaining door is the one that contains the prize, and hence the contestant should switch.

If we consider the unlikely situation in which the contestant initially chose the door with the prize behind it, then this line of reasoning will not work. Imagine that Monty forgets the location of the prize every time the contestant guesses correctly. In this situation he can still open either of the remaining doors without ever ruining the game. From his perspective the location of the prize is unrelated to his actions; it played no part in his decision to open one door or another (he merely chose a door the contestant hadn’t).

So, in the one out of three case where the contestant initially selected the correct door, there is no way to deduce whether switching is beneficial based upon placing oneself in Monty’s shoes:  the situation where Monty has forgotten the prize’s location is indistinguishable from a situation in which he has not forgotten. Without any way to further analyze the situation and tilt the odds to over one out of three, the contestant should always assume that he or she is in the previous, more likely, situation and take the opportunity to switch.1
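The two-thirds advantage of switching is easy to confirm by simulation. Here is a minimal Monte Carlo sketch in Python (door labels and structure are mine):

```python
import random

# One round of the Monty Hall game; returns True on a win.
def play(switch, rng):
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    choice = rng.choice(doors)
    # Monty opens a door that is neither the chosen door nor the prize.
    opened = rng.choice([d for d in doors if d != choice and d != prize])
    if switch:
        # Move to the one remaining unopened, unchosen door.
        choice = next(d for d in doors if d != choice and d != opened)
    return choice == prize

rng = random.Random(0)  # seeded for reproducibility
n = 100_000
switch_wins = sum(play(True, rng) for _ in range(n)) / n
stick_wins = sum(play(False, rng) for _ in range(n)) / n
print(switch_wins)  # ≈ 2/3
print(stick_wins)   # ≈ 1/3
```

The simulated frequencies land close to 2/3 for switching and 1/3 for sticking, matching the analysis above.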



1Imagine that the contestant has a guardian angel that will let the game run its course if the contestant switches doors, but will change the location of the prize such that if the contestant sticks with the original door the angel will make sure that the contestant wins four out of five times. Then the probability of winning while switching will stay at 2/3 but the probability of winning while sticking will be 4/5. If the contestant had some way of divining that this was happening, this would be a case in which further analysis would be of benefit.



Posted in epistemology, game theory, logic, philosophy.

Argument Structure

Basic argument structure goes like this:

  1. Premise 1
  2. Premise 2
  ———————–
  3. Conclusion

Knowing how to argue is great, except when someone you disagree with is proving things you don’t like.  In that case you have to know how to break your opponent’s argument or provide an argument that they cannot break.

The first thing most people do to break an argument is to attack the premises (assuming no fallacies are present). If you can cast doubt on the truth of premise 1, then your opponent never gets to the conclusion in line 3, and you avoid accepting it.

Personally I think this sucks.  I hate arguing about the truth of premises because many times people have no idea what the truth is and hold unbelievably stupid positions.

G. E. Moore argued that if the conclusion is more certain than the premises, then you can flip the argument:

  1. Conclusion
  2. Premise 2
  ———————–
  3. Premise 1

Instead of arguing about the truth of the premises, this strategy pits the premises against the conclusion by arguing that while the premises imply the conclusion, the conclusion also implies the premises.  Hence there is a question about which should be used to prove the other, and, as long as this question remains, nothing is proved.

This leads to a kind of argument holism.  An argument must first be judged on the relative certainties of its premises and conclusion before the premises can even be considered to be used to derive the conclusion.

Personally I think this is great.  It is possible to just ignore whole arguments on the grounds that the person arguing hasn’t taken into account the relative certainties involved.  If you haven’t ensured that your premises are more certain than your conclusion, then you can’t expect anyone to accept your conclusion based upon those premises.

However this leads to a nasty problem.  If all arguments are subject to this sort of holism, then arguments can be reduced to their conclusions: if the whole argument is of equal certainty, i.e. the conclusion is just as certain as a premise, then there is no reason to bother with the premises.  If we just deal with conclusions, and everyone is certain of their own conclusions, then arguing is useless.

(In practice, of course, only mostly useless.  You can (try to) undermine someone’s argument by finding something more certain and incompatible with the conclusion in question (premises are always a good place to start looking).  For better or worse, though, even when people’s premises have been destroyed, all too often they still are certain of their conclusions.)

Moreover, if everyone is certain of their conclusions, then no conclusion is any more certain than another.  If everything has equal certainty, then nothing is certain.

How to get around this problem of equal certainty?

First let me mention that this is a strictly philosophical problem: in daily life we have greater certainty in some things than we do in others.  For instance I trust certain people, and hence if they say something is true then I will be more certain of its truth than if someone else were to say the same thing.  So fair warning: what comes next is a philosophical solution to a philosophical problem.

If something and its opposite are equally certain, then, generally, there is nothing more that we can know about it.  For example, if all we know is that it is either raining or not raining, then we really don’t know much about the weather.  This applies in all cases except for paradoxes.  In a paradox, something and its opposite imply each other.  Hence, in a paradox, there is only one thing, not a thing and its negation.

Most of the time paradoxes only show us things that cannot exist.  However, if what caused the paradox was the negation of something, then we can have certainty in that thing: its negation cannot exist on pain of paradox.

Therefore, to provide a rock-solid foundation for an argument, one must appeal to a paradox generated from the negation of the thing to be used as a premise.
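This argument form is, at bottom, a classical reductio ad absurdum, and it can be written out formally. Here is a minimal sketch in Lean (the theorem name is mine; `Classical.byContradiction` is the standard classical rule): if the negation of P generates a paradox, i.e. proves False, then P itself follows.

```lean
-- If assuming ¬P leads to a paradox (a proof of False),
-- then P is established: classical proof by contradiction.
theorem certainty_from_paradox (P : Prop) (h : ¬P → False) : P :=
  Classical.byContradiction h
```

Note that this step is classical: intuitionistically, a paradox from ¬P only yields ¬¬P, and the further step to P is exactly double-negation elimination.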

As far as I can tell, this is the only argument structure that yields absolutely certain results.  All other argument styles are subject to questions about the truth of their premises and the legitimacy of using those premises (even if true) to prove a particular conclusion.

Posted in argumentation, epistemology, logic, philosophy. Tagged with , , .

The Deal with ‘Deal or No Deal’

I just saw the hit game show ‘Deal or No Deal‘.  It wasn’t the first time, but this episode had a contestant with folksiness to rival Palin, so I was entertained and kept watching.

But is there any gamesmanship to the ‘Deal or No Deal’ gameshow?  The short answer is: No.

The show begins with the contestant choosing a briefcase containing a card that represents a monetary amount.  The case is chosen from a group of 26 cases, with the amounts ranging from a penny to a million dollars.  Recently, to up the suspense, the show has replaced some of the lower amounts with additional million-dollar cases.

The show I saw had 8 of the 26 cases carrying the million dollar value.  So when the contestant makes the initial selection, there is a slightly less than 1/3 chance of picking a million dollar case.  This case is then set aside.

The contestant then proceeds to pick other cases which are immediately opened, revealing the monetary amount they represent.  These cases are removed from the pool of cases.  After a few cases have been removed, the contestant is offered a sum of money to stop playing.  If many of the cases that have been removed were low in value, i.e. most of the million (and other high value) cases remain, then the offer will be closer to the high value cases.  If many of the high value cases have been removed, then the offer will be closer to the lower values.  Usually the value is somewhere in the middle.

These offers are made periodically when there are many cases remaining and are made after every case for the last few.  If you go all the way to the end, then you receive whatever value is in the case you initially selected.

If winning the big prize is the goal, however, all the offers are completely irrelevant.  At the outset the case the contestant chooses has an 8 in 26 (slightly less than 1/3) chance of containing the big prize, and this doesn’t change throughout the game.  Let me explain why:

The remaining cases preserve approximately the same ratio of million-dollar to non-million-dollar values, and the contestant opens them at random.  Therefore most of the time (logically speaking, and whenever I watched) this ratio stays roughly constant all the way to the end of the game.  In the episode I just saw, 2 of the last 6 cases were million-dollar cases.

Of course the possibility exists that the contestant will choose all of the lower value cases such that only million dollar cases remain and hence the case he or she initially chose will necessarily be a million dollar case.

However, imagine this analogous situation.  Try to pick all the cards other than the Jacks, Queens, Kings and Aces out of a shuffled deck without looking.  What will happen is that a selection of cards will be chosen randomly, irrespective of value, leaving approximately the same ratio of face cards to non-face cards remaining (go try it if you don’t believe me).  The chances of picking only the low-value cards are very small.  ‘Deal or No Deal’ has been on for years here in the USA and this has never happened: the recent, and only, million-dollar winner still had to go down to the last remaining case.  So this part of the game has little ultimate impact on knowing whether or not you have selected a million-dollar case.
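If you would rather not shuffle a real deck, the experiment is easy to simulate. The sketch below (function names are mine) draws 20 cards blindly from a 52-card deck with 16 face cards (Jacks, Queens, Kings, Aces) and estimates how often the draw avoids every face card:

```python
import random

def blind_draw_avoids_faces(n_draws=20, deck_size=52, n_face=16):
    """One trial: shuffle the deck, draw n_draws cards blindly,
    and return True only if none of them is a face card."""
    deck = [True] * n_face + [False] * (deck_size - n_face)  # True = face card
    random.shuffle(deck)
    return not any(deck[:n_draws])

def estimate_avoid_probability(trials=100_000):
    """Monte Carlo estimate of the chance of drawing only non-face cards."""
    hits = sum(blind_draw_avoids_faces() for _ in range(trials))
    return hits / trials
```

Running this, the estimate comes out vanishingly small (the exact probability, C(36,20)/C(52,20), is on the order of a few in a hundred thousand), which is the point: a blind draw almost never filters the deck by value.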

Secondly, since the cases are opened randomly during the show, no Monty Hall-like insight can be gained as to whether or not a winning case was initially selected.  Therefore the initial probability of roughly 1/3 remains unchanged throughout the show, and all the song and dance of selecting and opening the cases is a red herring (though it is top-notch song and dance, provided by Mr. H. Mandel and models).
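The claim is easy to check by simulation. This sketch (names are mine, using the 8-million-dollar-case setup from the episode described above) plays the game many times, opening cases at random, and measures how often the case set aside at the start turns out to hold a million dollars:

```python
import random

N_CASES, N_MILLION = 26, 8  # the episode described above: 8 of 26 cases held $1M

def initial_pick_is_million(cases_opened=20):
    """One game: set aside a random case, open `cases_opened` others at
    random, and report whether the set-aside case was a million-dollar one."""
    cases = [True] * N_MILLION + [False] * (N_CASES - N_MILLION)
    random.shuffle(cases)
    chosen = cases.pop()           # contestant's case, set aside at the start
    for _ in range(cases_opened):  # random reveals, no Monty Hall filtering
        cases.pop()                # deck is shuffled, so popping is random
    return chosen

def estimate_win_probability(trials=200_000):
    wins = sum(initial_pick_is_million() for _ in range(trials))
    return wins / trials
```

The estimate hovers around 8/26 ≈ 0.31 no matter how many cases are opened along the way, exactly as the argument above predicts.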

This leaves the contestant in the position of deciding, without any new information, whether or not to accept the offer made to stop playing partway through the game.  Since the ratio of remaining monetary values stays somewhat constant, the offer made to buy the contestant out of playing should remain somewhat stable for most of the game.  It appears, however, according to Wikipedia, that the initial offers are kept artificially low to build suspense, but by the end the offers are where the mathematicians say they should be.
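For what it’s worth, “where the mathematicians say they should be” is just the expected value of the amounts still in play: the mean of the unopened cases. A one-line sketch (function name is mine):

```python
def fair_offer(remaining_amounts):
    """The statistically fair buyout: the mean of the amounts still in play."""
    return sum(remaining_amounts) / len(remaining_amounts)

# e.g. with a penny, $1,000, and $1,000,000 left, the fair offer is their mean
offer = fair_offer([0.01, 1_000, 1_000_000])
```

Early-game offers on the show sit well below this figure, which is what keeps contestants playing.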

The decision then comes down to how badly the contestant wants or needs the money.  If the money offered to stop playing becomes large enough to make, in the contestant’s mind, a significant difference, he or she will likely take it rather than accept the roughly 2/3 chance of winning significantly less.  This is what happened during the episode today: after it was announced late in the game that a sponsor would make a matching donation to a national charity the lady supported, she became too afraid of losing the large amount of money already on offer, even though she had said she wanted to go till the end.

In the end, the deal with ‘Deal or No Deal’ is that it is a great deal for those who get to play.  However, it is not much of a game.  The only trick is to get yourself on the show and after that how much you take home is up to luck.

Posted in game theory, logic. Tagged with , .