Category Archives: logic

Punny Logic

Update 12 Feb: This post has been expanded upon and, after submission, accepted for publication in Analysis, published by Oxford University Press. View the final version here.

[draft]

It is hard to explain puns to kleptomaniacs because they take things literally.

On the surface, this statement is a statement of logic, with a premise and conclusion.

Given the premise:

Kleptomaniacs take things literally.

We may deduce the conclusion:

It is hard to explain puns to kleptomaniacs.

Now, whether the conclusion strictly follows from the premise is beside the point: it is a pun, and meant to be funny. However, as a pun, it still has to make some logical sense. If it didn’t make any sense, it wouldn’t, and couldn’t, be funny either. While nonsense can be amusing, it isn’t punny.

What is the sense in which the conclusion logically follows from the premise then, and how does this relate to the pun?

Puns play off ambiguity in the meaning of a word or phrase. In this case the ambiguity has to do with the meaning of ‘to take things literally’. It can mean to steal, or it can mean to use only the simplest, most common definitions of terms.

In the first meaning, by definition, kleptomaniacs steal, i.e. they literally take things.

So then “take things literally” is true.

In the second meaning, by deduction, since puns play off multiple meanings of things, it is hard to explain a pun to someone who only uses the single, most common definition of a term. That is, if they take things literally, they won’t recognize the multiple meanings required to understand a pun.

So if someone “takes things literally” it is true that it is hard to explain puns to them.

Therefore, between the two meanings, we can informally derive the statement: it is hard to explain puns to kleptomaniacs because they take things literally.

However, if we wanted to write this out in a formal logical language, then we would need a formal way to represent the two meanings of the single phrase.

Classically, there is no way to give a proposition multiple meanings. Whatever a proposition is defined as, it stays that way. A can’t be defined as B and then not defined as B: (A=B & A≠B) is a contradiction and to be avoided classically. But let’s start with a classical formulation:

Let:

TTL1 mean to Take Things Literally, in the 1st sense: to steal

TTL2 mean to Take Things Literally, in the 2nd sense: to use the most common definitions of terms.

Then

  1. ∀x [ Kx → TTL1x ]
    For anyone who is a Kleptomaniac, Then they take things literally (steal)
  2. ∀y[ TTL2y → Py ]
    For anyone who takes things literally (definitionally), Then it is hard to explain puns to them

What we want, however, is closer to:

  3. ∀z [[ Kz → TTLz] → Pz ]
    For anyone who is a Kleptomaniac, Then they take things literally, Then it is hard to explain puns to them

with only one sense of TTL, but two meanings.

Since TTL1 ≠ TTL2, we can’t derive (3) from (1) and (2), as is. And if TTL1 = TTL2, then we would have (1) A→B, and (2) B→C, while trying to prove (3) A→B→C, which logically follows. However, there would no longer be a pun if there was only one meaning of TTL.
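Reading the one-meaning case as the chain K → TTL → P, both halves of this observation can be brute-forced over truth tables (the propositional stand-ins below are illustrative):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# With a single meaning of TTL, the premises K -> TTL and TTL -> P
# classically entail K -> P (checked over every truth assignment).
entailed = all(
    implies(k, p)
    for k, ttl, p in product([True, False], repeat=3)
    if implies(k, ttl) and implies(ttl, p)
)

# With two distinct meanings, K -> TTL1 and TTL2 -> P do not entail
# K -> P: the chain breaks, exactly as described above.
broken = all(
    implies(k, p)
    for k, ttl1, ttl2, p in product([True, False], repeat=4)
    if implies(k, ttl1) and implies(ttl2, p)
)

print(entailed, broken)  # True False
```

The countermodel in the two-meaning case is the pun itself: someone can steal (TTL1) without taking things definitionally (TTL2).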

What is needed is to be able to recompose our understanding of ‘to take things literally’ in a situation-aware way. We need the right meaning of TTL to apply at the right time: specifically, Meaning 1 in the first part and Meaning 2 in the latter.

Intuitively, we want something like this, with the scope corresponding to the situation:

  4. ∀z [ Kz → { TTLz ]1 → Pz }2

In this formula, let the square brackets [] have the first meaning of TTL apply, while the curly braces {} use the second meaning. Only the middle — TTL — does double duty with both meanings.

Achieving this customized scope can be done by using Independence Friendly logic. IF logic allows for fine-grained scope allocation.

So let:

S mean to steal.

D mean to take things definitionally.

Then:

  5. ∀x ∀y ∃u/∀x ∃v/∀y [ Kx → ( x=u & y=v & Su & Dv → TTLvu ) → Py ]
    If anyone is a kleptomaniac then there is someone identical to them who steals… and if there is someone who takes things definitionally then there is someone identical to them for whom it is hard to explain puns… and the person who steals and the person who takes things definitionally both Take Things Literally.

The scope gymnastics are being performed by the slash operators at the start and the equality symbols in the middle part of the equation. What they are doing is specifying the correct meanings — the correct dependencies — to go with the correct senses: Stealing pairs with Kleptomania and taking things Definitionally pairs with being bad at Puns, while both pairs also mean Taking Things Literally. With both pairs meaning TTL, and each pair being composed independently, Equation (5) therefore provides a formalization of the original pun.

Discussion

Finding new applications for existing logical systems provides a foundation for further research. As we expand the range of topics subject to logical analysis, cross-pollination between these subjects becomes possible.

For instance, using custom dependencies to associate multiple meanings to a single term is not only useful in describing puns. Scientific entities are often the subjects of competing hypotheses. The different hypotheses give different meanings — different properties, relations and dependencies — to the scientific objects under study. Logically parsing how the different hypotheses explain the world using the same terms can help us analyze the contradictions and incommensurabilities between theories.

On the other hand, while this article may have forever ruined the above pun for you (and me), it does potentially give insight into what humans find funny. Classically, risibility, having the ability to laugh, has been associated with humans and rationality. Analyzing this philosophical tradition with the new logical techniques will hopefully provide existential insight into the human condition.

Posted in independence friendly logic, logic.

Shaking the Tree

Life often results in situations such that no strategy suggests any further moves. We just don’t know what to do next. In a game of perfect information, where each player knows all the previous moves, this can signal stalemate. Take chess: given both sides know everything that has transpired and have no reason to believe that the opponent will make a mistake, there can come a time when both sides will realize that there are no winning strategies for either player. A draw is then agreed upon.

The situation is not as simple in games of incomplete information. Let’s assume some information is private, that some moves in the game are only known to a limited number of players. For instance, imagine you take over a game of chess in the middle of a match. The previous moves would be known to your opponent and the absent player, but not to you. Hence you do not know the strategies used to arrive at that point in the game, and your opponent knows that you do not know.

Assume we are in some such situation, where we do not know all the previous moves and have no further strategic moves to make. This is to say we are waiting, idling, or otherwise biding our time until something of significance happens. Formally, we are at an equilibrium.

A strategy to get out of this equilibrium is to “shake the tree” to see what “falls out”. This involves making information public that was thought to be private. For instance, say you knew a damaging secret to someone in power and that person thought they had successfully hidden said secret. By making that person believe that the secret was public knowledge, this could cause them to act in a way they would not otherwise, breaking the equilibrium.

How, though, to represent this formally? The move made in shaking the tree is to make information public that was believed to be private. To represent this in logic we need a mechanism that represents public and private information. I will use the slash notation of Independence Friendly Logic: the backslash, \, to mean ‘depends upon’ and the forward slash, /, to mean ‘independent of.’

To represent private strategy Q, based on secret S, and not public to party Z we can say:

Secret Strategy) If, and only if, no one other than Y depends upon the Secret, then use Strategy Q
(∀Y\S) (∃z/S) ~(Y = z) ⇔ Q

To initiate ‘shaking the tree’ would be to introduce a new dependency:

Tree Shaking) there is someone other than Y that depends on S
(∃z\S) ~(Y = z)

Tree Shaking causes party Y to change away from Strategy Q, since Strategy Q was predicated upon no one other than Y knowing the secret, S. The change in strategy means that the players are no longer idling in equilibrium, which is the goal of shaking the tree.
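A minimal sketch of the idea (the names and data structures are illustrative, not a standard encoding): Y's strategy is a function of Y's beliefs about the secret's privacy, so making Y believe the secret is public changes the strategy and breaks the equilibrium.

```python
# Toy model of 'shaking the tree': party Y uses Strategy Q only
# while Y believes the secret S is private to Y.

def strategy(believes_secret_private):
    """Y's strategy as a function of Y's beliefs about S."""
    return "Q" if believes_secret_private else "new strategy"

beliefs = {"secret_private": True}
print(strategy(beliefs["secret_private"]))  # Q  (equilibrium: Y idles on Q)

# Tree shaking: make Y believe someone other than Y now depends on S.
beliefs["secret_private"] = False
print(strategy(beliefs["secret_private"]))  # new strategy (equilibrium broken)
```

Note that the secret need not actually be public: only Y's belief about the dependency has to change.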

Posted in game theory, independence friendly logic, logic, philosophy.

Cynic Argumentation

Many arguments are called ‘cynical,’ but is there anything that is common to them? Is there a general form of cynical argument?

One type of cynical argument is a kind of reductio ad absurdum, a proof by contradiction, to discredit a premise. The first step is to take the premise and associate it with some worldview.

  1. Assume P. (premise)
  2. P holds under worldviews W.  (Cynical Generalization)

Then, the cynic discredits those worldviews.

  3. Worldviews W are not the sort of views we want.    (ethical, logical or other valuation)
  4. Therefore the premise P is rejected because it leads to absurd consequences.  (Contradiction 2, 3)

What is unique here is the use of worldviews. The cynic generalizes from the premise to associated worldviews. Instead of finding something wrong with the premise itself, the cynic objects to any line of thought that leads to the premise.

Therefore, the criticism mounted here is existential: The cynic objects to people’s way of life, their existences. In doing so, the cynic changes the standards of evaluation. Though the premise may be unassailable on its own, when it is placed in the wider context of life, it no longer remains innocent or safe. By focusing the argument in this way, the premise can be seen as a symptom of affliction, an unwanted life—an absurdity.

— — — —

I find this argumentation style particularly interesting because of the Cynical Generalization step. The generalization is something like a modal one. However, it is not a generalization to possible worlds, but to possible lives. The cynic considers all possible lives that include affirming the premise and asks whether it is possible or desirable to live those lives.

Since we regularly reject ways of life that we feel are not for ourselves, this argument style cannot be dismissed as flippant. Moreover, it is an extremely powerful argument: as historical cynics have shown, if you are willing to forgo the trappings of society, you are freer to reject its laws and conclusions.

Posted in argumentation, logic, philosophy.

An Introduction to the Game Theoretic Semantics view of Scientific Theory

What is a scientific theory?  In an abstract sense, a scientific theory is a group of statements about the world.  For instance the Special Theory of Relativity has, “The speed of light in a vacuum is invariant,” as a core statement, among others, about the world.  This statement is scientific because, in part, it is meant to hold in a ‘law-like’ fashion: it holds across time, space and observer.

The Popperian view is that we have scientific theories and we test those theories with experiments.  This means that given a scientific theory, a set of scientific statements about phenomena, we can deductively generate predictions.  These predictions are further statements about the world.  If our experiments yield results that run counter to what the theory predicts — the experiments generate statements that contradict the predictions, the theory did not hold across time, space or observer — then the theory eventually becomes falsified.  Else the theory may be considered ‘true’ (or at least not falsified) and it lives to fight another day.

The game theoretic semantics (GTS) view is that truth is the existence of a winning strategy in a game.  In terms of the philosophy of science, this means that our theories are strategic games (of imperfect information) played between ourselves and Nature.  Each statement of a theory is a description of a certain way the world is, or could be.  An experiment is a certain set of moves — a strategy for setting up the world in a certain way — that yields predicted situations according to the statements of the theory.  If our theory is true and an experiment is run, then this means that there is no way for Nature to do anything other than yield the predicted situation.  Said slightly differently: truth of a scientific theory is knowing a guaranteed strategy for obtaining a predicted Natural outcome by performing experiments.  If the strategy is executed and the predicted situations do not obtain, then this means that Nature has found a way around our theory, our strategy.  Hence there is no guaranteed strategy for obtaining those predictions and the theory is not true.

An example:

Take Galileo’s famous experiment of dropping masses off the Tower of Pisa.  Galileo’s theory was that objects of different mass fall at equal rates, opposing the older Aristotelian view that objects of greater mass fall faster.

According to the Popperian view Galileo inferred from his theory that if he dropped the two balls of different mass off the tower at the same time, they would hit the ground at the same time.  When he executed the experiment, the balls did hit the ground at the same time, falsifying the Aristotelian theory and lending support to his theory.

The GTS view is that dropping balls of unequal mass off a tower is a strategic game setup.  This experimental game setup is an instance of a strategy to force Nature to act in a certain way, namely to have the masses hit at the same time or not.  According to Galilean theory, when we are playing this game with Nature, Nature has no choice other than to force the two masses to hit the ground at the same time.  According to Aristotelian theory, when playing this game, Nature will force the more massive ball to hit the ground first.  History has shown that every time this game is played, the two masses hit the ground at the same time.  This means that there is a strategy to force Nature to act in the same way every time, that there is a ‘winning strategy’ for obtaining this outcome in this game with Nature.  Hence the Galilean theory is true: it got a win over the Aristotelian theory.
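The GTS reading of this example can be sketched in a few lines (the physics and the numbers here are illustrative stand-ins, not anything from the post): the theory supplies a predicted outcome, Nature's 'moves' are the mass pairs it may confront us with, and the theory is true just in case no move of Nature's breaks the prediction.

```python
# Toy GTS model of the Tower of Pisa game.

def fall_time(mass, height=10.0, g=9.8):
    # In Galilean kinematics the mass cancels out of the fall time.
    return (2 * height / g) ** 0.5

def galilean_prediction(m1, m2):
    """The theory's predicted situation: equal masses hit together."""
    return fall_time(m1) == fall_time(m2)

# Nature's available 'moves': any pair of unequal masses it can deal us.
natures_moves = [(1.0, 10.0), (2.0, 5.0), (0.5, 100.0)]

# A winning strategy exists iff no move of Nature falsifies the prediction.
winning_strategy = all(galilean_prediction(m1, m2) for m1, m2 in natures_moves)
print(winning_strategy)  # True: Nature has no counter-move in this model
```

An Aristotelian `fall_time` that increased with mass would hand Nature a counter-move on every line of `natures_moves`, i.e. no winning strategy, i.e. the theory is not true.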

Why you might want to consider doing things the GTS way:

GTS handles scientific practice in a relatively straightforward way.  Theories compete against Nature for results and against each other for explanatory power.  Everything is handled by the same underlying logic-game structure.

GTS is a powerful system.  It has application to  game theory, computer science, decision theory, communication and more.

If you are sympathetic to a Wittgensteinian language game view of the world, GTS is in the language game tradition.

More on GTS:

http://plato.stanford.edu/entries/logic-games/
https://en.wikipedia.org/wiki/Game_semantics

Posted in game theory, logic, philosophy, science.

EIFL (Domainless Logic)

I saw this post by Mark Lance over at New APPS and he brought up one of the issues that I have recently been concerned with: What is a logical domain?  He said:

So our ignorance of our domain has implications for which sentences are true.  And if a sentence is true under one interpretation and false under another, it has different meanings under them.  And if we don’t know which of these interpretations we intend, then we don’t know what we mean.

I am inclined to think that this is a really serious issue…

When we don’t know what we, ourselves, mean, I regard this as THE_PHILOSOPHICAL_BAD, the place you never want to be in, the position where you can’t even speak.  Any issue that generates this sort of problem I regard as a Major Problem of Philosophy — philosophy in general, not just of its particular subject.

A little over a year ago I was trying to integrate probability and logic in a new way.  I developed indexed domains in order that different quantifications ranged over different values.  But then I said:

(aside:
I prefer to use an artifact of Independence Friendly logic, the dependence indicator: a forward slash, /. The dependence indicator means that the quantifier only depends on those objects, variables, quantifiers or formulas specified. Hence

Яx/(Heads, Tails)

means that the variable x is randomly instantiated to Heads or Tails, since the only things that Яx is logically aware of are Heads and Tails. Therefore this too represents a coin flip, without having multiple domains.)

I used the dependence slash to indicate the exact domain that a specific quantification ranged over.  This localized the domain to the quantifier.  About a week after publishing this I realized that the structure of this pseudo-domain ought to be logically structured: (Heads, Tails) became (Heads OR Tails).  The logical or mathematical domain, as an independent structure, can therefore be completely done away with.  Instead a pseudo-domain must be specified by a set of logical or mathematical statements given by a dependence (or independence) relation attached to every quantifier.

For example:

∀x/((a or b or c) & (p → q))…

This means that instantiating x depends upon the individuals a, b or c, that is, x can only be a, b or c, and it also can only be instantiated if (p → q) already has a truth value.  If  ((p → q) → d) was in the pseudo-domain, then x could be instantiated to d if (p → q) was true; if ¬d was implied, then it would be impossible to instantiate x to d, even if d was implied in some other part of the pseudo-domain.  Hence the pseudo-domain is the result of a logical process.
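As a rough sketch of how such a pseudo-domain might be evaluated (the encoding below is my own illustration, not a standard one): each candidate individual is guarded by the statements that license it, and instantiation is possible only once those statements have come out true.

```python
# Toy pseudo-domain evaluator for a quantifier's dependence set.

def allowed_instantiations(pseudo_domain, valuation):
    """pseudo_domain: list of (condition, individual) pairs; a None
    condition licenses the individual unconditionally. valuation maps
    formula names to True/False, or None if not yet settled."""
    allowed = []
    for condition, individual in pseudo_domain:
        if condition is None or valuation.get(condition) is True:
            allowed.append(individual)
    return allowed

# Pseudo-domain for ∀x/((a or b or c) & ((p → q) → d)):
pseudo_domain = [(None, "a"), (None, "b"), (None, "c"), ("p→q", "d")]

print(allowed_instantiations(pseudo_domain, {"p→q": None}))  # ['a', 'b', 'c']
print(allowed_instantiations(pseudo_domain, {"p→q": True}))  # ['a', 'b', 'c', 'd']
```

The point of the sketch is that the 'domain' is computed from logical statements at instantiation time, rather than existing as an independent structure.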

The benefit of this approach is that it better represents the changing state of epistemic access that a logical game player has at different times.  You can have a general domain for things that exist across all game players and times that would be added to all the quantifier dependencies (Platonism, if you will), but localized pseudo-domains for how the situation changes relative to each individual quantification.

Moreover, the domain has become part of the logical argument structure and does not have an independent existence, meaning fewer ontological denizens.  And, to answer the main question of this post, every domain is completely specified, both in content and structure.

I’m inclined to call this logic Domainless Independence Friendly logic, or DIF logic, but I really also like EIFL, like the French Tower: Epistemic Independence Friendly Logic.  Calling this logic epistemic emphasizes the relative epistemic access each player has during the logical game that comes with the elimination of the logical domain.

Posted in epistemology, game theory, independence friendly logic, logic, philosophy.

The Paradox of Unreasonability

“You’re being unreasonable!”

One or more of you may have had this directed at you. But what does the speaker mean by it?

Presumably the speaker believes that the listener is not acting according to some given standard. However, if the speaker had an argument to that effect, the speaker should’ve presented it. Hence, all the above statement means is that the speaker has run out of arguments and has resorted to name-calling: being unreasonable is another way of saying crazy.

Now, though, the situation has reversed itself. It is not the listener that has acted unreasonably, but the speaker. Without an argument that concludes that the listener is being unreasonable, then it is not the listener that is being unreasonable, but the speaker. The speaker is name-calling, when, by the speaker’s own standards, an argument is required. For what else is reasonable but to present an argument? So, by saying that the listener is being unreasonable, in essence the speaker is declaring themself unreasonable.

But, yet again, the situation reverses itself. If a person has run out of arguments, and makes a statement to that effect, then he or she is being perfectly reasonable. This returns us to the beginning! Therefore, by making a claim about someone else being unreasonable, you paradoxically show that you yourself are and are not reasonable, such that if you are, then you are not, and if you are not, then you are.

Posted in argumentation, logic, philosophy.

Яandom Logic

If we try to represent tossing a coin or a die, or picking a card out of a deck at random, in logic, how should we do it?

Tossing a coin might look like:

Toss(coin) → (Heads or Tails)

Tossing a die might be:

Toss(die) → (1 or 2 or 3 or 4 or 5 or 6)

Picking a card:

Pick(52 card deck) → (1♣ or 2♣ or … or k♥)

This raises the question: do these statements make sense? For instance, look what happens if we try to abstract:

∀x Toss(x)

such that ‘Toss’ represents a random selection of the given object.

But this is weird because Toss is a randomized function and x is not selected randomly in this formula. Perhaps if we added another variable, we could generate the right sort of function:

∀y ∃x Toss(yx)

Then x would be a function of y: we would select x with respect to y. The problem is still that a Toss involves randomness: treating x as a function of y introduces no randomness, since y itself is not chosen randomly.

How can we represent randomness in logic?

As noted, functions alone will not work. Variables and interpreted objects cannot invoke randomness. Perhaps we can modify some part of our logic to accommodate randomness. The connectives for negation and conjunction haven’t anything to do with randomness either.

But, if we use the game theoretic interpretation of logic, then we can conceive of each quantifier as representing a player in a game. Players can be thought of as acting irrationally or randomly.

Therefore, let’s introduce a new quantifier: Я. Я is like the other quantifiers in that it instantiates a variable.

  1. Яx T(x)
  2. Tb

However, Я is out of our (or anyone’s) control. It instantiates variables when it is its turn (just like the other quantifiers), but it instantiates randomly. So we have three players: Abelard, Eloise and Random (or the Verifier, Falsifier and Randomizer).

But more is still needed. We need a random selection between specific options, be it between heads and tails, 1-6, cards, numbers, or anything else. One way of doing this would be to create a special domain just for the random choices. Я would only instantiate from this domain, and if there are multiple random selections, we will require multiple indexed domains.

Hence, given Di(Heads, Tails),
Яix
represents a coin flip since Я randomly instantiates out of the domain containing only Heads and Tails.

(aside:
I prefer to use an artifact of Independence Friendly logic, the dependence indicator: a forward slash, /. The dependence indicator means that the quantifier only depends on those objects, variables, quantifiers or formulas specified. Hence

Яx/(Heads, Tails)

means that the variable x is randomly instantiated to Heads or Tails, since the only things that Яx is logically aware of are Heads and Tails. Therefore this too represents a coin flip, without having multiple domains.)

Now that we have an instantiation rule for Я we also need a negation rule for it. If some object is not selected at random, then it must have been individually selected. In this case the only other players that could have selected the object are ∀ and ∃. Hence the negation rule for Я is just like the negation rule for the other quantifiers: negating a quantifier means that a different player is responsible for instantiation of the variable. If neither player is responsible, it can be considered random: ¬Яx ↔ (∀x or ∃x). We can leave the basic negation rule for ∀ and ∃ the way it is.

Therefore, given the additions of the new quantifier and domain (or slash notation), we can represent randomness within logic.
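A minimal sketch of Я as a third player (the function name and encoding are my own, not established notation): the dependence slash becomes the option set handed to a random instantiation.

```python
import random

# Toy three-player evaluation: ∀ and ∃ pick deliberately; Я
# instantiates its variable at random from the options named in
# its dependence slash.

def R_quantifier(options, rng=random):
    """Яx/(options): random instantiation restricted to `options`."""
    return rng.choice(list(options))

coin = R_quantifier(("Heads", "Tails"))  # Яx/(Heads, Tails): a coin flip
die = R_quantifier(range(1, 7))          # Яx/(1, ..., 6): a die roll

print(coin in ("Heads", "Tails"))  # True
print(die in range(1, 7))          # True
```

The slash notation's role is visible in the signature: Я never sees the whole domain, only the options it is made dependent upon.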

———

See “Propositional Logics for Three” by Tulenheimo and Venema in Dialogues, Logics And Other Strange Things by Cedric Degremont (Editor) College Publications 2008, for a generalized framework for logics with 3 quantifiers. Since the above logic requires either indexed domains or dependence operators, Яandom Logic is a bit different, but it is a good discussion.

Posted in game theory, logic, science.

IF Logic and Cogito Ergo Sum

(∃x\∃x) → ∃x

Descartes’ Law

If something has informational dependence upon itself, then that thing exists.  For example, thinking that you are thinking is informationally self-dependent, and therefore a thinking thing (you) exists.

Posted in epistemology, independence friendly logic, logic.

New Quantifier Angle-I, and Agent Logic

I was thinking that upside down A and backwards E were feeling lonely.  Yes, ∀ and ∃ love each other very much, but they could really use a new friend.  Introducing Angle I:

Now, angle-I is just like her friends ∀ and ∃.  She can be used in a formula such as ∀x∃y∠Iz(Pxyz).

But how should we understand what is going on with the failure of the quantified tertium non datur?  With the advent of a third quantifier, what’s to stop us from having a fourth, fifth or n quantifiers?

The Fregean tradition of quantifiers states that the upside-down A means ‘for any’ and the backwards E means ‘there exists some’.  So ‘∀x∃yPxy’ means ‘for any x, there exists some y, such that x and y are related by property P’.  For instance, we could say that for any rational number x there exists some other rational number y such that y=x/2.

If we, however, follow closer to the game-theoretic tradition of logic, then the quantifiers no longer need take on their traditional role.  The two quantifiers act like players in a game, in which the object is to make the total statement true or false.  In our above example, we would say that backwards E would win the game, because no matter what number upside-down A picks, there is always some number that ∃ could find that is half the number ∀ chose.

Under this view of quantifiers, quantifiers acting as players in a game, there is no reason why there can’t be any number of players.  (Personally, I like the idea of continuing down the list of vowels: upside-down A, backwards E, angle I, then inverted O, O, maybe angle U? Go historical with Abelard, hEloise, and then Fulbert? Suggestions?)

Now, what is it good for? Let’s play a game of Agent Logic!

The purpose of a game of Agent Logic is to determine the loyalties of the agents in that game, i.e. discover any secret agents. A game consists of a particular logical situation, as given by formulae of independence friendly logic, with at least three different agents, each of which is represented by a quantifier: ∀, ∃, angle I, inverted O, etc. Each agent has an associated ‘domain’, and for the game to be non-trivial the intersection of the domains must have at least one element.

A game of Agent Logic is played by determining the information dependencies required to derive the target formulae from the premise formulae. Once the required information dependencies are known, the strategies the agents have used, and hence their loyalties, may be inferred. The simplest solution to a game is one in which an information dependence indicates a loyalty: if an agent has access to certain information, then that agent must have a specific loyalty.

The person running the game is the Intelligence Director, given by the quantifier angle-I. This is you! All other agents are possible opposing Intelligence Directors or secret agents of the opposing Intelligence Directors. It is your job to figure out who has given whom access to information and how that agent has acted upon it. Any information or strategy that is not derivable from the premises is considered an act of treason against you, the Intelligence Director. If the target (conclusion) is derivable from the premises alone, no determination of loyalty can be made.

The ‘domain’ of angle-I consists of what you depend upon, i.e. what you believe to exist and what you believe the other agents believe to exist. (Though it is a premise itself.)  Recall that the backslash, \, means ‘is dependent upon’ and the forward slash, /, means ‘is independent of’.

premise:

1. ∠I (
∀ (a, b, c),
∀/∃,
∃ (a, b, c, d),
a, b, c, d
)

In this ‘domain’ of angle-I, the Intelligence Director is dependent upon ∀ depending upon the existence of a, b and c, and being independent of  ∃, that ∃ depends on the existence of a, b, c and d, and the director herself depends upon the existence of a, b, c, and d.

premise:

2. ∀xPx

target (conclusion):

3. Pd

Now, since angle-I depends upon ∀ not depending upon d, there is no way to derive the target from the premises. However, since ∃ does depend upon d, if ∀ depends upon ∃, then agent ∀ has access to d.

Therefore, given treason,

4. ∀ (∃(d))               [premise of treason – ∀ receives information from ∃, specifically d ]

5. Pd                                  [instantiation from 2, 4]

This shows that the conclusion can be reached if ∀ is treasonous, a secret agent of ∃, i.e. ∀ is loyal to ∃ and not angle-I.
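The round above can be sketched as a reachability check over declared dependencies (the data structures below are my own illustration, not a standard encoding): loyalty is inferred from information access, and if ∀ can only reach d through ∃'s domain, ∀ must be ∃'s secret agent.

```python
# Toy round of Agent Logic.

dependencies = {   # who each agent is declared to depend upon
    "∀": set(),    # premise 1: ∀/∃ — ∀ is officially independent of ∃
    "∃": set(),
}
domains = {"∀": {"a", "b", "c"}, "∃": {"a", "b", "c", "d"}}

def accessible(agent):
    """Everything an agent can reach: its own domain plus the
    domains of anyone it (perhaps treasonously) depends upon."""
    reached = set(domains[agent])
    for other in dependencies[agent]:
        reached |= domains[other]
    return reached

print("d" in accessible("∀"))  # False: target Pd is not derivable

dependencies["∀"].add("∃")     # premise of treason: ∀ receives from ∃
print("d" in accessible("∀"))  # True: ∀ must be loyal to ∃, not angle-I
```

Deriving the target thus requires the extra dependency edge, which is exactly the Intelligence Director's evidence of treason.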

Posted in Frege, game theory, independence friendly logic, logic, philosophy.

Rock Paper Scissors

Rock Paper Scissors is a game in which 2 players each choose one of three options: either rock, paper or scissors.  Then the players simultaneously reveal their choices.  Rock beats scissors but loses to paper (rock smashes scissors); Paper beats rock and loses to scissors (paper covers rock); Scissors beats paper but loses to rock (scissors cut paper).  This cyclical payoff scheme (Rock > Scissors, Scissors > Paper, Paper > Rock) can be represented by this rubric:

                          Child 2
                 rock     paper     scissors
Child 1 rock      0,0     -1,1       1,-1
        paper     1,-1     0,0      -1,1
        scissors -1,1      1,-1      0,0
(ref: Shor, Mikhael, “Rock Paper Scissors,” Dictionary of Game Theory Terms, Game Theory .net,  <http://www.gametheory.net/dictionary/Games/RockPaperScissors.html>  Web accessed: 22 September 2010)

However, if we want to describe the game of Rock Paper Scissors – not just the payoff scheme – how are we to do it?

Ordinary logics have no mechanism for representing simultaneous play.  Therefore Rock Paper Scissors is problematic because there is no way to codify the simultaneous revelation of the players’ choices.

However, let’s treat the simultaneous revelation of the players’ choices as a device to prevent one player from knowing the choice of the other.  If one player were to know the choice of the other, then that player would always have a winning strategy by selecting the option that beats the opponent’s selection.  For example, if Player 1 knew (with absolute certainty) that Player 2 was going to play rock, then Player 1 would play paper, and similarly for the other options.  Since certain knowledge of the opponent’s play trivializes and ruins the game, it is this knowledge that must be prevented.

Knowledge – or lack thereof – of moves can be represented within certain logics.  Ordinarily all previous moves within logic are known, but if we declare certain moves to be independent from others, then those moves can be treated as unknown.  This can be done in Independence Friendly Logic, which allows for explicit dependence relations to be stated.

So, let’s assume our 2 players, Abelard (∀) and Eloise (∃), each decide which of the three options he or she will play out of the domain {r, p, s}.  These decisions are made without knowledge of what the other has chosen, i.e. independently of each other.

∀x ∃y/∀x

This means that Abelard chooses a value for x first and then Eloise chooses a value for y.  The /∀x next to y means that the choice of y is made independently from, without knowledge of the value of, x.

R-P-S: ∀x ∃y/∀x (Vxy)

The decisions are then evaluated according to V, which is some encoding of the above rubric like this:

V: (x=y → R-P-S) &
(x=r & y=s → T) &
(x=r & y=p → F) &
(x=p & y=r → T) &
(x=p & y=s → F) &
(x=s & y=p → T) &
(x=s & y=r → F)

T means Abelard wins; F means Eloise wins.  R-P-S means play more Rock Paper Scissors!
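A toy playthrough of the formalization above (the strategy functions are illustrative): Eloise's independence from ∀x is modeled by never passing x into her choice, and the second check shows that dropping that independence hands her a winning strategy, which is exactly why the game requires it.

```python
import random

# R-P-S: ∀x ∃y/∀x (Vxy), with V encoding the rubric above.

BEATS = {"r": "s", "p": "r", "s": "p"}  # key beats value

def V(x, y):
    """T: Abelard wins; F: Eloise wins; R-P-S: play again."""
    if x == y:
        return "R-P-S"
    return "T" if BEATS[x] == y else "F"

abelard = lambda: random.choice("rps")  # ∀x: picks with no information
eloise = lambda: random.choice("rps")   # ∃y/∀x: y chosen independently of x

x, y = abelard(), eloise()
print(V(x, y) in ("T", "F", "R-P-S"))  # True

# If the independence is dropped (y may depend on x), Eloise always wins:
counter = {"r": "p", "p": "s", "s": "r"}  # best response to each x
print(all(V(x, counter[x]) == "F" for x in "rps"))  # True
```

This is the point made earlier: certain knowledge of the opponent's play trivializes the game, so the simultaneity of the reveal is really an independence condition.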

Johan van Benthem, Sujata Ghosh and Fenrong Liu put together a sophisticated and generalized logic for concurrent action:
http://www.illc.uva.nl/Publications/ResearchReports/PP-2007-26.text-Jun-2007.pdf

Posted in game theory, independence friendly logic, logic, philosophy.