# 11.05.12

## EIFL (Domainless Logic)

Posted in epistemology, game theory, independence friendly logic, logic, philosophy at 9:57 pm by nogre

I saw this post by Mark Lance over at New APPS and he brought up one of the issues that I have recently been concerned with: What is a logical domain?  He said:

So our ignorance of our domain has implications for which sentences are true.  And if a sentence is true under one interpretation and false under another, it has different meanings under them.  And if we don’t know which of these interpretations we intend, then we don’t know what we mean.

I am inclined to think that this is a really serious issue…

When we don’t know what we, ourselves, mean, I regard this as THE_PHILOSOPHICAL_BAD, the place you never want to be in, the position where you can’t even speak.  Any issue that generates this sort of problem I regard as a Major Problem of Philosophy — philosophy in general, not just of its particular subject.

A little over a year ago I was trying to integrate probability and logic in a new way.  I developed indexed domains so that different quantifications could range over different values.  But then I said:

(aside:
I prefer to use an artifact of Independence Friendly logic, the dependence indicator: a forward slash, /. The dependence indicator means that the quantifier only depends on those objects, variables, quantifiers or formulas specified. Hence

Яx/(Heads, Tails)

means that the variable x is randomly instantiated to Heads or Tails, since the only things that Яx is logically aware of are Heads and Tails. Therefore this too represents a coin flip, without having multiple domains.)
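To make the idea concrete, here is a minimal Python sketch of Яx/(Heads, Tails): the random quantifier instantiates its variable drawing only from the values named in its dependence indicator, with no global domain in sight. (The function name `random_instantiate` is illustrative only, not part of any formal notation.)

```python
import random

def random_instantiate(dependence_set, seed=None):
    """Яx/(...): instantiate x uniformly at random, drawing only from
    the values named in its dependence indicator -- no global domain."""
    rng = random.Random(seed)
    return rng.choice(sorted(dependence_set))

# Яx/(Heads, Tails) -- a coin flip without multiple domains:
print(random_instantiate({"Heads", "Tails"}))
```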

I used the dependence slash to indicate the exact domain that a specific quantification ranged over.  This localized the domain to the quantifier.  About a week after publishing this I realized that this pseudo-domain ought to be logically structured: (Heads, Tails) became (Heads OR Tails).  The logical or mathematical domain, as an independent structure, can therefore be completely done away with.  Instead, a pseudo-domain must be specified by a set of logical or mathematical statements given by a dependence (or independence) relation attached to every quantifier.

For example:

∀x/((a or b or c) & (p → q))…

This means that instantiating x depends upon the individuals a, b and c, that is, x can only be a, b or c, and also that x can only be instantiated if (p → q) already has a truth value.  If ((p → q) → d) were in the pseudo-domain, then x could be instantiated to d if (p → q) was true; if ¬d were implied, then it would be impossible to instantiate x to d, even if d were implied in some other part of the pseudo-domain.  Hence the pseudo-domain is the result of a logical process.
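The pseudo-domain as the result of a logical process can be sketched in a few lines of Python. (Representing conditionals as (condition, individual) pairs is a simplifying assumption of this sketch, not part of the logic itself.)

```python
def candidates(base, conditionals, truths, negated):
    """Compute the values x may be instantiated to from a pseudo-domain.

    base         -- individuals x depends on directly (a or b or c)
    conditionals -- (condition, individual) pairs: the individual is
                    available only once its condition is true
    truths       -- conditions currently known to be true
    negated      -- individuals whose negation is implied elsewhere
    """
    out = set(base)
    for cond, individual in conditionals:
        if cond in truths:
            out.add(individual)
    return out - negated

# x / ((a or b or c) & (p -> q)), with ((p -> q) -> d) also present:
print(sorted(candidates({"a", "b", "c"}, [("p->q", "d")], {"p->q"}, set())))
# ['a', 'b', 'c', 'd']
# But if ¬d is implied, x cannot be instantiated to d:
print(sorted(candidates({"a", "b", "c"}, [("p->q", "d")], {"p->q"}, {"d"})))
# ['a', 'b', 'c']
```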

The benefit of this approach is that it better represents the changing state of epistemic access that a logical game player has at different times.  You can have a general domain for things that exist across all game players and times that would be added to all the quantifier dependencies (Platonism, if you will), but localized pseudo-domains for how the situation changes relative to each individual quantification.

Moreover, the domain has become part of the logical argument structure and does not have an independent existence, meaning fewer ontological denizens.  And, to answer the main question of this post, every domain is completely specified, both in content and structure.

I’m inclined to call this logic Domainless Independence Friendly logic, or DIF logic, but I really also like EIFL, like the French Tower: Epistemic Independence Friendly Logic.  Calling this logic epistemic emphasizes the relative epistemic access each player has during the logical game that comes with the elimination of the logical domain.


# 05.06.11

## IF Logic and Cogito Ergo Sum

Posted in epistemology, independence friendly logic, logic at 5:52 pm by nogre

(∃x\∃x) → ∃x

Descartes’ Law

If something has informational dependence upon itself, then that thing exists.  For example, thinking that you are thinking is informationally self-dependent, and therefore a thinking thing (you) exists.


# 01.14.11

## New Quantifier Angle-I, and Agent Logic

Posted in Frege, game theory, independence friendly logic, logic, philosophy at 8:24 pm by nogre

I was thinking that upside down A and backwards E were feeling lonely.  Yes, ∀ and ∃ love each other very much, but they could really use a new friend.  Introducing Angle I:

Now, Angle I is just like her friends ∀ and ∃.  She can be used in a formula such as ∀x∃y(angle-I)z(Pxyz).

But how should we understand what is going on with the failure of the quantified tertium non datur?  With the advent of a third quantifier, what’s to stop us from having a fourth, fifth or nth quantifier?

The Fregean tradition of quantifiers states that the upside down A means ‘for any’ and the backwards E means ‘there exists some’.  So ‘∀x∃yPxy’ means ‘for any x, there exists some y, such that x and y are related by property P’.  For instance, we could say that for any rational number x there exists some other rational number y such that y = x/2.

If we, however, follow closer to the game-theoretic tradition of logic, then the quantifiers no longer need take on their traditional role.  The two quantifiers act like players in a game, in which the object is to make the total statement true or false.  In our above example, we would say that backwards E would win the game, because no matter what number upside down A picks, there is always some number that ∃ could find that is half the number ∀ chose.
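The game-theoretic reading can be sketched in a few lines of Python: Eloise’s winning strategy for ∀x∃y (y = x/2) over the rationals is simply the function that halves whatever Abelard picks. (The helper name `eloise_strategy` is illustrative only.)

```python
from fractions import Fraction

def eloise_strategy(x):
    """Eloise's winning strategy for ∀x ∃y (y = x/2) over the rationals:
    whatever x Abelard picks, answer with half of it."""
    return x / 2

# Spot-check the strategy on a few of Abelard's possible moves:
for x in [Fraction(1), Fraction(-3, 7), Fraction(22, 5)]:
    y = eloise_strategy(x)
    assert y == x / 2  # the matrix Pxy holds, so Eloise wins this play
print("Eloise has a winning strategy on every tested move")
```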

Under this view, with quantifiers acting as players in a game, there is no reason why there can’t be any number of players.  (Personally, I like the idea of continuing down the list of vowels: upside-down A, backwards E, angle I, then inverted O, maybe angle U? Go historical with Abelard, hEloise, and then Fulbert? Suggestions?)

Now, what is it good for? Let’s play a game of Agent Logic!

The purpose of a game of Agent Logic is to determine the loyalties of the agents in that game, i.e. discover any secret agents. A game consists of a particular logical situation, as given by formulae of independence friendly logic, with at least three different agents, each of which is represented by a quantifier: ∀, ∃, angle I, inverted O, etc. Each agent has an associated ‘domain’, and for the game to be non-trivial the intersection of the domains must have at least one element.

A game of Agent Logic is played by determining the information dependencies required to derive the target formulae from the premise formulae. Once the required information dependencies are known, the strategies the agents have used, and hence their loyalties, may be inferred. The simplest solution to a game is one in which an information dependence indicates a loyalty: if an agent has access to certain information, then that agent must have a specific loyalty.

The person running the game is the Intelligence Director, given by the quantifier angle-I. This is you! All other agents are possible opposing Intelligence Directors or secret agents of the opposing Intelligence Directors. It is your job to figure out who has given whom access to information and how that agent has acted upon it. Any information or strategy that is not derivable from the premises is considered an act of treason against you, the Intelligence Director. If the target formula (conclusion) is derivable from the premises alone, no determination of loyalty can be made.

The ‘domain’ of angle-I consists of what you depend upon, i.e. what you believe to exist and what you believe the other agents believe to exist. (Though it is itself a premise.)  Recall that the backslash, \, means ‘is dependent upon’ and the forward slash, /, means ‘is independent of’.

premise:

1. (angle-I) \ (
∀\ (a, b, c),
∀/∃,
∃\ (a, b, c, d),
a, b, c, d
)

In this ‘domain’ of angle-I, the Intelligence Director depends upon ∀ depending upon the existence of a, b and c while being independent of ∃, upon ∃ depending upon the existence of a, b, c and d, and the Director herself depends upon the existence of a, b, c and d.

premise:

2. ∀xPx

target (conclusion):

3. Pd

Now, since angle-I depends upon ∀ not depending upon d, there is no way to derive the target from the premises. However, since ∃ does depend upon d, if ∀ depends upon ∃, then agent ∀ has access to d.

Therefore, given treason,

4. ∀\ (∃\(d))               [premise of treason - ∀ receives information from ∃, specifically d ]

5. Pd                                  [instantiation from 2, 4]

This shows that the conclusion can be reached if ∀ is treasonous, a secret agent of ∃, i.e. ∀ is loyal to ∃ and not angle-I.
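The derivation above can be mimicked with a small Python sketch: model each agent’s epistemic reach as its own ‘domain’ plus, transitively, the domains of the agents it depends upon, and check whether d becomes accessible to ∀ only under the premise of treason. (The representation by dictionaries of sets is an assumption made purely for illustration.)

```python
def accessible(agent, depends_on, domains):
    """Everything an agent has access to: its own 'domain' plus,
    transitively, the domains of every agent it depends upon."""
    seen, stack, out = set(), [agent], set()
    while stack:
        a = stack.pop()
        if a in seen:
            continue
        seen.add(a)
        out |= domains.get(a, set())
        stack.extend(depends_on.get(a, set()))
    return out

domains = {"A": {"a", "b", "c"},       # ∀ \ (a, b, c)
           "E": {"a", "b", "c", "d"}}  # ∃ \ (a, b, c, d)

# Loyal scenario: ∀ is independent of ∃, so the target Pd is underivable.
assert "d" not in accessible("A", {"A": set()}, domains)

# Treason (premise 4, ∀\(∃\(d))): ∀ receives information from ∃.
assert "d" in accessible("A", {"A": {"E"}}, domains)
print("Pd is derivable only if ∀ is a secret agent of ∃")
```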


# 09.22.10

## Rock Paper Scissors

Posted in game theory, independence friendly logic, logic, philosophy at 4:08 pm by nogre

Rock Paper Scissors is a game in which 2 players each choose one of three options: either rock, paper or scissors.  Then the players simultaneously reveal their choices.  Rock beats scissors but loses to paper (rock smashes scissors); Paper beats rock and loses to scissors (paper covers rock); Scissors beats paper but loses to rock (scissors cut paper).  This cyclical payoff scheme (Rock > Scissors, Scissors > Paper, Paper > Rock) can be represented by this rubric:

| Child 1 \ Child 2 | rock | paper | scissors |
|---|---|---|---|
| rock | 0, 0 | -1, 1 | 1, -1 |
| paper | 1, -1 | 0, 0 | -1, 1 |
| scissors | -1, 1 | 1, -1 | 0, 0 |
(ref: Shor, Mikhael, “Rock Paper Scissors,” Dictionary of Game Theory Terms, Game Theory .net,  <http://www.gametheory.net/dictionary/Games/RockPaperScissors.html>  Web accessed: 22 September 2010)

However, if we want to describe the game of Rock Paper Scissors – not just the payoff scheme – how are we to do it?

Ordinary logics have no mechanism for representing simultaneous play.  Therefore Rock Paper Scissors is problematic because there is no way to codify the simultaneous revelation of the players’ choices.

However, let’s treat the simultaneous revelation of the players’ choices as a device to prevent one player from knowing the choice of the other.  If one player were to know the choice of the other, then that player would always have a winning strategy by selecting the option that beats the opponent’s selection.  For example, if Player 1 knew (with absolute certainty) that Player 2 was going to play rock, then Player 1 would play paper, and similarly for the other options.  Since certain knowledge of the opponent’s play trivializes and ruins the game, it is this knowledge that must be prevented.
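The trivialization is easy to see in code: if one player knows the other’s choice, the winning reply is a simple lookup. (A sketch, with the moves spelled out as rock/paper/scissors.)

```python
def counter(opponent_move):
    """The forced winning reply once the opponent's choice is known:
    paper covers rock, scissors cut paper, rock smashes scissors."""
    return {"rock": "paper", "paper": "scissors", "scissors": "rock"}[opponent_move]

for move in ("rock", "paper", "scissors"):
    print(move, "->", counter(move))
# rock -> paper
# paper -> scissors
# scissors -> rock
```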

Knowledge – or lack thereof – of moves can be represented within certain logics.  Ordinarily all previous moves within logic are known, but if we declare certain moves to be independent from others, then those moves can be treated as unknown.  This can be done in Independence Friendly Logic, which allows for explicit dependence relations to be stated.

So, let’s assume our 2 players, Abelard (∀) and Eloise (∃), each decide which of the three options he or she will play out of the domain {r, p, s}.  These decisions are made without knowledge of what the other has chosen, i.e. independently of each other.

∀x ∃y/∀x

This means that Abelard chooses a value for x first and then Eloise chooses a value for y.  The /∀x next to y means that the choice of y is made independently from, without knowledge of the value of, x.

R-P-S: ∀x ∃y/∀x (Vxy)

The decisions are then evaluated according to V, which is some encoding of the above rubric like this:

V:
(x=y → R-P-S) &
(x=r & y=s → T) & (x=r & y=p → F) &
(x=p & y=r → T) & (x=p & y=s → F) &
(x=s & y=p → T) & (x=s & y=r → F)

T means Abelard wins; F means Eloise wins.  R-P-S means play more Rock Paper Scissors!
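For illustration, here is one way the matrix V might be evaluated in Python. (The encoding as a set of beating pairs is one simplification of the rubric above, not the only possible encoding.)

```python
# Pairs (x, y) in which Abelard's x beats Eloise's y.
BEATS = {("r", "s"), ("p", "r"), ("s", "p")}

def V(x, y):
    """Evaluate the matrix Vxy: 'T' means Abelard (∀) wins,
    'F' means Eloise (∃) wins, 'R-P-S' means a tie -- play again."""
    if x == y:
        return "R-P-S"
    return "T" if (x, y) in BEATS else "F"

print(V("r", "s"))  # T      (rock smashes scissors)
print(V("r", "p"))  # F      (paper covers rock)
print(V("p", "p"))  # R-P-S  (play more Rock Paper Scissors!)
```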

Johan van Benthem, Sujata Ghosh and Fenrong Liu put together a sophisticated and generalized logic for concurrent action:
http://www.illc.uva.nl/Publications/ResearchReports/PP-2007-26.text-Jun-2007.pdf


# 05.04.10

## Revision and Hypothesis Introduction

Posted in game theory, independence friendly logic, logic, philosophy, science at 5:51 pm by nogre

Say we have some theory that we represent with a formula of logic.  In part it looks like this:

[1] …(∃z) … Pz …

This says that at some point in the theory there is some object z that has property P.

After much hard work, we discover that the object z with property P can be described as the combination of two more fundamental objects w and v with properties R and S:

[2] …(∃z) … Pz … ⇒ …(∃w)(∃v) … (Rw & Sv)…

Now let’s say that in our theory, any object that had property P depended upon some other objects, x and y:

[3] …(∀x)(∀y)…(∃z) … Pz …

In our revised theory we know that objects w and v must somehow depend upon x and y, but there are many more possible dependence patterns that two different objects can have as compared to z alone.  Both w and v could depend upon x and y:

[4] …(∀x)(∀y)…(∃w)(∃v) … (Rw & Sv)…

However, let’s say that w depends on x but not y, and v depends on y but not x.  Depending on the rest of the formula, it may be possible to rejigger the order of the quantifiers to reflect this, but maybe not.  If we allow ourselves to declare dependencies and independencies, arbitrary patterns of dependence can be handled.  The forward slash means to ignore the dependency of the listed quantified variable:

[5] …(∀x)(∀y)…(∃w/∀y) (∃v/∀x) … (Rw & Sv)…

Besides the convenience of being able to represent arbitrary dependence structures, I think there is another benefit to this use of the slash notation: theoretical continuity.  In formula [2] above, there is a double right arrow which I used to represent the change from z to w and v, and from P to R and S.  However, I created this use of the double right arrow for this specific purpose; there is no way within normal logic to represent such a change.  That is, there is no method to get from formula [3] to formula [4] or [5], even though there is supposed to be some sort of continuity between these formulas.

Insofar as the slash notation from Independence Friendly Logic allows us to drop in new quantified variables without restructuring the rest of the formula, we can use this process as a logical move like modus ponens (though, perhaps, not as truth preserving).  Tentatively I’ll call it ‘Hypothesis Introduction’:

[6]

1. …(∀x)(∀y)…(∃z) … Pz …
2. …(∀x)(∀y)…(∃w/∀y) (∃v/∀x) … (Rw & Sv)…      (HI [1])

The move from line one to line two changes the formula while providing a similar sort of continuity as used in deduction.
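Treating a quantifier prefix as data makes Hypothesis Introduction look like an ordinary rewrite rule. A sketch in Python, where representing each quantifier as a (symbol, variable, independent-of) triple is an assumption of this sketch, not standard notation:

```python
def hypothesis_introduction(prefix, old_var, replacements):
    """Replace one quantifier in a prefix with new slashed quantifiers,
    leaving the rest of the prefix untouched."""
    out = []
    for q in prefix:
        out.extend(replacements if q[1] == old_var else [q])
    return out

# Line 1: ...(∀x)(∀y)...(∃z)...
line1 = [("∀", "x", set()), ("∀", "y", set()), ("∃", "z", set())]

# Line 2 by HI: (∃z) becomes (∃w/∀y)(∃v/∀x).
line2 = hypothesis_introduction(
    line1, "z", [("∃", "w", {"y"}), ("∃", "v", {"x"})])

assert line2 == [("∀", "x", set()), ("∀", "y", set()),
                 ("∃", "w", {"y"}), ("∃", "v", {"x"})]
```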

One potential application of this would be to Ramsey Sentences.  With the addition of Hypothesis Introduction, we can generalize the Ramsey Sentence into, if you will, a Ramsey Lineage, which would chart the changes of one Ramsey Sentence to another, one theory to another.

A second application, and what got me thinking about this in the first place, was to game theory.  When playing a game against an opponent, it is mostly best to assume that they are rational.  What happens when the opponent does something apparently irrational?  Either you can play as if they are irrational or you can ignore it and continue to play as if they hadn’t made such a move.  By using Hypothesis Introduction to introduce a revision into the game structure, however, you can create a scenario that might reflect an alternate game that your opponent might be playing.  In this way you can maintain your opponent’s rationality and explain the apparently irrational move as a rational move in a different game that is similar to the one you are playing.  This alternate game could be treated as a branch off the original.  The question would then be to discover who is playing the ‘real’ game – a question of information and research, not rationality.


# 05.22.09

## The Non-Reducibility & Scientific Explanation Problem

Posted in biology, epistemology, evolution, independence friendly logic, ontology, philosophy, physics, science at 9:23 pm by nogre

Q: What is a multiple star system?

A: More than one star in a non-reducible mutual relationship spinning around each other.

Q: How did it begin?

A: Well, I guess, the stars were out in space and at some point they became close in proximity.  Then their gravitations caused each other to alter their course and become intertwined.

Q: How did the gravitations cause the courses of the stars to become intertwined?  Gravity does one thing: it changes the shape of space-time; it does not intertwine things.

A: That seems right.  It is not only the gravities that cause this to happen.  It is both the trajectory and mass (gravity) of the stars in relation to each other that caused them to form a multiple star system.

Q: Saying that it is both the trajectories and the masses in relation to each other is not an answer.  That is what is in need of being explained.

A: You are asking the impossible.  I have already said that the relation is non-reducible.  I am not going to go back upon my word in order to reduce the relation into some other relation to explain it to you.  The best that can be done is to describe it as best we can.

Here is the problem: if you have a non-reducible relation (e.g., a 3-body problem or a logical mutual interdependence), then you cannot explain how it came to exist.  Explaining such things would mean that the relation was reducible.  But being unable to explain some scientific phenomenon violates the principle of science: we should be able to explain physical phenomena.  So either the relation is not non-reducible after all, or it must have been a preexisting condition going all the way back to the origin of the universe.  Either you have a contradiction or it is unexplainable by definition.

What can we do?  You can hold out for a solution to the 3-body problem or, alternatively, you can change what counts as explanation.  The latter option is the way to go, though I am not going into this now.

For now I just want to illustrate that this problem of non-reducibility and explanation is pervasive:

Q: What is a biological symbiotic relationship?

A: More than one organism living in a non-reducible relationship together.

Q: How did it begin?

A: Well, I guess, the organisms were out in nature and at some point they became close in proximity.  Then their features caused each other to alter their evolution and become intertwined.

Q: How did the features cause the courses of their evolution to become intertwined?  Physical features do one thing: they enable an organism to reproduce; they do not intertwine things.

A: That seems right.  It is not only the features that cause this to happen.  It is both the ecosystem and the features of the organisms in relation to each other that caused them to form a symbiosis.

Q: Saying that it is both the place the organisms are living in and their features in relation to each other is not an answer.  That is what is in need of being explained.

A: You are asking the impossible.  I have already said that the relation is non-reducible.  I am not going to go back upon my word in order to reduce the relation into some other relation to explain it to you.  The best that can be done is to describe it as best we can.

As you can see, I am drawing a parallel between a multiple-body problem and multiple organisms that live together.  As in the star example above, there is no way to explain the origins of organisms living together.  Even in the most basic case it is impossible.


# 08.18.08

## Where Does Probability Come From? (and randomness to boot)

Posted in biology, epistemology, evolution, fitness, independence friendly logic, logic, measurement, mind, philosophy, physics, Relativity, science, Special Relativity, technology at 1:26 pm by nogre

I just returned from a cruise to Alaska. It is a wonderful, beautiful place. I zip-lined in a rain forest canopy, hiked above a glacier, kayaked coastal Canada and was pulled by sled-dogs. Anywho, as on many cruises, there was a casino, which is an excellent excuse for me to discuss probability.

What is probability and where does it come from? Definitions are easy enough to find. Google returns:

a measure of how likely it is that some event will occur; a number expressing the ratio of favorable cases to the whole number of cases possible …

So it’s a measure of likelihood. What’s likelihood? Google returns:

The probability of a specified outcome.

Awesome. So ‘probability as likelihood’ is non-explanatory. What about this ‘ratio of favorable cases to the whole number of cases possible’? I’m pretty wary about the word favorable. Let’s modify this definition to read:

a number expressing the ratio of certain cases to the whole number of cases possible.

Nor do I like ‘a number expressing…’ This refers to a particular probability, not probability at large, so let’s go back to using ‘measure’:

a measure of certain cases to the whole number of cases possible.

We need to be a bit more explicit about what we are measuring:

a measure of the frequency of certain cases to the whole number of cases possible.

OK. I think this isn’t that bad. When we flip a fair coin the probability is the frequency of landing on heads compared to the total cases possible, heads + tails, so 1 out of 2. Pretty good.
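That working definition is easy to compute. A small Python sketch, using exact fractions (the helper name `probability` is illustrative only):

```python
from fractions import Fraction

def probability(certain_cases, possible_cases):
    """'A measure of the frequency of certain cases to the whole
    number of cases possible', as an exact ratio."""
    return Fraction(len(certain_cases), len(possible_cases))

print(probability({"heads"}, {"heads", "tails"}))  # 1/2
print(probability({2, 4, 6}, {1, 2, 3, 4, 5, 6}))  # 1/2, an even roll of a die
```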

But notice the addition of the word fair. Where did it come from, what’s it doing there? Something is said to be fair if that thing shows no favoritism to any person or process. In terms of things that act randomly, this means that the thing acts in a consistently random way. Being consistently random means it is always random, not sometimes random and other times not random. This means that fairness has to do with the distribution of the instances of the cases we are studying. What governs this distribution?

In the case of a coin, the shape of the coin and the conditions under which it is measured make all the difference in the distribution of heads and tails. The two sides, heads and tails, must be distinguishable, but the coin must be flipped in a way such that no one can know which side will land facing up. The shape of the coin, even with uniform mass distribution, cannot preclude this previous condition. Therefore the source of probability is the interdependence of physical conditions (the shape and motion of the coin) and an epistemic notion (independence of knowledge of which side will land up). When the physical conditions and our knowledge of the conditions are dependent upon each other, the situation becomes probabilistic because the conditions preclude our knowing the exact outcome of the situation.

It is now time to recall that people cheat at gambling all the time. A trio of people in March 2004 used a computer and lasers to successfully predict the decaying orbit of a ball spinning on a roulette wheel (and walked out with £1.3 million). This indicates that after a certain point it is possible to predict the outcome of a coin flipping or a roulette ball spinning, so the dependence mentioned above is eventually broken. However this is only possible once the coin is flipping or the roulette ball is rolling, not before the person releases the roulette ball or flips the coin.

With the suggestion that it is the person that determines the outcome, we can expand the physical-epistemic dependence to a physical-epistemic-performative one. If neither I nor anyone else can predict the outcome until after I perform a task, then knowledge of the outcome is dependent upon how I perform that task.

This makes sense because magicians and scam artists train themselves to perform tasks like shuffling and dealing cards in ways that most of us think are random but are not. The rest of us believe that there is a dependence between the physical setup and the outcome that precludes knowing the results, but this is merely an illusion that is exploited.

What about instances in which special training or equipment is unavailable; can we guarantee everyone’s ability to measure the thing in question to be equal? We can: light. Anyone who can see at all sees light that is indistinguishable from the light everyone else sees: it has no haecceity.

This lack of distinguishability, lack of haecceity (thisness), is not merely a property of the photon but a physical characteristic of humans. We have no biology that can distinguish one photon from another of equivalent wavelength. To distinguish something we have to use a smaller feature of the thing to tell it apart from its compatriots. Since we cannot see anything smaller, this is impossible. Nor is there a technology that we could use to augment our abilities: for us to have a technology that would see something smaller than a photon would require us to know that the technology interacted at a deeper level with reality than photons do. But we cannot know that because we are physically limited to using the photon as our minimal measurement device. The act of sight is foundational: we cannot see anything smaller than a photon nor can anything smaller exist in our world.

The way we perceive photons will always be inherently distributed because of this too. We cannot uniquely identify a single photon, and hence we can’t come back and measure the properties of a photon we have previously studied. Therefore the best we will be able to accomplish when studying photons is to measure a group of photons and use a distribution of their properties, making photons inherently probabilistic. Since the act of seeing light is a biological feature of humans, we all have equal epistemological footing in this instance. This means that the epistemic dependence mentioned above can be ignored because it adds nothing to the current discussion. Therefore we can eliminate the epistemic notion from our above dependence, reducing it to a physical-performative interdependence.

Since it is a historical/evolutionary accident that the photon is the smallest object we can perceive, the photon really is not fundamental to this discussion. Therefore, the interdependence of the physical properties of the smallest things we can perceive and our inherent inability to tell them apart is a source of probability in nature.

This is a source of natural randomness as well: once we know the probability of some property that we cannot measure directly, the lack of haecceity means that we will not be able to predict when we will measure an individual with said property. Therefore the order in which we measure the property will inherently be random. [Assume the contradiction: the order in which we measure the property is not random, but follows some pattern. Then there exists some underlying structure that governs the appearance of the property. However, since we are already at the limit of what can be measured, no such thing can exist. Hence the order in which we measure the property is random.]

————–

If I were Wittgenstein I might have said:

Consider a situation in which someone asks, “How much light could you see?” Perhaps a detective is asking a hostage about where he was held. But then the answer is, “I didn’t look.” —— And this would make no sense.

hmmmm…. I did really mean to get back to gambling.


# 07.29.08

## Relativity as Informational Interdependence

Posted in independence friendly logic, logic, measurement, philosophy, physics, Relativity, science at 8:40 pm by nogre

Ever have the experience of sitting in traffic and believe that you are moving in reverse, only to realize a second later that you were fooled by the vehicle next to you moving forward? You were sitting still, but because you saw something moving away, you mistakenly thought you started to move in the opposite direction.

Two different senses may be at work here: your sight and your balance. Let’s assume that your balance did not play any role in this little experiment (you would have been moving too slowly to feel a jolt). Your sight told you that you were moving in a certain direction (backwards) because of something you saw, say a bus pulling forward. Then you saw something other than the bus, say the ground, and you realized that your initial appraisal of the situation was incorrect.

At the point when you look away from the bus, you believe that you are moving backwards. Then when you see the ground, you believe that you are not moving backwards. You reconcile these two contradictory beliefs by deciding that it was not you who were moving backwards but the bus that was moving forwards.

What this illustrates is that objects require something other than themselves to be considered in motion. Without the ability to reference a ‘stationary’ system (the ground), it is impossible to determine who is moving and who is staying still.

Now imagine this situation was taking place in a very gray place. The only things visible are yourself and the bus on a gray background. Then you notice that the bus is getting smaller. There is nothing for you to use as a reference (no stars, no ground, no nothing) to decide if it is you who is moving away from the bus or if it is the bus moving away from you, or both*. The only thing you have is the information that you and the bus are moving away from each other.

I refer to the statement that you and the bus are moving away from each other as information and not a belief because it is much more certain than what I called beliefs above, namely that you were in a certain kind of motion, which quickly turned out to be questionable.

The information that you and the bus are moving away from each other is not your everyday sort of information. It would be inaccurate to reduce this statement to a conjunction (you are moving and the bus is moving), which is incorrect, or to a disjunction (you are moving or the bus is moving), because you are only moving with regard to the bus. Claiming that either you or the bus is moving makes it seem that the motion of one has nothing to do with the other. The motions of you and the bus need to be mutually dependent upon each other, and a mutual interdependence is not reducible.

If we return to the everyday, we can say that you have the information that you and the bus are moving away from each other and you and the bit of ground you are on are not moving away from each other. Since the bit of ground we initially selected was arbitrary (we could have chosen anything, like another bus) it is subject to the same issues as the bus; we merely take the ground to be stationary for most purposes, but this is a pragmatic concern. Hence all determinations of motion (or non-motion) are instances of informational interdependence.

The result that relativity is part of a larger class of mutually interdependent structures is non-trivial. Minimally, this formalism will allow us to specify exactly when the use of relativity is warranted, but more importantly it will allow us to identify and provide insight into other situations of informational interdependence. Cases of mutual interdependence are relatively rare as far as instances of logic go (they cannot even be described in first-order logic), and having such a well-studied example gives us a head start on this phenomenon.

—————————————-
* or if the bus is shrinking, or you are growing, or all of the above, but let’s assume no Alice in Wonderland scenarios.


# 05.29.08

## Monty Hall Update

Posted in game theory, independence friendly logic, logic, philosophy, science at 11:41 am by nogre

I wrote out an example playing of the Monty Hall Problem in Independence Friendly Logic as a game of incomplete information and appended it to my post here.
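The game itself is also easy to check numerically. Here is a minimal Python simulation (mine, not from the linked post) comparing the stay and switch strategies over many plays:

```python
import random

def play(switch: bool) -> bool:
    """One round of the Monty Hall game; returns True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty, who knows where the car is, opens a goat door the player didn't pick.
    monty = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != monty)
    return pick == car

trials = 100_000
stay = sum(play(switch=False) for _ in range(trials)) / trials
swap = sum(play(switch=True) for _ in range(trials)) / trials
print(f"stay ≈ {stay:.3f}, switch ≈ {swap:.3f}")  # stay ≈ 1/3, switch ≈ 2/3
```

Switching wins about two thirds of the time, as the game-of-incomplete-information analysis predicts.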

I also left an extended comment on Dependence Logic vs. Independence Friendly Logic about some of the tribulations encountered as a non-academic trying to get my grubby little hands on obscure logic papers.


# 04.26.08

## Dependence Logic vs. Independence Friendly Logic

Posted in fun, game theory, independence friendly logic, internet, logic, philosophy, Relativity at 2:59 pm by nogre

I picked up Dependence Logic: A New Approach to Independence Friendly Logic by Jouko Väänänen. I figure I’ll write up a review when I am finished with the book, but there is one chief difference between Dependence Logic and Independence Friendly Logic that needs to be mentioned.

On pages 44-47, when describing the difference between Dependence Logic and Independence Friendly Logic, Väänänen says:

The backslashed quantifier,

∃x_n\{x_{i_0}, …, x_{i_{m-1}}}φ,

introduced in ref. [20], with the intuitive meaning:

“there exists x_n, depending only on x_{i_0}, …, x_{i_{m-1}}, such that φ,”

The slashed quantifier,

∃x_n/{x_{i_0}, …, x_{i_{m-1}}}φ,

used in ref. [21] has the following intuitive meaning:

“there exists x_n, independently of x_{i_0}, …, x_{i_{m-1}}, such that φ,”

which we take to mean

“there exists x_n, depending only on variables other than x_{i_0}, …, x_{i_{m-1}}, such that φ,”

The backslashed quantifier notation is part of what Väänänen calls ‘Dependence Friendly Logic’, and is equivalent to the ‘Dependence Logic’ that the rest of the book expounds. This backslash notation makes the difference between Dependence (Friendly) Logic and Independence Friendly Logic clear by showing that the former logic takes the notion of dependence to be fundamental whereas the latter takes independence to be fundamental. Väänänen takes this to be an advantage because he says that Dependence Logic avoids making

one ha[ve] to decide whether “other variable” refers to other variables actually appearing in a formula φ, or to other variables in the domain…
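One way to make the two readings concrete is via Skolem functions: a quantified variable "depends only on" certain variables if the existential player's strategy gives the same value whenever those variables agree. Here is a small Python sketch of that check (my illustration, over the toy domain {0, 1}):

```python
from itertools import product

def depends_only_on(strategy, variables, allowed, domain=(0, 1)):
    """True iff the strategy's output is unchanged whenever two assignments
    agree on all the 'allowed' variables, i.e. it depends only on them."""
    assignments = [dict(zip(variables, vals))
                   for vals in product(domain, repeat=len(variables))]
    for a, b in product(assignments, repeat=2):
        if all(a[v] == b[v] for v in allowed) and strategy(a) != strategy(b):
            return False
    return True

vars_ = ["x", "y"]
f = lambda s: s["x"]   # a strategy that looks only at x

# Backslash reading ∃z\{x}: z depends only on x          -> True for f
print(depends_only_on(f, vars_, allowed=["x"]))
# Slash reading ∃z/{x}: z depends only on variables
# other than x (here, just y)                            -> False for f
print(depends_only_on(f, vars_, allowed=["y"]))
```

The same underlying test serves both notations; the difference is only whether the listed variables are the ones the strategy may consult (backslash) or the ones it must ignore (slash).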

However, this treatment misses an important philosophical difference between Independence Friendly Logic and Dependence Logic. Dependence Logic is fundamentally based upon Wilfrid Hodges's work, 'Compositional semantics for a language of imperfect information', Logic Journal of the IGPL 5:4 (1997), 539-563, in which Hodges lays out a compositional semantics for languages such as Independence Friendly Logic using sets of assignments, rather than individual assignments, to determine satisfaction (T or F). Väänänen infers that Independence Friendly Logic is just a bit unruly when it comes to specifying variables because he is working within a system that assumes sets of assignments are a useful and unproblematic way to determine satisfaction.
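The sets-of-assignments machinery can be seen in miniature. In Hodges-style semantics a whole set of assignments (a "team") is tested at once, for instance against a dependence atom =(x, y), which holds when y is functionally determined by x across the team. A minimal Python sketch of that test (names mine, for illustration):

```python
def satisfies_dependence(team, xs, y):
    """A team (set of assignments) satisfies the dependence atom =(xs, y)
    iff any two assignments that agree on all of xs also agree on y."""
    team = list(team)
    for a in team:
        for b in team:
            if all(a[x] == b[x] for x in xs) and a[y] != b[y]:
                return False
    return True

team = [{"x": 0, "y": 0}, {"x": 0, "y": 0}, {"x": 1, "y": 1}]
print(satisfies_dependence(team, ["x"], "y"))  # True: y is a function of x here
bad = team + [{"x": 1, "y": 0}]
print(satisfies_dependence(bad, ["x"], "y"))   # False: x=1 maps to both 0 and 1
```

Note that the test only makes sense once the team is given as a completed set, which is exactly the assumption questioned below.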

However, the unseen problem with using sets of assignments is that something is added by assuming the domain is a set. For example, let's try to define a location by taking the set of all the points in the universe. We immediately run into relativity: all locations are defined relative to each other and to the people trying to figure out where things are, i.e., there is no predetermined set of all the points in the universe. The issue is that the domain of potential assignments, the objects in the universe, may be dependent upon the person or people using them (the players of the semantic game in this case). If the domain is dependent upon the players, the set cannot be constructed until after the players have begun the game. Therefore, if we postulate at the outset that the domain is a set, then the players know something about the game they are playing, namely that it does not depend upon them because it was predetermined.

Following this line of thought, it seems possible to construct a game in which the domain {Abelard, Eloise} is such that Abelard and Eloise are the actual people playing the game and the formula is 'Someone x lost the game by instantiating this formula', such that whoever instantiated that formula would win the game according to the rules. But then the formula would not be satisfied, so that player would have lost; but then it would be satisfied: a paradox. It is easy enough to declare that the domain must be independent of the players, but again this signals something about the game being played to the players before the formula to be evaluated is revealed.

Lastly, there is something to be said here about using logic to represent natural language: if you consider the set of all possible responses to some question, you are never considering all possible responses, only all the responses you can think of at that time. Therefore, if we are using game semantics and imperfect information to represent natural language, it is a mistake to predetermine the domain of all possible responses separately from the people involved. Again, the domain being linked to the people involved is at odds with the domain being a predetermined set.

Long story short, there is a very good reason for not always using sets of assignments to determine satisfaction: depending on the situation, a set may reveal non-trivial information about a game or misconstrue the game being played. Independence Friendly Logic makes no assumptions about the type of game being played and is therefore of greater scope than logics based upon Hodges's work. Of course one is free to use sets of assignments to determine satisfaction and derive set-theoretic results, but the compositionality gained comes at the price of limiting the types of games that can be played.
