Category Archives: independence friendly logic

Punny Logic

Update 12 Feb: This post has been expanded and, after submission, accepted for publication in Analysis, published by Oxford University Press. View the final version here.

[draft]

It is hard to explain puns to kleptomaniacs because they take things literally.

On the surface, this is a statement of logic, with a premise and a conclusion.

Given the premise:

Kleptomaniacs take things literally.

We may deduce the conclusion:

It is hard to explain puns to kleptomaniacs.

Now, whether the conclusion strictly follows from the premise is beside the point: it is a pun, and meant to be funny. However, as a pun, it still has to make some logical sense. If it didn’t make any sense, it wouldn’t, and couldn’t, be funny either. While nonsense can be amusing, it isn’t punny.

What is the sense in which the conclusion logically follows from the premise then, and how does this relate to the pun?

Puns play off ambiguity in the meaning of a word or phrase. In this case the ambiguity has to do with the meaning of ‘to take things literally’. It can mean to steal, or it can mean to only use the simplest, most common definitions of terms.

In the first meaning, by definition, kleptomaniacs steal, i.e. they literally take things.

So then “take things literally” is true.

In the second meaning, by deduction, since puns play off multiple meanings of things, it is hard to explain a pun to someone who only uses the single, most common definition of a term. That is, if they take things literally, they won’t recognize the multiple meanings required to understand a pun.

So if someone “takes things literally” it is true that it is hard to explain puns to them.

Therefore, between the two meanings, we can informally derive the statement: it is hard to explain puns to kleptomaniacs because they take things literally.

However, if we wanted to write this out in a formal logical language, then we would need a formal way to represent the two meanings of the single phrase.

Classically, there is no way to give a proposition multiple meanings. Whatever a proposition is defined as, it stays that way. A can’t be defined as B and then not defined as B: (A=B & A≠B) is a contradiction and to be avoided classically. But let’s start with a classical formulation:

Let:

TTL1 mean to Take Things Literally, in the 1st sense: to steal

TTL2 mean to Take Things Literally, in the 2nd sense: to use the most common definitions of terms.

Then

  1. ∀x [ Kx → TTL1x ]
    For anyone who is a Kleptomaniac, Then they take things literally (steal)
  2. ∀y[ TTL2y → Py ]
    For anyone who takes things literally (definitionally), Then it is hard to explain puns to them

What we want, however, is closer to:

  3. ∀z [[ Kz → TTLz ] → Pz ]
    For anyone who is a Kleptomaniac, Then they take things literally, Then it is hard to explain puns to them

with only one sense of TTL, but two meanings.

Since TTL1 ≠ TTL2, we can’t derive (3) from (1) and (2) as they stand. And if TTL1 = TTL2, then we would have (1) A→B and (2) B→C while trying to prove (3) A→B→C, which does logically follow. However, there would no longer be a pun if there were only one meaning of TTL.
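
To see the failure concretely, here is a minimal sketch in Python (the predicate names are my own shorthand for the abbreviations above) that searches for a counter-model: an assignment making (1) and (2) true while (3), read with a single sense of TTL, comes out false.

```python
# Brute-force search for a counter-model over a one-person domain.
# K = is a kleptomaniac, TTL1 = steals, TTL2 = takes things definitionally,
# P = puns are hard to explain to them. (A sketch; names are mine.)
from itertools import product

def implies(p, q):
    return (not p) or q

for K, TTL1, TTL2, P in product([False, True], repeat=4):
    premise1 = implies(K, TTL1)                 # (1) ∀x [Kx → TTL1x]
    premise2 = implies(TTL2, P)                 # (2) ∀y [TTL2y → Py]
    conclusion = implies(implies(K, TTL1), P)   # (3) with TTL read in the first sense
    if premise1 and premise2 and not conclusion:
        print("counter-model:", dict(K=K, TTL1=TTL1, TTL2=TTL2, P=P))
        break
```

The search turns up assignments (the all-false assignment already works, as does K and TTL1 true with TTL2 and P false) on which both premises hold but the single-sense conclusion fails, which is just the point: with two distinct predicates the inference is blocked.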

What is needed is a way to recompose our understanding of ‘to take things literally’ in a situation-aware way. We need the right meaning of TTL to apply at the right time: Meaning 1 in the first part and Meaning 2 in the latter.

Intuitively, we want something like this, with the scope corresponding to the situation:

  4. ∀z [ Kz → { TTLz ]₁ → Pz }₂

In this formula, let the square brackets [] have the first meaning of TTL apply, while the curly braces {} use the second meaning. Only the middle — TTL — does double duty with both meanings.

Achieving this customized scope can be done by using Independence Friendly logic. IF logic allows for fine-grained scope allocation.

So let:

S mean to steal.

D mean to take things definitionally.

Then:

  5. ∀x ∀y ∃u/∀x ∃v/∀y [ Kx → ( x=u & y=v & Su & Dv → TTLvu ) → Py ]
    If anyone is a kleptomaniac then there is someone identical to them who steals… and if there is someone who takes things definitionally then there is someone identical to them to whom it is hard to explain puns… and the person who steals and the person who takes things definitionally both Take Things Literally.

The scope gymnastics are being performed by the slash operators at the start and the equality symbols in the middle part of the equation. What they are doing is specifying the correct meanings — the correct dependencies — to go with the correct senses: Stealing pairs with Kleptomania and taking things Definitionally pairs with being bad at Puns, while both pairs also mean Taking Things Literally. With both pairs meaning TTL, and each pair being composed independently, Equation (5) therefore provides a formalization of the original pun.

Discussion

Finding new applications for existing logical systems provides a foundation for further research. As we expand the range of topics subject to logical analysis, cross-pollination between these subjects becomes possible.

For instance, using custom dependencies to associate multiple meanings to a single term is not only useful in describing puns. Scientific entities are often the subjects of competing hypotheses. The different hypotheses give different meanings — different properties, relations and dependencies — to the scientific objects under study. Logically parsing how the different hypotheses explain the world using the same terms can help us analyze the contradictions and incommensurabilities between theories.

On the other hand, while this article may have forever ruined the above pun for you (and me), it does potentially give insight into what humans find funny. Classically, risibility, having the ability to laugh, has been associated with humans and rationality. Analyzing this philosophical tradition with the new logical techniques will hopefully provide existential insight into the human condition.

Posted in independence friendly logic, logic.

Shaking the Tree

Life often lands us in situations where no strategy suggests any further move. We just don’t know what to do next. In a game of perfect information, where each player knows all the previous moves, this can signal stalemate. Take chess: given that both sides know everything that has transpired and have no reason to believe that the opponent will make a mistake, there can come a time when both sides realize that there are no winning strategies for either player. A draw is then agreed upon.

The situation is not as simple in games of incomplete information. Let’s assume some information is private, that some moves in the game are only known to a limited number of players. For instance, imagine you take over a game of chess in the middle of a match. The previous moves would be known to your opponent and the absent player, but not to you. Hence you do not know the strategies used to arrive at that point in the game, and your opponent knows that you do not know.

Assume we are in some such situation where we do not know all the previous moves and have no further strategic moves to make. This is to say we are waiting, idling, or otherwise biding our time until something of significance happens. Formally, we are at an equilibrium.

A strategy to get out of this equilibrium is to “shake the tree” to see what “falls out”. This involves making information public that was thought to be private. For instance, say you knew a damaging secret about someone in power, and that person thought they had successfully hidden said secret. By making that person believe that the secret was public knowledge, this could cause them to act in a way they would not otherwise, breaking the equilibrium.

How, though, to represent this formally? The move made in shaking the tree is to make information public that was believed to be private. To represent this in logic we need a mechanism that represents public and private information. I will use the forward slash notation of Independence Friendly Logic, /, to mean ‘depends upon’ and the back slash, \, to mean ‘independent of.’

To represent private strategy Q, based on secret S, and not public to party Z we can say:

Secret Strategy) If, and only if, no one other than Y depends upon the Secret, then use Strategy Q
(∀Y\S) (∃z/S) ~(Y = z) ⇔ Q

To initiate ‘shaking the tree’ is to introduce a new dependency:

Tree Shaking) there is someone other than Y that depends on S
(∃z\S) ~(Y = z)

Tree Shaking causes party Y to change away from Strategy Q, since Strategy Q was predicated upon no one other than Y knowing the secret, S. The change in strategy means that the players are no longer idling in equilibrium, which is the goal of shaking the tree.
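
As a very rough illustration, here is a sketch in Python (the class and function names are mine, not part of the formalism) of a party Y whose use of Strategy Q is conditioned on believing the secret is private; publicizing the secret is what breaks the equilibrium.

```python
# Party Y plays Strategy Q only while believing no one else depends on S.
class Party:
    def __init__(self, name, depends_on_secret=False):
        self.name = name
        self.depends_on_secret = depends_on_secret

def y_strategy(other_parties):
    secret_private = not any(p.depends_on_secret for p in other_parties)
    return "Q" if secret_private else "abandon Q"   # equilibrium broken

others = [Party("Z"), Party("W")]
print(y_strategy(others))          # "Q": the idling equilibrium

# Tree shaking: introduce a new dependency on the secret.
others[0].depends_on_secret = True
print(y_strategy(others))          # "abandon Q": Y changes strategy
```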

Posted in game theory, independence friendly logic, logic, philosophy.

EIFL (Domainless Logic)

I saw this post by Mark Lance over at New APPS and he brought up one of the issues that I have recently been concerned with: What is a logical domain?  He said:

So our ignorance of our domain has implications for which sentences are true.  And if a sentence is true under one interpretation and false under another, it has different meanings under them.  And if we don’t know which of these interpretations we intend, then we don’t know what we mean.

I am inclined to think that this is a really serious issue…

When we don’t know what we, ourselves, mean, I regard this as THE_PHILOSOPHICAL_BAD, the place you never want to be in, the position where you can’t even speak.  Any issue that generates this sort of problem I regard as a Major Problem of Philosophy — philosophy in general, not just of its particular subject.

A little over a year ago I was trying to integrate probability and logic in a new way. I developed indexed domains so that different quantifications would range over different values. But then I said:

(aside:
I prefer to use an artifact of Independence Friendly logic, the dependence indicator: a forward slash, /. The dependence indicator means that the quantifier only depends on those objects, variables, quantifiers or formulas specified. Hence

Яx/(Heads, Tails)

means that the variable x is randomly instantiated to Heads or Tails, since the only things that Яx is logically aware of are Heads and Tails. Therefore this too represents a coin flip, without having multiple domains.)

I used the dependence slash to indicate the exact domain that a specific quantification ranged over. This localized the domain to the quantifier. About a week after publishing this I realized that this pseudo-domain ought itself to be logically structured: (Heads, Tails) became (Heads OR Tails). The logical or mathematical domain, as an independent structure, can therefore be done away with completely. Instead, a pseudo-domain must be specified by a set of logical or mathematical statements given by a dependence (or independence) relation attached to every quantifier.

For example:

∀x/((a or b or c) & (p → q))…

This means that instantiating x depends upon the individuals a, b or c, that is, x can only be a, b or c, and it also can only be instantiated if (p → q) already has a truth value.  If  ((p → q) → d) was in the pseudo-domain, then x could be instantiated to d if (p → q) was true; if ¬d was implied, then it would be impossible to instantiate x to d, even if d was implied in some other part of the pseudo-domain.  Hence the pseudo-domain is the result of a logical process.
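
For illustration only, here is one way such a quantifier-local pseudo-domain might be encoded (a sketch in Python; the representation and names are mine, not a fixed notation): a quantifier carries both the individuals it may range over and the formulas that must already have a truth value.

```python
# ∀x / ((a or b or c) & (p → q)): x can only be a, b or c, and can only be
# instantiated once (p → q) has a truth value. (Hypothetical encoding.)

def can_instantiate(candidate, individuals, required, settled):
    """True if the candidate is in x's dependency set and every required
    formula already has a truth value in `settled`."""
    return candidate in individuals and all(f in settled for f in required)

individuals_for_x = {"a", "b", "c"}
required_for_x = ["p -> q"]

print(can_instantiate("b", individuals_for_x, required_for_x, {"p -> q": True}))   # True
print(can_instantiate("d", individuals_for_x, required_for_x, {"p -> q": True}))   # False: d is outside x's dependencies
print(can_instantiate("b", individuals_for_x, required_for_x, {}))                 # False: (p → q) not yet settled
```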

The benefit of this approach is that it better represents the changing state of epistemic access that a logical game player has at different times.  You can have a general domain for things that exist across all game players and times that would be added to all the quantifier dependencies (Platonism, if you will), but localized pseudo-domains for how the situation changes relative to each individual quantification.

Moreover, the domain has become part of the logical argument structure and does not have an independent existence, meaning fewer ontological denizens.  And, to answer the main question of this post, every domain is completely specified, both in content and structure.

I’m inclined to call this logic Domainless Independence Friendly logic, or DIF logic, but I really also like EIFL, like the French Tower: Epistemic Independence Friendly Logic.  Calling this logic epistemic emphasizes the relative epistemic access each player has during the logical game that comes with the elimination of the logical domain.

Posted in epistemology, game theory, independence friendly logic, logic, philosophy.

IF Logic and Cogito Ergo Sum

(∃x\∃x) → ∃x

Descartes’ Law

If something has informational dependence upon itself, then that thing exists. For example, thinking that you are thinking is informationally self-dependent, and therefore a thinking thing (you) exists.

Posted in epistemology, independence friendly logic, logic.

New Quantifier Angle-I, and Agent Logic

I was thinking that upside down A and backwards E were feeling lonely.  Yes, ∀ and ∃ love each other very much, but they could really use a new friend.  Introducing Angle I:

Now, angle-I is just like her friends ∀ and ∃. She can be used in a formula such as ∀x ∃y angle-Iz (Pxyz).

But how should we understand what is going on with the failure of the quantified tertium non datur? With the advent of a third quantifier, what’s to stop us from having a fourth, fifth or n quantifiers?

The Fregean tradition of quantifiers states that the upside down A means ‘for any’ and the backwards E means ‘there exists some’. So ‘∀x∃yPxy’ means ‘for any x, there exists some y, such that x and y are related by property P’. For instance we could say that for any rational number x there exists some other rational number y such that y=x/2.

If we, however, follow closer to the game-theoretic tradition of logic, then the quantifiers no longer need take on their traditional role. The two quantifiers act like players in a game, in which the object is to make the total statement true or false. In our above example, we would say that backwards E would win the game, because no matter what number upside down A picks, there is always some number that ∃ could find that is half the number ∀ chose.
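
A quick sketch of that game-theoretic reading (Python; the names are mine): whatever rational x Abelard plays, Eloise’s strategy of answering with x/2 makes the matrix true, so she wins every play.

```python
from fractions import Fraction

# Game semantics for ∀x ∃y (y = x/2) over the rationals (a sketch).
def eloise(x):
    return x / 2                      # Eloise's winning strategy

for x in [Fraction(3, 7), Fraction(-5, 2), Fraction(0)]:   # Abelard's picks
    y = eloise(x)
    assert y == x / 2                 # the matrix holds, so ∃ wins this play
print("Eloise has a winning strategy: the sentence is true.")
```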

Under this view of quantifiers, quantifiers acting as players in a game, there is no reason why there can’t be any number of players.  (Personally, I like the idea of continuing down the list of vowels: upside-down A, backwards E, angle I, then inverted O, O, maybe angle U? Go historical with Abelard, hEloise, and then Fulbert? Suggestions?)

Now, what is it good for? Let’s play a game of Agent Logic!

The purpose of a game of Agent Logic is to determine the loyalties of the agents in that game, i.e. discover any secret agents. A game consists of a particular logical situation, as given by formulae of independence friendly logic, with at least three different agents, each of which is represented by a quantifier: ∀, ∃, angle I, inverted O, etc. Each agent has an associated ‘domain’, and for the game to be non-trivial the intersection of the domains must have at least one element.

A game of Agent Logic is played by determining the information dependencies required to derive the target formulae from the premise formulae. Once the required information dependencies are known, the strategies the agents have used, and hence their loyalties, may be inferred. The simplest solution to a game is one in which an information dependence indicates a loyalty: if an agent has access to certain information, then that agent must have a specific loyalty.

The person running the game is the Intelligence Director, given by the quantifier angle-I. This is you! All other agents are possible opposing Intelligence Directors or secret agents of the opposing Intelligence Directors. It is your job to figure out who has given whom access to information and how that agent has acted upon it. Any information or strategy that is not derivable from the premises is considered an act of treason against you, the Intelligence Director. If the target premise (conclusion) is derivable from the premises alone, no determination of loyalty can be made.

The ‘domain’ of angle-I consists of what you depend upon, i.e. what you believe to exist and what you believe the other agents believe to exist. (Though it is a premise itself.)  Recall that the backslash, \, means ‘is dependent upon’ and the forward slash, /, means ‘is independent of’.

premise:

1. angle-I (
∀ (a, b, c),
∀/∃,
∃ (a, b, c, d),
a, b, c, d
)

In this ‘domain’ of angle-I, the Intelligence Director depends upon ∀ depending upon the existence of a, b and c while being independent of ∃, upon ∃ depending on the existence of a, b, c and d, and the Director herself depends upon the existence of a, b, c and d.

premise:

2. ∀xPx

target (conclusion):

3. Pd

Now, since angle-I depends upon ∀ not depending upon d, there is no way to derive the target from the premises. However, since ∃ does depend upon d, if ∀ depends upon ∃, then agent ∀ has access to d.

Therefore, given treason,

4. ∀ (∃(d))               [premise of treason – ∀ receives information from ∃, specifically d ]

5. Pd                                  [instantiation from 2, 4]

This shows that the conclusion can be reached if ∀ is treasonous, a secret agent of ∃, i.e. ∀ is loyal to ∃ and not angle-I.
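
Here is a toy reconstruction of that game in Python (my own shorthand encoding, not a fixed syntax): each agent’s declared dependencies are listed, and if the target cannot be reached from ∀’s own dependencies, the candidates for the leak are exactly the other agents who do depend on the missing individual.

```python
# Declared dependencies from premise 1 (the 'domain' of angle-I).
dependencies = {
    "forall":  {"a", "b", "c"},        # ∀ depends on a, b, c and not on ∃
    "exists":  {"a", "b", "c", "d"},   # ∃ depends on a, b, c, d
    "angle_I": {"a", "b", "c", "d"},   # the Intelligence Director (you)
}

target = "d"   # needed to instantiate ∀xPx to the target Pd

if target not in dependencies["forall"]:
    print("Pd is not derivable from ∀'s declared dependencies alone.")
    # Any other agent who does depend on d could have leaked it to ∀.
    suspects = [agent for agent, deps in dependencies.items()
                if agent not in ("forall", "angle_I") and target in deps]
    print("possible source of the treasonous information:", suspects)
```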

Posted in Frege, game theory, independence friendly logic, logic, philosophy.

Rock Paper Scissors

Rock Paper Scissors is a game in which 2 players each choose one of three options: either rock, paper or scissors.  Then the players simultaneously reveal their choices.  Rock beats scissors but loses to paper (rock smashes scissors); Paper beats rock and loses to scissors (paper covers rock); Scissors beats paper but loses to rock (scissors cut paper).  This cyclical payoff scheme (Rock > Scissors, Scissors > Paper, Paper > Rock) can be represented by this rubric:

                     Child 2
              rock      paper     scissors
Child 1
  rock        0, 0      -1, 1      1, -1
  paper       1, -1      0, 0     -1, 1
  scissors   -1, 1       1, -1     0, 0
(ref: Shor, Mikhael, “Rock Paper Scissors,” Dictionary of Game Theory Terms, Game Theory .net,  <http://www.gametheory.net/dictionary/Games/RockPaperScissors.html>  Web accessed: 22 September 2010)

However, if we want to describe the game of Rock Paper Scissors – not just the payoff scheme – how are we to do it?

Ordinary logics have no mechanism for representing simultaneous play.  Therefore Rock Paper Scissors is problematic because there is no way to codify the simultaneous revelation of the players’ choices.

However, let’s treat the simultaneous revelation of the players’ choices as a device to prevent one player from knowing the choice of the other.  If one player were to know the choice of the other, then that player would always have a winning strategy by selecting the option that beats the opponent’s selection.  For example, if Player 1 knew (with absolute certainty) that Player 2 was going to play rock, then Player 1 would play paper, and similarly for the other options.  Since certain knowledge of the opponent’s play trivializes and ruins the game, it is this knowledge that must be prevented.

Knowledge – or lack thereof – of moves can be represented within certain logics.  Ordinarily all previous moves within logic are known, but if we declare certain moves to be independent from others, then those moves can be treated as unknown.  This can be done in Independence Friendly Logic, which allows for explicit dependence relations to be stated.

So, let’s assume our 2 players, Abelard (∀) and Eloise (∃), each decide which of the three options he or she will play out of the domain {r, p, s}. These decisions are made without knowledge of what the other has chosen, i.e. independently of each other.

∀x ∃y/∀x

This means that Abelard chooses a value for x first and then Eloise chooses a value for y.  The /∀x next to y means that the choice of y is made independently from, without knowledge of the value of, x.

R-P-S: ∀x ∃y/∀x (Vxy)

The decisions are then evaluated according to V, which is some encoding of the above rubric like this:

V: (x=y → R-P-S) &
(x=r & y=s → T) &
(x=r & y=p → F) &
(x=p & y=r → T) &
(x=p & y=s → F) &
(x=s & y=p → T) &
(x=s & y=r → F)

T means Abelard wins; F means Eloise wins.  R-P-S means play more Rock Paper Scissors!
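
For concreteness, here is a small sketch of the evaluation V and of the independent choices in Python (the encoding is mine; the independence of ∃’s choice from ∀’s is modelled simply by never letting the choice of y look at x):

```python
import random

BEATS = {("r", "s"), ("p", "r"), ("s", "p")}    # pairs (x, y) where x beats y

def V(x, y):
    if x == y:
        return "R-P-S"                           # play more Rock Paper Scissors
    return "T" if (x, y) in BEATS else "F"       # T: Abelard wins, F: Eloise wins

domain = ["r", "p", "s"]
x = random.choice(domain)    # Abelard's choice of x
y = random.choice(domain)    # Eloise's choice of y, made without seeing x
print(x, y, V(x, y))
```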

Johan van Benthem, Sujata Ghosh and Fenrong Liu put together a sophisticated and generalized logic for concurrent action:
http://www.illc.uva.nl/Publications/ResearchReports/PP-2007-26.text-Jun-2007.pdf

Posted in game theory, independence friendly logic, logic, philosophy.

Revision and Hypothesis Introduction

Say we have some theory that we represent with a formula of logic.  In part it looks like this:

[1] …(∃z) … Pz …

This says that at some point in the theory there is some object z that has property P.

After much hard work, we discover that the object z with property P can be described as the combination of two more fundamental objects w and v with properties R and S:

[2] …(∃z) … Pz … ⇒ …(∃w)(∃v) … (Rw & Sv)…

Now let’s say that in our theory, any object that had property P depended upon some other objects, x and y:

[3] …(∀x)(∀y)…(∃z) … Pz …

In our revised theory we know that objects w and v must somehow depend upon x and y, but there are many more possible dependence patterns that two different objects can have as compared to z alone.  Both w and v could depend upon x and y:

[4] …(∀x)(∀y)…(∃w)(∃v) … (Rw & Sv)…

However, let’s say that w depends on x but not y, and v depends on y but not x.  Depending on the rest of the formula, it may be possible to rejigger the order of the quantifiers to reflect this, but maybe not.  If we allow ourselves to declare dependencies and independencies, arbitrary patterns of dependence can be handled.  The forward slash means to ignore the dependency of the listed quantified variable:

[5] …(∀x)(∀y)…(∃w/∀y) (∃v/∀x) … (Rw & Sv)…

Besides the convenience of being able to represent arbitrary dependence structures, I think there is another benefit to this use of the slash notation: theoretical continuity. In formula [2] above, there is a double right arrow which I used to represent the change from z to w and v, and from P to R and S. However, I created this use of the double right arrow for this specific purpose; there is no way within normal logic to represent such a change. That is, there is no method to get from formula [3] to formula [4] or [5], even though there is supposed to be some sort of continuity between these formulas.

Insofar as the slash notation from Independence Friendly Logic allows us to drop in new quantified variables without restructuring the rest of the formula, we can use this process as a logical move like modus ponens (though, perhaps, not as truth-preserving). Tentatively I’ll call it ‘Hypothesis Introduction’:

[6]

  1. …(∀x)(∀y)…(∃z) … Pz …
  2. …(∀x)(∀y)…(∃w/∀y) (∃v/∀x) … (Rw & Sv)…      (HI [1])

The move from line one to line two changes the formula while providing a similar sort of continuity as used in deduction.
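
As a rough illustration of how the move might be mechanized, here is a sketch in Python (representing a formula as a quantifier prefix plus a matrix is my own device, not a standard one): Hypothesis Introduction swaps (∃z) for the two slashed quantifiers and rewrites the matrix, leaving the rest of the prefix untouched.

```python
# Each quantifier is (kind, variable, independent_of); the matrix is a string.
formula_3 = {
    "prefix": [("forall", "x", set()), ("forall", "y", set()), ("exists", "z", set())],
    "matrix": "P(z)",
}

def hypothesis_introduction(formula):
    """Replace (∃z)…Pz… with (∃w/∀y)(∃v/∀x)…(Rw & Sv)…, as in [5]."""
    prefix = [q for q in formula["prefix"] if q[1] != "z"]
    prefix += [("exists", "w", {"y"}), ("exists", "v", {"x"})]
    return {"prefix": prefix, "matrix": "R(w) & S(v)"}

formula_5 = hypothesis_introduction(formula_3)     # the HI step of [6]
print(formula_5["prefix"])
print(formula_5["matrix"])
```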

One potential application of this would be to Ramsey Sentences.  With the addition of Hypothesis Introduction, we can generalize the Ramsey Sentence into, if you will, a Ramsey Lineage, which would chart the changes of one Ramsey Sentence to another, one theory to another.

A second application, and what got me thinking about this in the first place, was to game theory.  When playing a game against an opponent, it is mostly best to assume that they are rational.  What happens when the opponent does something apparently irrational?  Either you can play as if they are irrational or you can ignore it and continue to play as if they hadn’t made such a move.  By using Hypothesis Introduction to introduce a revision into the game structure, however, you can create a scenario that might reflect an alternate game that your opponent might be playing.  In this way you can maintain your opponent’s rationality and explain the apparently irrational move as a rational move in a different game that is similar to the one you are playing.  This alternate game could be treated as a branch off the original.  The question would then be to discover who is playing the ‘real’ game – a question of information and research, not rationality.

Posted in game theory, independence friendly logic, logic, philosophy, science.

The Non-Reducibility & Scientific Explanation Problem

Q: What is a multiple star system?

A: More than one star in a non-reducible mutual relationship spinning around each other.

Q: How did it begin?

A: Well, I guess, the stars were out in space and at some point they became close in proximity.  Then their gravitations caused each other to alter their course and become intertwined.

Q: How did the gravitations cause the courses of the stars to become intertwined?  Gravity does one thing: it changes the shape of space-time; it does not intertwine things.

A: That seems right.  It is not only the gravities that cause this to happen.  It is both the trajectory and mass (gravity) of the stars in relation to each other that caused them to form a multiple star system.

Q: Saying that it is both the trajectories and the masses in relation to each other is not an answer.  That is what is in need of being explained.

A: You are asking the impossible.  I have already said that the relation is non-reducible.  I am not going to go back upon my word in order to reduce the relation into some other relation to explain it to you.  The best that can be done is to describe it as best we can.

Here is the problem: If you have a non-reducible relation (e.g., a 3-body problem or a logical mutual interdependence) then you cannot explain how it came to exist. Explaining such things would mean that the relation was reducible. But being unable to explain some scientific phenomenon violates the principle of science: we should be able to explain physical phenomena. Then the relation must not be non-reducible, or it must have been a preexisting condition going all the way back to the origin of the universe. Either you have a contradiction or it is unexplainable by definition.

What can we do? You can hold out for a solution to the 3-body problem or, alternatively, you can change what counts as explanation. The latter option is the way to go, though I am not going into this now.

For now I just want to illustrate that this problem of non-reducibility and explanation is pervasive:

Q: What is a biological symbiotic relationship?

A: More than one organism living in a non-reducible relationship together.

Q: How did it begin?

A: Well, I guess, the organisms were out in nature and at some point they became close in proximity.  Then their features caused each other to alter their evolution and become intertwined.

Q: How did the features cause the courses of their evolution to become intertwined?  Physical features do one thing: they enable an organism to reproduce; they do not intertwine things.

A: That seems right.  It is not only the features that cause this to happen.  It is both the ecosystem and the features of the organisms in relation to each other that caused them to form a symbiosis.

Q: Saying that it is both the place the organisms are living in and their features in relation to each other is not an answer.  That is what is in need of being explained.

A: You are asking the impossible.  I have already said that the relation is non-reducible.  I am not going to go back upon my word in order to reduce the relation into some other relation to explain it to you.  The best that can be done is to describe it as best we can.

As you can see, I am drawing a parallel between a multiple body problem and multiple organisms that live together. As in the star example above, there is no way to explain the origins of organisms living together. Even in the most basic case it is impossible.

Posted in biology, epistemology, evolution, independence friendly logic, ontology, philosophy, physics, science.

Where Does Probability Come From? (and randomness to boot)

I just returned from a cruise to Alaska. It is a wonderful, beautiful place. I zip-lined in a rain forest canopy, hiked above a glacier, kayaked coastal Canada and was pulled by sled-dogs. Anywho, as on many cruises, there was a casino, which is an excellent excuse for me to discuss probability.

What is probability and where does it come from? Definitions are easy enough to find. Google returns:

a measure of how likely it is that some event will occur; a number expressing the ratio of favorable cases to the whole number of cases possible …

So it’s a measure of likelihood. What’s likelihood? Google returns:

The probability of a specified outcome.

Awesome. So ‘probability as likelihood’ is non-explanatory. What about this ‘ratio of favorable cases to the whole number of cases possible’? I’m pretty wary about the word favorable. Let’s modify this definition to read:

a number expressing the ratio of certain cases to the whole number of cases possible.

Nor do I like ‘a number expressing…’ This refers to a particular probability, not probability at large, so let’s go back to using ‘measure’:

a measure of certain cases to the whole number of cases possible.

We need to be a bit more explicit about what we are measuring:

a measure of the frequency of certain cases to the whole number of cases possible.

OK. I think this isn’t that bad. When we flip a fair coin the probability is the frequency of landing on heads compared to the total cases possible, heads + tails, so 1 out of 2. Pretty good.

But notice the addition of the word fair. Where did it come from, what’s it doing there? Something is said to be fair if that thing shows no favoritism to any person or process. In terms of things that act randomly, this means that the thing acts in a consistently random way. Being consistently random means it is always random, not sometimes random and other times not random. This means that fairness has to do with the distribution of the instances of the cases we are studying. What governs this distribution?

In the case of a coin, the shape of the coin and the conditions under which it is measured make all the difference in the distribution of heads and tails. The two sides, heads and tails, must be distinguishable, but the coin must be flipped in a way such that no one can know which side will land facing up. The shape of the coin, even with uniform mass distribution, cannot preclude this previous condition. Therefore the source of probability is the interdependence of physical conditions (shape and motion of the coin) and an epistemic notion (independence of knowledge of which side will land up). When the physical conditions and our knowledge of the conditions are dependent upon each other, the situation becomes probabilistic because the conditions preclude our knowing the exact outcome of the situation.

It is now time to recall that people cheat at gambling all the time. A trio of people in March 2004 used a computer and lasers to successfully predict the decaying orbit of a ball spinning on a roulette wheel (and walked out with £1.3 million). This indicates that after a certain point it is possible to predict the outcome of a coin flipping or a roulette ball spinning, so the dependence mentioned above is eventually broken. However this is only possible once the coin is flipping or the roulette ball is rolling, not before the person releases the roulette ball or flips the coin.

With the suggestion that it is the person that determines the outcome, we can expand the physical-epistemic dependence to a physical-epistemic-performative one. If neither I nor anyone else can predict the outcome until after I perform a task, then knowledge of the outcome is dependent upon how I perform that task.

This makes sense because magicians and scam artists train themselves to be able to perform tasks like shuffling and dealing cards in ways that most of us think are random but are not. The rest of us believe that there is a dependence between the physical setup and the outcome that precludes knowing the results, but this is merely an illusion that is exploited.

What about instances in which special training or equipment is unavailable; can we guarantee everyone’s ability to measure the thing in question to be equal? We can: light. Anyone who can see at all sees light that is indistinguishable from the light everyone else sees: it has no haecceity.

This lack of distinguishability, lack of haecceity (thisness), is not merely a property of the photon but a physical characteristic of humans. We have no biology that can distinguish one photon from another of equivalent wavelength. To distinguish something we have to use a smaller feature of the thing to tell it apart from its compatriots. Since we cannot see anything smaller, this is impossible. Nor is there a technology that we could use to augment our abilities: for us to have a technology that would see something smaller than a photon would require us to know that the technology interacted at a deeper level with reality than photons do. But we cannot know that because we are physically limited to using the photon as our minimal measurement device. The act of sight is foundational: we cannot see anything smaller than a photon nor can anything smaller exist in our world.

The way we perceive photons will always be inherently distributed because of this too. We cannot uniquely identify a single photon, and hence we can’t come back and measure the properties of a photon we have previously studied. Therefore the best we will be able to accomplish when studying photons is to measure a group of photons and use a distribution of their properties, making photons inherently probabilistic. Since the act of seeing light is a biological feature of humans, we all have equal epistemological footing in this instance. This means that the epistemic dependence mentioned above can be ignored because it adds nothing to the current discussion. Therefore we can eliminate the epistemic notion from our above dependence, reducing it to a physical-performative interdependence.

Since it is a historical/evolutionary accident that the photon is the smallest object we can perceive, the photon really is not fundamental to this discussion. Therefore, the interdependence of the physical properties of the smallest things we can perceive and our inherent inability to tell them apart is a source of probability in nature.

This is a source of natural randomness as well: once we know the probability of some property that we cannot measure directly, the lack of haecceity means that we will not be able to predict when we will measure an individual with said property. Therefore the order in which we measure the property will inherently be random. [Assume the contradiction: the order in which we measure the property is not random, but follows some pattern. Then there exists some underlying structure that governs the appearance of the property. However, since we are already at the limit of what can be measured, no such thing can exist. Hence the order in which we measure the property is random.]

————–

If I were Wittgenstein I might have said:

Consider a situation in which someone asks, “How much light could you see?” Perhaps a detective is asking a hostage about where he was held. But then the answer is, “I didn’t look.” —— And this would make no sense.

hmmmm…. I did really mean to get back to gambling.

Posted in biology, epistemology, evolution, fitness, independence friendly logic, logic, measurement, mind, philosophy, physics, Relativity, science, Special Relativity, technology.

Relativity as Informational Interdependence

Ever have the experience of sitting in traffic and believing that you are moving in reverse, only to realize a second later that you were fooled by the vehicle next to you moving forward? You were sitting still, but because you saw something moving away, you mistakenly thought you had started to move in the opposite direction.

Two different senses may be at work here: your sight and your balance. Let’s assume that your balance did not play any role in this little experiment (you would have been moving too slowly to feel a jolt). Your sight told you that you were moving in a certain direction (backwards) because of something you saw, say a bus pulling forward. Then you saw something other than the bus, say the ground, and you realized that your initial appraisal of the situation was incorrect.

At the point when you look away from the bus, you believe that you are moving backwards. Then when you see the ground, you believe that you are not moving backwards. You reconcile these two contradictory beliefs by deciding that it was not you who were moving backwards but the bus that was moving forwards.

What this illustrates is that objects require something other than themselves to be considered in motion. Without the ability to reference a ‘stationary’ system (the ground), it is impossible to determine who is moving and who is staying still.

Now imagine this situation was taking place in a very gray place. The only things visible are yourself and the bus on a gray background. Then you notice that the bus is getting smaller. There is nothing for you to use as a reference (no stars, no ground, no nothing) to decide if it is you who is moving away from the bus or if it is the bus moving away from you, or both*. The only thing you have is the information that you and the bus are moving away from each other.

I refer to the statement that you and the bus are moving away from each other as information and not a belief because it is much more certain than what I called beliefs above, namely that you were in a certain kind of motion, which quickly turned out to be questionable.

The information that you and the bus are moving away from each other is not your everyday sort of information. It would be inaccurate to reduce this statement to a conjunction (you are moving and the bus is moving), which is incorrect, or to a disjunction (you are moving or the bus is moving), because you are only moving with regard to the bus. Claiming that either you or the bus is moving makes it seem that the motion of one has nothing to do with the other. The motion of you and the motion of the bus need to be mutually dependent upon each other, and a mutual interdependence is not reducible.

If we return to the everyday, we can say that you have the information that you and the bus are moving away from each other and you and the bit of ground you are on are not moving away from each other. Since the bit of ground we initially selected was arbitrary (we could have chosen anything, like another bus) it is subject to the same issues as the bus; we merely take the ground to be stationary for most purposes, but this is a pragmatic concern. Hence all determinations of motion (or non-motion) are instances of informational interdependence.

The result that relativity is part of a larger class of mutually interdependent structures is non-trivial. Minimally, this formalism will allow us to specify exactly when the use of relativity is warranted, but more importantly it will allow us to identify and provide insight into other situations of informational interdependence. Cases of mutual interdependence are relatively rare as far as instances of logic go (they can’t even be described in first order logic), and having such a well studied example gives us a head start on this phenomenon.

—————————————-
* or if the bus is shrinking, or you are growing, or all of the above, but lets assume no Alice in Wonderland scenarios.

Posted in independence friendly logic, logic, measurement, philosophy, physics, Relativity, science.