Say we have some theory that we represent with a formula of logic. In part it looks like this:

[1] …(∃z) … Pz …

This says that at some point in the theory there is some object z that has property P.

After much hard work, we discover that the object z with property P can be described as the combination of two more fundamental objects w and v with properties R and S respectively:

[2] …(∃z) … Pz … ⇒ …(∃w)(∃v) … (Rw & Sv)…

Now let’s say that in our theory, any object that had property P depended upon some other objects, x and y:

[3] …(∀x)(∀y)…(∃z) … Pz …

In our revised theory we know that objects w and v must somehow depend upon x and y, but two objects admit many more possible patterns of dependence than z alone did. Both w and v could depend upon both x and y:

[4] …(∀x)(∀y)…(∃w)(∃v) … (Rw & Sv)…

However, let’s say that w depends on x but not y, and v depends on y but not x. Depending on the rest of the formula, it may be possible to rejigger the order of the quantifiers to reflect this, but maybe not. If we allow ourselves to declare dependencies and independencies directly, arbitrary patterns of dependence can be handled. The forward slash declares that the existentially quantified variable is independent of the universally quantified variable listed after it:

[5] …(∀x)(∀y)…(∃w/∀y) (∃v/∀x) … (Rw & Sv)…
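One way to make the slash notation concrete is the Skolem-function reading often used for Independence Friendly Logic: each existential variable gets a witness function whose arguments are exactly the universals it may depend on. A minimal sketch in Python (the particular witness functions are invented purely for illustration):

```python
# Skolem-function reading of the quantifier prefixes (illustrative only).
# In [4] the witnesses for w and v may consult both x and y; in [5] the
# slashes restrict each witness to a single universal variable.

def w_of(x, y):          # formula [4]: the witness for w may use x and y
    return x + y

def w_slashed(x):        # formula [5]: (∃w/∀y) — the witness ignores y
    return 2 * x

def v_slashed(y):        # formula [5]: (∃v/∀x) — the witness ignores x
    return 3 * y

def independent_of_second(f, xs, ys):
    """Check (on sampled values) that f(x, y) never varies with y."""
    return all(len({f(x, y) for y in ys}) == 1 for x in xs)

xs, ys = range(5), range(5)
print(independent_of_second(w_of, xs, ys))                       # False
print(independent_of_second(lambda x, y: w_slashed(x), xs, ys))  # True
```

The check only samples values, but it captures the intended contrast: the slashed witness cannot exploit information about the variable it is declared independent of.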

Besides the convenience of being able to represent arbitrary dependence structures, I think there is another benefit to this use of the slash notation: theoretical continuity. In formula [2] above, I used a double right arrow to represent the change from z to w and v, and from P to R and S. However, I invented this use of the double right arrow for that specific purpose; there is no way within standard logic to represent such a change. That is, there is no method for getting from formula [3] to formula [4] or [5], even though there is supposed to be some sort of continuity between these formulas.

Insofar as the slash notation from Independence Friendly Logic allows us to drop in new quantified variables without restructuring the rest of the formula, we can use this process as a logical move akin to modus ponens (though, perhaps, not as truth-preserving). Tentatively I’ll call it ‘Hypothesis Introduction’:

[6]

- …(∀x)(∀y)…(∃z) … Pz …
- …(∀x)(∀y)…(∃w/∀y) (∃v/∀x) … (Rw & Sv)… (HI [1])

The move from line one to line two changes the formula while providing a continuity similar to that found in deduction.
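The continuity claim can be made concrete by noting that Hypothesis Introduction rewrites only the targeted sub-formula, leaving the surrounding context of the formula untouched. A toy sketch in Python, treating formulas as plain strings (the encoding is my own and purely illustrative):

```python
def hypothesis_introduction(formula, old_part, new_part):
    """Rewrite one sub-formula in place, leaving the rest of the context intact."""
    return formula.replace(old_part, new_part)

# Line one and line two of [6], with the elided context dropped for brevity.
line1 = "(∀x)(∀y)(∃z) Pz"
line2 = hypothesis_introduction(line1, "(∃z) Pz", "(∃w/∀y)(∃v/∀x) (Rw & Sv)")
print(line2)  # (∀x)(∀y)(∃w/∀y)(∃v/∀x) (Rw & Sv)
```

Because the universal prefix (∀x)(∀y) and any surrounding material pass through unchanged, the revised line stays structurally aligned with its predecessor, which is the continuity the move is meant to provide.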

One potential application of this would be to Ramsey Sentences. With the addition of Hypothesis Introduction, we can generalize the Ramsey Sentence into, if you will, a Ramsey Lineage, which would chart the changes of one Ramsey Sentence to another, one theory to another.

A second application, and the one that got me thinking about this in the first place, is to game theory. When playing a game against an opponent, it is usually best to assume that they are rational. What happens when the opponent does something apparently irrational? You can either play as if they are irrational, or ignore the move and continue to play as if they hadn’t made it. By using Hypothesis Introduction to introduce a revision into the game structure, however, you can construct a scenario reflecting an alternate game that your opponent might be playing. In this way you can maintain your opponent’s rationality and explain the apparently irrational move as a rational move in a different game that is similar to the one you are playing. This alternate game can be treated as a branch off the original. The question then becomes who is playing the ‘real’ game – a question of information and research, not rationality.