Category Archives: epistemology

EIFL (Domainless Logic)

I saw this post by Mark Lance over at New APPS and he brought up one of the issues that I have recently been concerned with: What is a logical domain?  He said:

So our ignorance of our domain has implications for which sentences are true.  And if a sentence is true under one interpretation and false under another, it has different meanings under them.  And if we don’t know which of these interpretations we intend, then we don’t know what we mean.

I am inclined to think that this is a really serious issue…

When we don’t know what we, ourselves, mean, I regard this as THE_PHILOSOPHICAL_BAD, the place you never want to be in, the position where you can’t even speak.  Any issue that generates this sort of problem I regard as a Major Problem of Philosophy — philosophy in general, not just of its particular subject.

A little over a year ago I was trying to integrate probability and logic in a new way.  I developed indexed domains so that different quantifications could range over different values.  But then I said:

(aside:
I prefer to use an artifact of Independence Friendly logic, the dependence indicator: a forward slash, /. The dependence indicator means that the quantifier only depends on those objects, variables, quantifiers or formulas specified. Hence

Яx/(Heads, Tails)

means that the variable x is randomly instantiated to Heads or Tails, since the only things that Яx is logically aware of are Heads and Tails. Therefore this too represents a coin flip, without having multiple domains.)
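The random quantifier in the aside can be sketched in code. This is a toy model of my own: the function name `R` and the uniform-choice assumption are illustrative, not part of the notation.

```python
import random

def R(dependencies):
    """Toy model of the random quantifier: instantiate a variable to one
    of the objects it depends on, chosen uniformly at random.
    (The uniform distribution is my assumption, not part of the notation.)"""
    return random.choice(list(dependencies))

# Яx/(Heads, Tails): a coin flip without any global domain
x = R(("Heads", "Tails"))
print(x in ("Heads", "Tails"))  # True
```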

I used the dependence slash to indicate the exact domain that a specific quantification ranged over.  This localized the domain to the quantifier.  About a week after publishing this I realized that this pseudo-domain ought to be logically structured: (Heads, Tails) became (Heads OR Tails).  The logical or mathematical domain, as an independent structure, can therefore be done away with completely.  Instead a pseudo-domain must be specified by a set of logical or mathematical statements given by a dependence (or independence) relation attached to every quantifier.

For example:

∀x/((a or b or c) & (p → q))…

This means that instantiating x depends upon the individuals a, b and c (that is, x can only be a, b or c), and that x can only be instantiated once (p → q) has a truth value.  If ((p → q) → d) were in the pseudo-domain, then x could be instantiated to d whenever (p → q) was true; if ¬d were implied, then it would be impossible to instantiate x to d, even if d was implied in some other part of the pseudo-domain.  Hence the pseudo-domain is the result of a logical process.
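The instantiation-as-logical-process idea can be sketched as follows. This is a toy encoding of my own: directly available individuals as a set of atoms, implications as (antecedent-truth-value, consequent) pairs, and negated individuals as a separate set.

```python
def can_instantiate(candidate, atoms, implications, negated):
    """Decide whether a variable may be instantiated to `candidate` given a
    pseudo-domain, here encoded as: a set of directly available individuals,
    a list of (antecedent_is_true, consequent) implication pairs, and a set
    of negated individuals.  (Hypothetical encoding, my own.)"""
    if candidate in negated:      # ¬d blocks instantiation outright
        return False
    if candidate in atoms:        # d listed directly, e.g. in (a or b or c)
        return True
    # d may be reachable via an implication such as (p -> q) -> d,
    # provided the antecedent already has the value True
    return any(consequent == candidate and antecedent_is_true
               for antecedent_is_true, consequent in implications)

# x/((a or b or c) & ((p -> q) -> d)) with (p -> q) true:
print(can_instantiate("d", {"a", "b", "c"}, [(True, "d")], set()))  # True
# If the pseudo-domain implies ¬d, then d is blocked even though implied:
print(can_instantiate("d", {"a", "b", "c"}, [(True, "d")], {"d"}))  # False
```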

The benefit of this approach is that it better represents the changing state of epistemic access that a logical game player has at different times.  You can have a general domain for things that exist across all game players and times that would be added to all the quantifier dependencies (Platonism, if you will), but localized pseudo-domains for how the situation changes relative to each individual quantification.

Moreover, the domain has become part of the logical argument structure and does not have an independent existence, meaning fewer ontological denizens.  And, to answer the main question of this post, every domain is completely specified, both in content and structure.

I’m inclined to call this logic Domainless Independence Friendly logic, or DIF logic, but I really also like EIFL, like the French Tower: Epistemic Independence Friendly Logic.  Calling this logic epistemic emphasizes the relative epistemic access each player has during the logical game that comes with the elimination of the logical domain.

Posted in epistemology, game theory, independence friendly logic, logic, philosophy.

IF Logic and Cogito Ergo Sum

(∃x∃x) → ∃x

Descartes' Law

If something has informational dependence upon itself, then that thing exists.  For example, thinking that you are thinking is informationally self-dependent, and therefore a thinking thing (you) exists.

Posted in epistemology, independence friendly logic, logic.

Rewrite of Evolution

New theory of evolution!  Hooray!

Patched a bunch of things together to make a nice story.  Fixed the little issue about fitness being circular.  Expanded natural selection to apply more generally.  Causal structure.  Epistemological foundations.  ooOoOO0Ooooooo.

And it’s good fun.  I swear.  Epistemology, history of physics, evolution… makes me happy.  You should really read it.

Download here. [pdf, 304kb]

Posted in biology, epistemology, evolution, fitness, General Relativity, measurement, philosophy, physics, Relativity, science.

Against Physics as Ontologically Basic

1.  Biology is epistemically independent of physics:

Let’s assume that biology is not epistemically independent of physics, i.e. to know any biology we must first know something about physics.  However, consider evolution as determined by natural selection and the struggle for survival.  We can know about the struggle for survival and natural selection without appealing to physics — just as Darwin did when he created the theory — and hence we can fundamentally understand at least some, if not most, of biology independent of physics.

2.  Physics supervenes on biology:

Whatever ability we have to comprehend is an evolved skill.  Therefore any physical understanding of the world, as an instance of general comprehension,  supervenes on the biology of this skill.

3.  Biology is just as fundamental as physics:

If the principles involved in biology and physics are epistemically independent, and each can be said to supervene on the other (physics on biology by the argument above, biology on physics by the usual physicalist claim), then neither has theoretical primordiality.

Therefore physics is not ontologically basic.


[This argument was inspired by a discussion over at It's Only a Theory started by Mohan Matthen.

And I want it to be known that I HATE SUPERVENIENCE.  Basically if you use supervenience regularly then you are a BAD PERSON.  The only good argument that uses supervenience is one that reduces the overall usage of the word:  it is my hope that the above argument will prevent people from saying that biology supervenes on physics.  For every argument in which I thought that using supervenience might prove useful, I found a much, much superior argument that did not make use of the term.  I know you always live to regret statements like this, but right now I don’t care.]

Posted in argumentation, biology, epistemology, evolution, ontology, philosophy, physics, science. Tagged with , , , .

Monty Redux

In the Monty Hall Problem a contestant is given a choice between one of three doors, with a fabulous prize behind only one door. After the initial door is selected the host, Monty Hall, opens one of the other doors that does not reveal a prize. Then the contestant is given the option to switch his or her choice to the remaining door, or stick with the original selection. The question is whether it is better to stick or switch.

The answer is that it is better to switch because the probability of winning after switching is two out of three, whereas sticking with the original selection leaves the contestant with the original winning probability of one out of three. Why?

The trick to understanding why this occurs is to view the situation not from the contestant’s viewpoint, but from Monty Hall’s. At the outset, from Monty’s point of view, the contestant has a one out of three chance of guessing the correct door. In the likely situation (two out of three) that the contestant chose wrongly, Monty then has to know where the prize is among the two remaining doors in order to open a door that does not reveal the prize. So Monty opens a door not revealing the prize and asks the contestant whether he or she would like to switch or not.

However, the contestant knows that in the likely (two out of three) situation that the initial choice was wrong, Monty had to know where the prize was in order to open the door that did not contain the prize. Since the contestant knows that Monty has to know where the prize is to make the correct choice, the contestant can (in this likely case) place him or herself in Monty’s shoes. At this point Monty knows that the remaining door is the one that contains the prize, and hence the contestant should switch.

If we consider the unlikely situation in which the contestant initially chose the door with the prize behind it, then this line of reasoning will not work. Imagine that Monty forgets the location of the prize every time the contestant guesses correctly. In this situation he can still open either of the remaining doors without ever ruining the game. From his perspective the location of the prize is unrelated to his actions; it played no part in his decision to open one door or another (he merely chose a door the contestant hadn’t).

So, in the one out of three case where the contestant initially selected the correct door, there is no way to deduce whether switching is beneficial based upon placing oneself in Monty's shoes: the situation where Monty has forgotten the prize's location is indistinguishable from a situation in which he has not forgotten. Without any way to further analyze the situation and tilt the odds to over one out of three, the contestant should always assume that he or she is in the previous, more likely, situation and take the opportunity to switch.¹
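A quick simulation bears out the two-out-of-three figure. This is a sketch; the `play` helper and the door numbering are my own.

```python
import random

def play(switch, trials=100_000):
    """Simulate the Monty Hall game and return the observed win rate."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        choice = random.randrange(3)
        # Monty opens a door that is neither the contestant's pick nor the prize
        opened = next(d for d in range(3) if d != choice and d != prize)
        if switch:
            # Switch to the one remaining closed door
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials

print(play(switch=True))   # about 2/3
print(play(switch=False))  # about 1/3
```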



¹ Imagine that the contestant has a guardian angel that will let the game run its course if the contestant switches doors, but, if the contestant sticks with the original door, will change the location of the prize so that the contestant wins four out of five times. Then the probability of winning while switching will stay at 2/3 but the probability of winning while sticking will be 4/5. If the contestant had some way of divining that this was happening, this would be a case in which further analysis would be of benefit.



Posted in epistemology, game theory, logic, philosophy.

A Rabbit in a Forest of Mushrooms

Today I was in a shop and a young mother came in with her stroller and a handbag with an image of a sleeping rabbit in a forest of mushrooms.  The rabbit had a thought bubble that read, “A rabbit in a forest of mushrooms.”

I told her I liked the bag… I don’t think she realized that it had reminded me of the last paragraph of Wittgenstein’s On Certainty:

676. “But even if in such cases I can’t be mistaken, isn’t it possible that I am drugged?” If I am and if the drug has taken away my consciousness, then I am not now really talking and thinking. I cannot seriously suppose that I am at this moment dreaming. Someone who, dreaming, says “I am dreaming”, even if he speaks audibly in doing so, is no more right than if he said in his dream “it is raining”, while it was in fact raining. Even if his dream were actually connected with the noise of the rain.

The rabbit had created a visible dream-thought bubble that had correctly identified his actual situation, though the rabbit was asleep.

Does the rabbit's dream-thought count as justified true belief?  It may well be justified, because the rabbit could be observing its surroundings within the dream (and those images could be connected to reality through memory); it is apparently true; and the rabbit believes it (according to the rules of thought-bubble attribution).  So the dream-thought of the rabbit seems to qualify as justified true belief, but I don't believe we normally count dream-thoughts as knowledge.

Posted in epistemology, philosophy, wittgenstein.

The Non-Reducibility & Scientific Explanation Problem

Q: What is a multiple star system?

A: More than one star in a non-reducible mutual relationship spinning around each other.

Q: How did it begin?

A: Well, I guess, the stars were out in space and at some point they became close in proximity.  Then their gravitations caused each other to alter their course and become intertwined.

Q: How did the gravitations cause the courses of the stars to become intertwined?  Gravity does one thing: it changes the shape of space-time; it does not intertwine things.

A: That seems right.  It is not only the gravities that cause this to happen.  It is both the trajectory and mass (gravity) of the stars in relation to each other that caused them to form a multiple star system.

Q: Saying that it is both the trajectories and the masses in relation to each other is not an answer.  That is what is in need of being explained.

A: You are asking the impossible.  I have already said that the relation is non-reducible.  I am not going to go back upon my word in order to reduce the relation into some other relation to explain it to you.  The best that can be done is to describe it as best we can.

Here is the problem: if you have a non-reducible relation (e.g., a 3-body problem or a logical mutual interdependence), then you cannot explain how it came to exist.  Explaining such things would mean that the relation was reducible.  But being unable to explain some scientific phenomenon violates the principle of science: we should be able to explain physical phenomena.  So either the relation is not non-reducible after all, or it must have been a preexisting condition going all the way back to the origin of the universe.  Either you have a contradiction or it is unexplainable by definition.

What can we do?  You can hold out for a solution to the 3-body problem or, alternatively, you can change what counts as explanation.  The latter option is the way to go, though I am not going into that now.

For now I just want to illustrate that this problem of non-reducibility and explanation is pervasive:

Q: What is a biological symbiotic relationship?

A: More than one organism living in a non-reducible relationship together.

Q: How did it begin?

A: Well, I guess, the organisms were out in nature and at some point they became close in proximity.  Then their features caused each other to alter their evolution and become intertwined.

Q: How did the features cause the courses of their evolution to become intertwined?  Physical features do one thing: they enable an organism to reproduce; they do not intertwine things.

A: That seems right.  It is not only the features that cause this to happen.  It is both the ecosystem and the features of the organisms in relation to each other that caused them to form a symbiosis.

Q: Saying that it is both the place the organisms are living in and their features in relation to each other is not an answer.  That is what is in need of being explained.

A: You are asking the impossible.  I have already said that the relation is non-reducible.  I am not going to go back upon my word in order to reduce the relation into some other relation to explain it to you.  The best that can be done is to describe it as best we can.

As you can see, I am drawing a parallel between a multiple-body problem and multiple organisms that live together.  As in the star example above, there is no way to explain the origins of organisms living together.  Even in the most basic case it is impossible.

Posted in biology, epistemology, evolution, independence friendly logic, ontology, philosophy, physics, science.

Of Duckrabbits and Identity

Of late I've become increasingly concerned with the meaning of identity.  When we say 'x = x,' we don't mean that the x on the left is exactly identical to the x on the right, because the x on the left is just that, on the left, and the x on the right is on the right, not the left.  Since equality would be useless without two different objects (try to imagine the use of a reflexive identity symbol, i.e., one indicating, of whatever object it applies to, that the object is identical with itself), there is something mysterious about the use of identity.

But what is the mystery?  It cannot have anything to do with the objects being declared identical: those objects are arbitrary relative to the particular topic being discussed.  For example, if I say 'the morning star = the evening star' then we are talking about planets, and if I say that '3 = y' then I am talking about numbers.  The identity sign is the same in both, even though the objects being discussed are rather different.

It is easy enough to believe that by paying attention to the different objects being declared identical we can know how to act (some sort of context principle *cringe*).  But this doesn’t address the question specifically: although we can know how to use the identity symbol in specific instances, this tells us nothing about how identity works or what it means.

Take a look at this:

[duckrabbit image] = [duckrabbit image]

The picture is the same save for location on the webpage.

———–

But what if we call the one on the left a duck and the one on the right a rabbit: what is different?  The features obviously don’t change, only the way we are seeing (perceiving? apprehending? looking at? interpreting?)  the two images.

(Triple bonus points to anyone who can look at the two pictures at once and see one as a duck and the other as a rabbit. Hint- it is easier for me to do it if I try to see the one on the left as a rabbit and the one on the right as a duck… focus on the mouths.)

In this example, as opposed to the others discussed above, a decision was required to be made – to see one picture one way and the other another way – before the differences even existed.  Now, in the above examples it appeared that there was a difference of knowledge: at one point we didn’t know that the evening star and morning star were one and the same, or that y was equal to 3.  This isn’t the case when looking at identical duckrabbit pictures because there is nothing about the two pictures that is different; the difference is entirely in the mind.

Let me make a suggestion about how to describe the phenomenon of being able to see one image two different ways: the image can be instantiated in two different ways, i.e. it has an associated universe with a population of two.  There are two possible descriptions associated with this image and until we make a decision about how to describe it, the image is like an uninstantiated formula.

Identity, then, is an indication that the two associated objects are things that can be generalized to the same formula.  The picture of the duck and the picture of the rabbit can be called identical because they both have a single general formula (the duckrabbit picture) that can be instantiated into either.  The identity symbol indicates that the two associated objects are two instantiations of the same general thing, be it a number, planet or image (but not objects in space-time because that would be self-contradictory… space-time and instantiation, a topic for another day).

How identity works can now be identified: it is to instantiate and generalize.  Consider the mystery of how we see the duckrabbit one way or the other: no one can tell you how you are able to see the image one way or the other.  However, you are able to instantiate the image in one way and then another, and recognize that both the duck and rabbit are shown by the same image.

Instantiation and generalization are skills, and the identity symbol between the two images above indicates that you have to use those skills to generalize both to one formula.  Most of the time it is non-trivial to instantiate or generalize in order to show two things (formulas) to be equal.  In the case of the duckrabbit it is trivial because the work went into the instantiation process (to see the images one way or the other); in the other examples the situation is reversed, such that we had the instantiations but not the general formula.  In all cases, though, only when we can go back and forth between different instantiations and a single generalization do we claim two things identical.

Posted in epistemology, metaphysics, ontology, philosophy, wittgenstein.

Argument Structure

Basic argument structure goes like this:

  1. Premise 1
  2. Premise 2
  ———————–
  3. Conclusion

Knowing how to argue is great, except when someone you disagree with is proving things you don’t like.  In that case you have to know how to break your opponent’s argument or provide an argument that they cannot break.

The first thing most people do to break an argument is attack the premises (assuming no fallacies are present).  If you can cast doubt on the truth of premise 1, then your opponent never gets to line 3, and you avoid having to accept the conclusion.

Personally I think this sucks.  I hate arguing about the truth of premises because many times people have no idea what the truth is and hold unbelievably stupid positions.

G. E. Moore argued that if the conclusion is more certain than the premises, then you can flip the argument:

  1. Conclusion
  2. Premise 2
  ———————–
  3. Premise 1

Instead of arguing about the truth of the premises, this strategy pits the premises against the conclusion by arguing that while the premises imply the conclusion, the conclusion also implies the premises.  Hence there is a question about which should be used to prove the other, and, as long as this question remains, nothing is proved.

This leads to a kind of argument holism.  An argument must first be judged on the relative certainties of its premises and conclusion before the premises can even be considered to be used to derive the conclusion.

Personally I think this is great.  It is possible to just ignore whole arguments on the grounds that the person arguing hasn’t taken into account the relative certainties involved.  If you haven’t ensured that your premises are more certain than your conclusion, then you can’t expect anyone to accept your conclusion based upon those premises.

However this leads to a nasty problem.  If all arguments are subject to this sort of holism, then arguments can be reduced to their conclusions: if the whole argument is of equal certainty, i.e. the conclusion is just as certain as a premise, then there is no reason to bother with the premises.  If we just deal with conclusions, and everyone is certain of their own conclusions, then arguing is useless.

(In practice, of course, only mostly useless.  You can (try to) undermine someone’s argument by finding something more certain and incompatible with the conclusion in question (premises are always a good place to start looking).  For better or worse, though, even when people’s premises have been destroyed, all too often they still are certain of their conclusions.)

Moreover, if everyone is certain of their conclusions, then no conclusion is any more certain than another.  If everything has equal certainty, then nothing is certain.

How to get around this problem of equal certainty?

First let me mention that this is a strictly philosophical problem: in daily life we have greater certainty in some things than we do in others.  For instance, I trust certain people, and hence if they say something is true then I will be more certain of its truth than if someone else were to say the same thing.  So fair warning: what comes next is a philosophical solution to a philosophical problem.

If something and its opposite are equally certain, then, generally, there is nothing more that we can know about it.  For example, if we know that it is either raining or not raining, then we really don't know much about the weather.  This applies in all cases except for paradoxes.  In a paradox, something and its opposite imply each other.  Hence, in a paradox, there is only one thing, not a thing and its negation.

Most of the time paradoxes only show us things that cannot exist.  However, if what generated the paradox was the negation of something, then we can have certainty in that thing: its negation cannot exist on pain of paradox.

Therefore, to provide a rock-solid foundation for an argument, one must appeal to a paradox generated from the negation of the thing to be used as a premise.
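Schematically, the structure might be written like this (my reconstruction of the claim, with Q ∧ ¬Q standing in for the paradox):

```latex
\begin{align*}
1.\quad & \neg P \rightarrow (Q \wedge \neg Q) && \text{negating $P$ generates a paradox} \\
2.\quad & \neg (Q \wedge \neg Q)               && \text{no contradiction is true} \\
3.\quad & \therefore\ P                        && \text{modus tollens on 1 and 2, then double negation}
\end{align*}
```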

As far as I can tell, this is the only argument structure that yields absolutely certain results.  All other argument styles are subject to questions about the truth of the premises and the legitimacy of using those premises (even if true) to prove a particular conclusion.

Posted in argumentation, epistemology, logic, philosophy.

What are Quantifiers?

What are quantifiers?  Quantifiers have traditionally been thought of as things that 'range over' a set of objects.  For example, if I say

There are people with blue eyes

this statement can be represented as (with the domain restricted to people):

∃x(Bx).

This statement says that there is at least one person with property B, blue eyes.  So the '∃x' is doing the work of looking at the people in the domain (all people) and picking out one with blue eyes.  Without this '∃x' we would just have Bx, 'x has blue eyes.'

This concept of 'ranging over' and selecting an individual with a specific property out of the whole group works in the vast majority of applications.  However, I've pointed out a few instances in which it makes no sense to think of the domain as a predetermined group of objects, such as in natural language and relativistic situations.  In these cases the domain cannot be defined until something about the people involved is known, if at all; people may have a stock set of responses to questions but can also make new ones up.

So, since the problem resides with a static domain being linked to specific people, I suggest that we find a way to link quantifiers to those people.  This means that if two people are playing a logic game, each person will have their own quantifiers linked to their own domain.  The domains will be associated with the knowledge (or other relevant property) of the people playing the game.

We could index individual quantifiers to show which domain they belong to, but game theory already has a mechanism for showing which player is making a move: negation.  When a negation is reached in a logic game, it signals that it is the other player's turn to make a move.  I suggest negation should also signal a change in domains, so as to mirror the other player's knowledge.

Using negation to switch the domain that the quantifiers reference is a more realistic, natural treatment of logic: when two people are playing a game, one may know certain things to exist that the other does not.  Using one domain is an unrealistic view of the world, because it is only in special instances that two people believe the exact same objects to exist.  Of course there needs to be much overlap for two people to be playing the same game, but having individual domains to represent individual intelligences makes for a more realistic model of reality.

Now that each player in a game has his or her own domain, what is the activity of the quantifier?  It still seems to be ranging over a domain, even if the domain is separate, so the problem raised above has not yet been dealt with.

Besides knowing different things, people think differently too.  The different ways people deal with situations can be described as unique strategies.  Between the strategies people have and their knowledge we have an approximate representation of a person playing a logic game.

If we now consider how quantifiers are used in logic games, whenever we encounter one we have to choose an element of the domain according to a strategy.  This strategy is a set of instructions that will yield a specified result, and it is separate from the domain.  So quantifiers are calls to use a strategy as informed by your domain, your knowledge.  They do not 'range over' the domain; it is the strategies a person uses that take the domain and game (perhaps "game-state" is more accurate at this point) as inputs and return an individual.

The main problem mentioned above can now be addressed: instead of predetermining sets of objects in domains, what we need to predetermine are the players in the game.  The players may be defined by a domain of objects and the strategies that will be used to play the game, but this only becomes relevant when a quantifier is reached in the game.  Specifying the players is sufficient because each brings his or her own domain and strategies to the game, so nothing is lost; and the domain and strategies do not have to be predefined, because they are first called upon within the game, not before.
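That picture can be sketched in code. This is a toy model of my own: `make_player`, the particular strategies, and the game-state dictionary are all illustrative, not standard game-theoretic machinery.

```python
def make_player(domain, strategy):
    """Bundle a private domain with a strategy.  A quantifier move is then a
    call to the strategy, which takes the player's domain and the current
    game state and returns an individual.  (Hypothetical encoding.)"""
    def quantify(game_state):
        return strategy(domain, game_state)
    return quantify

# Two players with different private domains and simple strategies:
verifier = make_player({"a", "b", "c"}, lambda dom, state: min(dom - state["used"]))
falsifier = make_player({"b", "c", "d"}, lambda dom, state: max(dom - state["used"]))

state = {"used": set()}
x = verifier(state)      # an existential move: the verifier picks from her own domain
state["used"].add(x)
y = falsifier(state)     # after a negation, the falsifier moves from his own domain
print(x, y)              # a d
```

Nothing here is fixed before the game except the players themselves; each domain is consulted only when a quantifier move is actually made.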

I don’t expect this discussion to cause major revisions to the way people go about practicing logic, but I do hope that it provides a more natural way to think about what is going on when dealing with quantifiers and domains, especially when dealing with relativistic or natural language situations.

Posted in epistemology, game theory, logic, philosophy.