Tag Archives: philosophy

A Counterexample to Skepticism

The statement, “Either something happened or something didn’t happen,” is immune to skepticism.

If a skeptic tries to doubt it, then something has happened, making the statement true. If no one doubts it and nothing happened, then the statement is again true. Therefore you may have absolute certainty that something has or has not happened.

Moreover, this statement has its uses: I can imagine mothers all over the country trying to impress upon their teenagers to refrain from using the word ‘like’. “Either something happened or something didn’t happen. Nothing ‘like happened’.”

Posted in epistemology, philosophy.

A note on epistemology

Justified true belief does not yield knowledge, and everyone should know this by now. Beyond Gettier’s argument, there is this tack I heard from Jaakko Hintikka:

You may believe something, fine, and have whatever justifications you wish. But how do you know the thing is true?

The point he was making was that far beyond the issue of problems in having the right sort of justifications is the problem of having truth as well. Whenever the Justified-true-belief scheme is used for knowledge the truth of the thing in question is whitewashed over: all the focus is put on the justification and the truth is assumed to exist separately.

For example, if I make a claim P, then I clearly believe P; I will need to give justifications x, y, z, etc.; and P needs to be true for me to count P as part of my knowledge. The first two conditions are easy enough for me to demonstrate according to some standards, even if skepticism is still an issue. However, neither I nor anyone else has any ability to demonstrate the truth of P over and above whatever I have given as my justification. Therefore Justified-true-belief reduces to Justified-belief, which no one accepts as knowledge.

Between this argument and Gettier, I see the Justified-true-belief scheme of knowledge as beyond saving. To recover some sense of knowledge, we can focus on this idea:

If you know something, then it is not possible to be mistaken.

There are two ways of dealing with this conditional. First, you can make your definition of what it is to know something always correspond with whatever you cannot be mistaken about. Besides being ad hoc, this sliding scale for knowledge does not correspond very well with what we generally take to be knowledge.

Secondly, we can make what it is not possible to be mistaken about correspond to our knowledge. Although you have already called foul, hear me out. If you were to find out certain things were wrong, you might start to doubt your own sanity. For example, if you were to find out all the basic things you ‘know’ were wrong – there is no such place as the United States, water is not composed of oxygen and hydrogen, subjects and verbs are one and the same, you are currently not reading, etc. – you would have reason to worry (at least I would).

Therefore I suggest that knowledge is composed of those things that, if they were to be false, would leave us unable to claim we were sane. This definition makes a distinction between things we can be mistaken about and things we cannot be mistaken about. To be mistaken about this second type of thing would entail an unacceptable consequence: if you are insane then you cannot claim to have knowledge.

Is this ad hoc, as above? No, because the definition of what would classify you as insane does not refer to knowledge specifically. For example take the statement, “If x, y and z are false then I am crazy.” No mention of knowledge whatsoever. Therefore this definition is not ad hoc.

Does this definition of knowledge correspond to our intuitions? Very much so: it is based specifically upon the everyday experiences we have and our most established theories of the world.

What about skepticism: can’t we always be mistaken? The skeptic here is asking us to imagine the unimaginable. If we do as the skeptic asks, then we would be required to imagine ourselves to be insane and tell the skeptic what we think as insane people. I can’t do this; I don’t even have a guess as to how to go about trying to do this.

In the end you are wagering your sanity in order to have a claim to knowledge. However, there is no danger in this bet because you hold all the cards: you know what you can imagine to be different. Therefore you gain a theory of knowledge and lose nothing.

Posted in argumentation, epistemology, mind, philosophy, wittgenstein.

Demise, the Fallen and Annihilation

In Being and Time Heidegger makes a distinction between death and demise: death is the ending of Da-sein, or Being, and demise is physical perishing. I think this is a good distinction and since I break up ontology into 3 sorts of things – commitments, objects & descriptions – I will have three ways to die:

  1. Fallen: the perishing of all commitments of a living person.
  2. Demise: the perishing of physical attributes of a living person (traditional death).
  3. Annihilation: the perishing of all descriptions that a person has made.

Now Heidegger’s use of death was meant to be a fundamental orientation that Da-sein ‘has’ towards its own end (those are his quotes around has, not mine; see p. 247 of B&T, p. 229 of Stambaugh) and demise was as above. Hence death and demise are somewhat separate: demise is the physical end, and death is the way we are oriented to the end of being.

My view is that demise is one kind, a subset, of overall metaphysical death. I am less concerned here with the existential questions about death (though these are important) and more concerned with the ontological relationship between demise and other sorts of perishing. What follows is the insight separating overall metaphysical death from the three particular ways of perishing.

I’m using fallen in a (only somewhat) technical sense to mean the loss of all commitments. If you lose all capability to have commitments, then you have fallen, almost as in ‘fallen off the map.’ “Gone” is similar: you may not be physically dead, but if you are gone (e.g. to some foreign place never to return) you are dead to those with whom you had made commitments. Comatose, but without physical symptoms, is another example. Your body may still live and, for all anyone knows, your mind may be as sharp as ever, but you are incapable of keeping commitments and are therefore ‘dead to the world’.

Demise is death as traditionally defined: when you have met your demise your body is destroyed. Of course there may be some afterlife in which you may keep your commitments (think Ghost, the movie) or your descriptions of the world may continue (Plato will live forever through his writings – I wonder if someone, somewhere is discussing Plato at every instant of every day), but you’re physically dead as a doornail after your demise.

Annihilation is the destruction of a person’s descriptions of the world. Describing things is perhaps the most basic of human accomplishments – we reward babies (and philosophers) handsomely for accurate descriptions – and if this is taken away from a person, then that person will not have achieved even the simplest of human accomplishments. Annihilating someone is making the world forget that he or she is a person: it is to become nameless. Perhaps the way to think of it is as in Kafka’s Metamorphosis: Gregor is changed into a vermin/bug that has a working body and (for a while) can fulfill some commitments, but eventually is unable to communicate how his/its world has changed. At this point any future that Gregor had has been annihilated: the thing he became could continue living, but its life would bear no resemblance to what was formerly Gregor. If all evidence of Gregor’s history were erased, even if the thing he turned into still lived, then Gregor would be completely annihilated.

So to completely metaphysically die, you need to be dead (traditional), gone and forgotten.

Posted in Heidegger, metaphysics, ontology, philosophy.

What is philosophy?

The question of what philosophy is always made me squirm. People would ask me what I do, I’d tell them, and then they would ask me what exactly it was that I do. But now I have an answer.

A while back I heard a quote attributed to Russell that went roughly:

Philosophy starts out with propositions that everyone would accept as true, and then ends up with propositions that no one would accept as true.

I thought this made philosophers sound like jerks, but there was something to it: we do end up in weird places for some reason. Here’s why:

Writing philosophy is like writing an instruction manual. You have some act or object or situation that you want to explain because it is hard to use or complicated or dangerous for some reason. So you set out to make a manual for the thing, starting from the most obvious and basic features. Now if you don’t know the thing perfectly, inside and out, you end up with bad instructions, regardless of where you started. Then when you follow the instructions to do something, or to understand your object, you become hopelessly lost. Both your instructions and whatever the instructions were for are completely inscrutable. But if the instructions are good, then you can do things that were impossible for you to do beforehand (program your VCR (or DVR), explain why mathematics is incomplete, that sort of thing). Philosophy is an attempt at writing instruction manuals for confusing things.

This answers the ontological questions of

  1. Whether or not philosophy is true: it is true if it accurately describes the phenomenon it is attempting to explain. However, since many times we are in the position of not knowing the phenomenon in question, philosophy is often of indeterminate truth.
  2. Why philosophy is inherently obscure: who ever reads the manual? (I do by the way)
  3. How best to characterize the strange layouts of philosophical treatises, a la manuals: the beginning is packed with warnings about what is wrong and dangerous, then the basic, most common functions are listed, and the interesting and difficult features are buried in jargon somewhere towards the end.
  4. What are thought experiments: Thought experiments are to philosophy as visual aids/examples are to instruction manuals. They are not needed, but when you can connect the instructions to the actual objects you’re working with, everything becomes easier.

I’m sure this is somewhat silly but when someone presses me on what philosophy is, I’m telling them it’s pretty much writing instruction manuals for confusing stuff.

Posted in ontology, philosophy.

Dependence Logic vs. Independence Friendly Logic

I picked up Dependence Logic: A New Approach to Independence Friendly Logic by Jouko Väänänen. I figure I’ll write up a review when I am finished with the book, but there is one chief difference between Dependence Logic and Independence Friendly Logic that needs to be mentioned.

On pages 44-47 when describing the difference between Dependence Logic and Independence Friendly Logic Väänänen says,

The backslashed quantifier,

(∃xn\xi0,…,xim-1)φ,

introduced in ref. [20], with the intuitive meaning:

“there exists xn, depending only on xi0,…,xim-1, such that φ,”

The slashed quantifier,

(∃xn/xi0,…,xim-1)φ,

used in ref. [21] has the following intuitive meaning:

“there exists xn, independently of xi0,…,xim-1, such that φ,”

which we take to mean

“there exists xn, depending only on variables other than xi0,…,xim-1, such that φ,”

The backslashed quantifier notation is part of what Väänänen calls ‘Dependence Friendly Logic’, and is equivalent to the ‘Dependence Logic’ that the rest of the book expounds. This backslash notation makes the difference between Dependence (Friendly) Logic and Independence Friendly Logic clear by showing that the former logic takes the notion of dependence to be fundamental whereas the latter takes independence to be fundamental. Väänänen takes this to be an advantage because he says that Dependence Logic avoids making

one ha[ve] to decide whether “other variable” refers to other variables actually appearing in a formula φ, or to other variables in the domain…

However, this treatment misses an important philosophical difference between Independence Friendly Logic and Dependence Logic. Dependence Logic is fundamentally based upon Wilfrid Hodges’ work, ‘Compositional Semantics for a language of imperfect information’ in Logic Journal of the IGPL (5:4 1997) 539-563, in which Hodges lays out a compositional semantics for languages such as Independence Friendly Logic using sets of assignments, instead of individual assignments, to determine satisfaction (T or F). Väänänen infers that Independence Friendly Logic is just a bit unruly when it comes to specifying variables because he is working within a system that assumes sets of assignments are a useful and unproblematic way to determine satisfaction.

However, the unseen problem with using sets of assignments is that something is added by assuming the domain is a set. For example, let’s try to define a location and take the set of all the points in the universe. We immediately run into relativity: all locations are defined relative to each other and to the people trying to figure out where things are, i.e. there is no predetermined set of all the points in the universe. The issue is that the domain of potential assignments, the objects in the universe, may be dependent upon the person or people using them (the players of the semantic game in this case). If the domain is dependent upon the players, the set cannot be constructed until after the players have begun the game. Therefore, if we postulate that the domain is a set at the outset, then the players know something about the game that they are playing, namely that it does not depend upon them because it was predetermined.

Following this line of thought it seems possible to construct a game in which the domain {Abelard, Eloise} is such that Abelard and Eloise are the actual people playing the game and the formula is ‘Someone x lost the game by instantiating this formula’, such that whoever instantiated that formula would win the game according to the rules. But then the formula would not be satisfied, so that player would have lost, but then it would be satisfied: a paradox. It is easy enough to declare that the domain must be independent of the players, but again this signals something about the game being played to the players before the formula is revealed.

Lastly there is something to be said about using logic to represent natural language here too: if you consider the set of all possible responses to some question, you are not ever considering all possible responses, but all the possible responses you can think of at that time. Therefore if we are using game semantics and imperfect information to represent natural language, then it is a mistake to predetermine the domain of all possible responses separate from the people involved. Again, the domain being linked to the people involved is at odds with the domain being a predetermined set.

Long story short, there is a very good reason for not always using sets of assignments to determine satisfaction. Depending on the situation, a set may offer non-trivial information about a game or misconstrue the game being played. Independence Friendly Logic makes no assumptions about the type of game being played and is therefore of greater scope than logics that are based upon Hodges’ work. Of course one is free to use sets of assignments to determine satisfaction and derive set-theoretic results, but the compositionality gained comes at the price of limiting the types of games that can be played.

Posted in fun, game theory, independence friendly logic, internet, logic, philosophy, Relativity.

David Schrader on WNYC: The Brian Lehrer Show

The Brian Lehrer Show: Heavy Thinking (April 07, 2008)

Posted in internet, philosophy.

The Monty Hall Problem

[check out my more recent Monty Redux for, perhaps, a clearer exposition]

The Monty Hall Problem illustrates an unusual phenomenon of changing probabilities based upon someone else’s knowledge. On the game-show Let’s Make a Deal the host, Monty Hall, asks the contestant to choose one of three possibilities – Door One, Two or Three – with one door leading to a prize and the other two leading to goats. After the contestant selects a door, the host, who knows where the prize is, opens another door, one with a goat behind it. At this point the contestant is allowed to switch the previously selected door for the remaining (unopened) door.

Common intuition is that this choice does not present any advantage because the probability of selecting the correct door is set at 1/3 at the beginning. Each door has this 1 out of 3 chance of having a prize behind it, so changing which door you select has no effect on the outcome.

In hindsight, this intuition is wrong. If you initially selected the first goat and then switch when you get a chance, you win. If you selected the second goat and switch, you win. If you selected the prize and switch, you lose. Therefore if you switch you win 2 out of 3 times, whereas if you do not switch you win only 1/3 of the time.
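The enumeration above is easy to check empirically. Here is a quick simulation sketch in Python (the function name and structure are my own, just for illustration):

```python
import random

def play(switch, trials=100_000):
    """Simulate the standard Monty Hall game, where the host
    knowingly opens a goat door, and return the win rate."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)   # door hiding the prize
        choice = random.randrange(3)  # contestant's first pick
        # Host opens a door that is neither the pick nor the prize.
        opened = next(d for d in (0, 1, 2) if d != choice and d != prize)
        if switch:
            # Switch to the one remaining unopened door.
            choice = next(d for d in (0, 1, 2) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials

print(play(switch=True))   # close to 2/3
print(play(switch=False))  # close to 1/3
```

Running it shows the switching strategy winning about twice as often as staying, matching the case-by-case count above.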

So what has gone horribly wrong here:

  1. Why is almost everyone’s intuition faulty in this situation?
  2. How does switching doors make any difference?
  3. When did the 1/3 probability turn into a 2/3 probability?

At the beginning of the game you have a 2 out of 3 chance of losing. Likewise the game show has a 2 out of 3 chance of winning (not giving you a prize) at the beginning of the game. Both of these probabilities do not depend upon which door the prize is behind, but only upon the set-up of a prize behind only one of three doors. For instance, an outside service (not the game show) could have set everything up such that both you and the game show would be kept in the dark: there would still be 2 goats and a prize, but neither you nor the game show would know which door led to the prize.

Now imagine that it is the game show that is playing the game. The game show is trying to win by selecting a goat. From this perspective, whichever door was chosen is good: this door has a 2 out of 3 probability of being a winner (being a goat). Therefore when given the opportunity to change (after the outside service opens a door and shows a goat), there is no reason to do so.

Of course you, the contestant, are the one making the selection, and you do not want a goat. However, if you imagined yourself in the position of the game show at the beginning, as trying to select a goat, you would reasonably assume that, just as the game show did, you were successful in choosing a goat. When given the choice to switch, now that the other goat has been removed, it seemingly makes sense to change your selection.

In this case the easiest way to view the situation is in terms of how to lose, or by considering all the possible outcomes (as mentioned above). Though this is a guess, it seems that our first blush reaction to this problem is always to view it in terms of winning and this is the reason we do not immediately recognize the benefit in switching. We start out with a 1/3 chance of winning and switching doors doesn’t immediately seem to increase this percentage.

To answer how switching doors makes a difference we need to look more closely at the doors. The door that was initially selected has a 1 out of 3 chance of being a prize, and this does not change. If you were to play many times and ignore changing doors, then you would win 33.3% of the time. At the outset the other two doors each have the exact same chance of being a winner, 1 out of 3. So the other two doors combined have a 2 out of 3 chance of containing a winning door.

Now the game show changes the number of doors available from 3 to 2, with one door guaranteed to contain a prize. If you were presented this situation without knowledge of the previous process, then you would rightly put the chance of selecting the prize at 1 out of 2, 50%.

However, you know something about the setup: The door that was initially selected had a probability of having a prize behind it set at 1 out of 3. The thing behind the other door, though, has been selected from a stacked deck: Whatever is behind the door was selected from a group of objects with a 2 out of 3 chance of containing a prize (1/3 + 1/3). You know that the odds on this door are stacked in your favor because the game show knowingly reveals the goat: In the 2/3 case in which you have previously selected a goat, the prize is behind one of the other two doors. When the game-show reveals (and removes) a goat, it guarantees that the prize is behind the last door. Therefore switching doors at the end is equivalent to combining and selecting the probability associated with the two doors not initially selected.

If the game show did not knowingly reveal the goat, you would not be able to take advantage of the stacked deck. Imagine that you select the first door and then another door is opened randomly, revealing a goat. By randomly eliminating this door (and not looking behind the unselected doors) the door that was initially selected becomes unrelated to the present choice: Only by looking behind the unselected doors does the initial selection become fixed in reference to the other doors. Since no one looked behind the doors, some bored, but not malicious, demon could have come and switched whatever was behind the selected and remaining door and neither you nor the game-show would be able to tell. Therefore switching doors when a goat is randomly revealed provides no advantage because the initial selection cannot be related to the probable location of the prize.
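The difference a knowing host makes can also be simulated. In the sketch below (Python; the names are my own), a non-chosen door is opened at random, and rounds where the prize is accidentally revealed are discarded. Conditioned on seeing a goat, switching and staying each win about half the time, confirming that the advantage disappears:

```python
import random

def random_host(trials=200_000):
    """Variant where a non-chosen door is opened at random.
    Returns (switch win rate, stay win rate), conditioned on
    the opened door happening to show a goat."""
    switch_wins = stay_wins = valid = 0
    for _ in range(trials):
        prize = random.randrange(3)
        choice = random.randrange(3)
        # A door other than the contestant's is opened blindly.
        opened = random.choice([d for d in (0, 1, 2) if d != choice])
        if opened == prize:
            continue  # prize accidentally revealed; discard this round
        valid += 1
        other = next(d for d in (0, 1, 2) if d != choice and d != opened)
        switch_wins += (other == prize)
        stay_wins += (choice == prize)
    return switch_wins / valid, stay_wins / valid

print(random_host())  # both rates close to 1/2
```

The only change from the standard game is that the host no longer uses knowledge of the prize’s location, and that single change is what collapses the 2/3 advantage to even odds.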

Only when the contestant can fix the probable locations of the prize because the location of the prize is known by the game-show, is it possible to assign interdependent probabilities on the location of the prize and the previous selection made. The odds are then tilted in the contestant’s favor by switching away from the low probability initial selection to the door that has the combination of remaining probabilities.

The logic of this needs to be represented game-theoretically, with the different quantifiers representing different players of a game of imperfect information. The game would run* like this:

Domain={prize, goat, goat}

1. ∃x∃y∃z∀a/x,y,z∃b∀c/x,y,z(a=x & b=y & c=z) [initial setup]
2. ∃y∃z∀a/x,y,z∃b∀c/x,y,z(a=g & b=y & c=z) [Game Show]
3. ∃z∀a/x,y,z∃b∀c/x,y,z(a=g & b=g & c=z) [Game Show]
4. ∀a/x,y,z∃b∀c/x,y,z(a=g & b=g & c=p) [Game Show]
5. ∃b∀c/x,y,z(p=g & b=g & c=p) [Contestant]
6. ∀c/x,y,z(p=g & g=g & c=p) [Game Show]
7. ∀d∀c/x,y,z(d=g & g=g & c=p) [offer to switch]
8. ∀c/x,y,z(g=g & g=g & c=p) [Contestant]
9. (g=g & g=g & p=p)

Line 1 is the initial setup of the prize game: the goal is for the contestant to make his or her placement of the prize and goats match the game show’s placement. Whatever is on the left side of an = will be what the contestant thinks is behind a door and what is on the right of an = will be what the game show puts behind the door, such that each = represents a door. If the formula is satisfied then the contestant will have successfully guessed the location of the prize.

Lines 2, 3 and 4 represent the results of the Game Show placing the prize and goats. Line 5 is the result of the first move of the contestant, choosing where he or she thinks the prize is: the ‘a/x,y,z’ means that whatever is placed in spot a has to be done independently, i.e. without knowledge, of what x or y or z is. Then the game show reveals a goat behind one of the doors not selected by the contestant. Line 7 represents the choice that is given to the contestant to switch his or her initial placement of where the prize is. Line 8 is the important step: since the contestant does not know what is behind the doors (c/x,y,z), it looks as if there is no advantage to switching. However, the contestant does know that the game show, when choosing to reveal a goat in line 6, had to know what was behind every door. This means that c is dependent upon b, which depended upon x, y, and z. With this knowledge the contestant can figure out that there is an advantage to switching, because the selection of b in line 6 fixed the locations of the prize and goats and in doing so fixed the odds. Since the odds were initially stacked against the contestant, switching to the only remaining door flips the odds in the contestant’s favor, as is done in this example. Line 9 shows that all the contestant’s choices match up with what the game show has placed behind the doors, and hence she or he has won the prize.

* A better representation would require keeping the game show from placing no prize at all, by using a line like ‘x≠y or x≠z’. For graphical brevity I left it out.

Posted in game theory, independence friendly logic, logic, measurement, philosophy.

Psychopharmacological Enhancement

The only ways to enhance the mind are to learn or to evolve. Since evolution is out of our hands, all that is left is to learn.

Drugs that purport to offer psychopharmacological enhancement do not do what their name suggests: they may change certain psychological factors, but there is no drug that will make you smarter. That would be to eat from the tree of knowledge.

However, drugs may let you do things that you were unable to do previously, but this is nothing mysterious. If you do not breathe enough oxygen, you will not be able to run. If you get enough oxygen, you will be able to do more things. Now, is oxygen a performance-enhancing drug? It depends: the World Anti-Doping Agency recently ruled on oxygen tents (tents that vary the amount of oxygen inside) because using these tents can affect red blood cell counts. This example illustrates two things: first, that there is nothing inherently special about any particular chemical, be it oxygen or a newfangled drug, and secondly, that drugs only affect intermediary situations, not the final outcome.

The first point is that there is no moral dimension associated with the chemicals themselves. If it is possible to use the most fundamental of chemicals required for our survival in a way that could be seen as inappropriate, then any other chemical could be equally accused. If any chemical can be equally accused, then there is nothing unique about any individual chemical that makes its use morally wrong.

The second point is that drugs only have a specific range of effects. In the above example, the oxygen tents affect red blood cell count. An increased red blood cell count can be used to boost endurance, but this benefit will only appear under certain situations. The tents themselves do not increase endurance: they merely increase red blood cells. If a different drug was consumed to weaken the muscles, then the two ‘drugs’ would counteract each other and there would be no change in ability. Therefore it is not a drug that gives people an ability, such as endurance, but a drug may change how an ability is expressed.

The question is (and always was), “What do you want?” Since drugs have no moral dimension nor imbue the user with unknown (super-human) ability, the only issue is of fair play. Fair play in terms of other people and with your own goals. If you want to be able to lift heavy things, then you can use a machine, you can use drugs or you can work hard. Using a machine or drugs is to use someone else’s technology to assist whatever ability you have. If you use discipline to achieve the same results, then the technology that is being used is your own. Therefore if you are trying to play fair with others, then you have to ensure everyone has access to the same technology, be it machine or drug. If you are trying to achieve something yourself, then only you know whether or not using drugs makes a difference.

As we learn what is safe(r), we are going to have a fun future. Nothing changes our natural born ability or the hard work we have put in, but that has never stopped us from trying. Better drugs are on the way and this means options will be open to us that weren’t possible in the past. Good luck, be safe, have fun.

Posted in biology, ethics, evolution, fitness, mind, philosophy, technology.

Solved Philosophy

I was reading the philo-blogs and today (7 March) Richard Brown has taken issue with Richard Chappell’s Examples of Solved Philosophy. Brown holds that there is no such thing as solved philosophy (or problems are “only solved from a theoretical standpoint” and hence “involve substantial begging the question”), whereas Chappell happily provides examples that “are at least as well-established as most scientific results.”

Now there is something to be said for both sides: Brown is right when he says that all solutions are theory dependent, and Chappell is right when he says that we used to argue about certain things and now we don’t (don’t take this as my endorsement of his examples). However, this disagreement is just the two sides of one issue within philosophy of science: Thomas Kuhn’s scientific paradigms.

Thomas Kuhn stated that science evolves by scientific paradigm: science works under one major governing theory until it is overthrown by another. For instance, everyone worked within Newton’s version of the universe until Einstein came along, and now we work under Einstein’s relativity theory. Eventually it is possible that there will be a further paradigm shift away from relativity theory.

Now Brown, I think, makes the claim that philosophical problems (of the sort Chappell indicates) are not solved without question begging. Well, if Chappell is going for the sort of consensus that happens in science – which he looks like he is – then this is not a problem: All problems and solutions are determined within and by the overarching theoretical framework, the paradigm. This is to specifically say that there is no such thing as an answer to a question outside of some theoretical framework: some meta-theory always determines what sort of thing counts as a solution. Therefore Brown has conflated being part of a paradigm with begging the question. Begging the question involves assuming what you set out to prove, whereas being part of a paradigm merely assumes the general rules about what determines a solution to a problem when answering.

However, philosophy is not science, and Brown has a point when he says, “all we can mean by ‘solved’ is ‘generally agreed to be true by philosophers/philosopher X’”. Now the paradigm cuts the other way: philosophy does not work by paradigms, and hence there is no background framework on which Chappell can base his solved philosophy. Even if all the top philosophers of the day agree to an extent about a good number of issues, it only takes a Kant or a Wittgenstein to turn philosophy on its head. Even simple issues, what might be seen as obvious mistakes made only by laymen, can take on new significance. For example, many people believe that everyone’s perceptions of color are their own, that each of us can’t know what other people’s perceptions of color are like. Perhaps this is true, but personally I believe that it makes no sense to say that you have something if you logically exclude other people from having it (Philosophical Investigations #398), and therefore if you have color perceptions then I can have the same color perception. By no means should my view be taken as correct, but it should illustrate that nothing is so simple as to be considered solved if all it has is a consensus.

So what of solved philosophy? Is it all just us shifting our assumptions around? The logician De Morgan recognized that the logic of his day (essentially the logic of antiquity, little changed until the mid-1800s) was too weak to derive the statement “All heads of horses are heads of animals.” With the advent of modern logic, the statement was derivable. This is an example of solved philosophy: at a certain point we had a problem, were unable to do something, and then later we were able to do it. If we want to solve philosophical problems we have to first find problems, phenomena that no theory can explain, and then find a way to explain them using the unique tools available to philosophers. Taking down bad theories and clarifying issues is a worthwhile endeavor in which progress is made, but nothing is solved.

If anyone asks me about solved philosophy, I’ll tell them about the life and world-changing ideas that make philosophy amazing, not about all the bunk theories we had to go through to get there.

Posted in philosophy, science.

Computers, Intelligence and the Embodied Mind

This interview with Hubert Dreyfus (just the parts about computers: part 1, part 2. via Continental Philosophy) briefly outlines one of the major criticisms leveled against artificial intelligence: computers will never be intelligent because our intelligence is based upon our physical interactions in and with the world. Very briefly, our intelligence is fundamentally tied to our bodies because it is only through our bodies that we have any interaction with the world. If we separate our intelligence from the body, as in the case with computers, then whatever it is that the computer has, it is not intelligence, because intelligence only refers to how to bodily interact with the world.

As Dreyfus says this problem is attributed to a Merleau-Ponty extension of Heidegger and the only proposed solution is to embody computers by providing them with a full representation of world and body. I don’t think there is generally much faith in this solution; I certainly don’t have much faith in it.

However, this bodily criticism is a straw man. Computers have ‘bodies,’ they are definitely physical things in the world. But what of the physical interactions required for intelligence? Computers interact with the world: computers are affected by heat, moisture, dirt, vibration, etcetera. The only differences are the actual interactions that computers have as compared to humans: we experience humidity one way and they experience it differently. So yes, computers will have different interactions and hence they will never have the same intelligence that we have, but that does not imply that computers cannot have an embodied intelligence. It only means that computer embodied intelligence will be significantly different than our own intelligence. Therefore the above argument against computer intelligence only applies to those people who are trying to replicate perfect human intelligence and does nothing against people trying to create intelligence in computers.

For example, light-skinned and dark-skinned people have very slightly different physiologies. Now I see the above argument as saying that someone of different skin color cannot have the same sort of intelligence that you have because their interactions with the world are inherently different. Sure, everyone experiences things slightly differently due to having different bodies, but to claim that this creates incompatible intelligences is obviously wrong: No one on the face of the earth would be able to communicate with each other due to everyone being physically unique.  Computers may be physically different to a greater extent, but this does not impact intelligence.

The criticism of computer intelligence based upon the need for a body is no more than subtle techno-racism.

Posted in metaphysics, mind, philosophy, science, technology.