Tag Archives: game theory

Shaking the Tree

Life often puts us in situations in which no strategy suggests any further moves. We just don't know what to do next. In a game of perfect information, where each player knows all the previous moves, this can signal that the game is headed for a draw. Take chess: given that both sides know everything that has transpired and have no reason to believe that the opponent will make a mistake, there can come a time when both sides realize that there is no winning strategy for either player. A draw is then agreed upon.

The situation is not as simple in games of imperfect information. Let's assume some information is private, that some moves in the game are known only to a limited number of players. For instance, imagine you take over a game of chess in the middle of a match. The previous moves would be known to your opponent and the absent player, but not to you. Hence you do not know the strategies used to arrive at that point in the game, and **your opponent knows that you do not know**.

Assume we are in some such situation: we do not know all the previous moves and have no further strategic moves to make. That is to say, we are waiting, idling, or otherwise biding our time until something of significance happens. Formally, we are at an equilibrium.

A strategy to get out of this equilibrium is to "shake the tree" to see what "falls out". This involves making information public that was thought to be private. For instance, say you knew a damaging secret about someone in power, and that person thought they had successfully hidden said secret. Making that person believe that the secret was public knowledge could cause them to act in a way they otherwise would not, breaking the equilibrium.

How, though, to represent this formally? The move made in shaking the tree is to make information public that was believed to be private. To represent this in logic we need a mechanism that represents public and private information. I will use the forward slash notation of Independence Friendly Logic, /, to mean 'depends upon' and the back slash, \, to mean 'independent of.'

To represent a private strategy Q, used by party Y and based on a secret S that no other party z depends upon, we can say:

Secret Strategy) If, and only if, no one other than Y depends upon the Secret, then use Strategy Q
(∀YS) (∃z/S) ~(Y = z) ⇔ Q

To initiate 'shaking the tree' is to introduce a new dependency:

Tree Shaking) there is someone other than Y that depends on S
(∃z/S) ~(Y = z)

Tree Shaking causes party Y to change away from Strategy Q, since Strategy Q was predicated upon no one other than Y knowing the secret S. The change in strategy means that the players are no longer idling in equilibrium, which is the goal of shaking the tree.
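A minimal sketch in code may help. The names here (PartyY, shake_the_tree, the "fallback" strategy) are my own illustrative assumptions, not part of the formalism above; the point is only that Y's choice of Q hinges on a belief about who depends on S, so changing that belief changes the strategy.

```python
# A toy model of the tree-shaking move (illustrative names and structure only):
# party Y plays the secret strategy Q while it believes no one else depends on
# the secret S; making that dependency public forces Y off Q, breaking the
# equilibrium.

class PartyY:
    def __init__(self):
        self.believes_secret_private = True    # Y thinks only Y depends on S

    def choose_strategy(self):
        # Secret Strategy): use Q iff no one other than Y depends on the secret
        return "Q" if self.believes_secret_private else "fallback"

def shake_the_tree(y):
    # Tree Shaking): introduce a new dependency on S and make Y aware of it
    y.believes_secret_private = False

y = PartyY()
print(y.choose_strategy())   # "Q" -- the idle equilibrium
shake_the_tree(y)
print(y.choose_strategy())   # "fallback" -- Y changes strategy; equilibrium broken
```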

Posted in game theory, independence friendly logic, logic, philosophy.

An Introduction to the Game Theoretic Semantics view of Scientific Theory

What is a scientific theory?  In an abstract sense, a scientific theory is a group of statements about the world.  For instance, the Special Theory of Relativity has "The speed of light in a vacuum is invariant" as one of its core statements about the world.  This statement is scientific because, in part, it is meant to hold in a 'law-like' fashion: it holds across time, space and observer.

The Popperian view is that we have scientific theories and we test those theories with experiments.  This means that given a scientific theory, a set of scientific statements about phenomena, we can deductively generate predictions.  These predictions are further statements about the world.  If our experiments yield results that run counter to what the theory predicts — the experiments generate statements that contradict the predictions, the theory did not hold across time, space or observer — then the theory eventually becomes falsified.  Else the theory may be considered ‘true’ (or at least not falsified) and it lives to fight another day.

The game theoretic semantics (GTS) view is that truth is the existence of a winning strategy in a game.  In terms of the philosophy of science, this means that our theories are strategic games (of imperfect information) played between ourselves and Nature.  Each statement of a theory is a description of a certain way the world is, or could be.  An experiment is a certain set of moves — a strategy for setting up the world in a certain way — that yields predicted situations according to the statements of the theory.  If our theory is true and an experiment is run, then this means that there is no way for Nature to do anything other than yield the predicted situation.  Said slightly differently: truth of a scientific theory is knowing a guaranteed strategy for obtaining a predicted Natural outcome by performing experiments.  If the strategy is executed and the predicted situations do not obtain, then this means that Nature has found a way around our theory, our strategy.  Hence there is no guaranteed strategy for obtaining those predictions and the theory is not true.

An example:

Take Galileo’s famous experiment of dropping masses off the Tower of Pisa.  Galileo’s theory was that objects of different mass fall at equal rates, opposing the older Aristotelian view that objects of greater mass fall faster.

According to the Popperian view, Galileo inferred from his theory that if he dropped two balls of different mass off the tower at the same time, they would hit the ground at the same time.  When he executed the experiment, the balls did hit the ground at the same time, falsifying the Aristotelian theory and lending support to his own.

The GTS view is that dropping balls of unequal mass off a tower is a strategic game setup.  This experimental game setup is an instance of a strategy to force Nature to act in a certain way, namely to have the masses hit at the same time or not.  According to Galilean theory, when we are playing this game with Nature, Nature has no choice other than to force the two masses to hit the ground at the same time.  According to Aristotelian theory, when playing this game, Nature will force the more massive ball to hit the ground first.  History has shown that every time this game is played, the two masses hit the ground at the same time.  This means that there is a strategy to force Nature to act in the same way every time, that there is a ‘winning strategy’ for obtaining this outcome in this game with Nature.  Hence the Galilean theory is true: it got a win over the Aristotelian theory.
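To make the contrast concrete, here is a toy rendering of the experiment-as-game idea in code. The function names and the simple free-fall stand-in for Nature are my own assumptions, not part of GTS itself; the point is that a theory counts as true when the experimenter's strategy leaves Nature no way to produce anything but the predicted outcome.

```python
# A toy GTS-style game: we choose an experimental setup (two masses to drop),
# Nature replies with an outcome, and a theory "wins" if its prediction is
# forced every time the strategy is played.  Illustrative only.

def nature(mass_a, mass_b):
    """Nature's reply: fall times for two dropped masses (air resistance ignored)."""
    g, height = 9.8, 56.0                      # Tower of Pisa is roughly 56 m tall
    t = (2 * height / g) ** 0.5                # fall time does not depend on mass
    return t, t

def galilean_prediction(mass_a, mass_b):
    return "same time"

def aristotelian_prediction(mass_a, mass_b):
    return "heavier first"

def experiment_forces_prediction(mass_a, mass_b, prediction):
    t_a, t_b = nature(mass_a, mass_b)
    if abs(t_a - t_b) < 1e-9:
        outcome = "same time"
    else:
        outcome = "heavier first" if (t_a < t_b) == (mass_a > mass_b) else "lighter first"
    return outcome == prediction(mass_a, mass_b)

# The same strategy (drop unequal masses) played repeatedly: Nature can never
# deviate from the Galilean prediction, so that theory has a winning strategy.
trials = [(1.0, 10.0), (0.5, 100.0), (2.0, 2.0001)]
print(all(experiment_forces_prediction(a, b, galilean_prediction) for a, b in trials))      # True
print(all(experiment_forces_prediction(a, b, aristotelian_prediction) for a, b in trials))  # False
```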

Why you might want to consider doing things the GTS way:

GTS handles scientific practice in a relatively straightforward way.  Theories compete against Nature for results and against each other for explanatory power.  Everything is handled by the same underlying logic-game structure.

GTS is a powerful system.  It has applications in game theory, computer science, decision theory, communication and more.

If you are sympathetic to a Wittgensteinian language game view of the world, GTS is in the language game tradition.

More on GTS:

http://plato.stanford.edu/entries/logic-games/
https://en.wikipedia.org/wiki/Game_semantics

Posted in game theory, logic, philosophy, science.

Risky Kakonomics

Gloria Origgi writes:

This is an application of the theory of kakonomics, that is, the study of the rational preferences for lower-quality or mediocre outcomes, to the apparently weird results of Italian elections. The apparent irrationality of 30% of the electorate who decided to vote for Berlusconi again is explained as a perfectly rational strategy of maintaining a system of mediocre exchanges in which politicians don’t do what they have promised to do and citizens don’t pay the taxes and everybody is satisfied by the exchange. A mediocre government makes easier for mediocre citizens to do less than what they should do without feeling any breach of trust.

She argues that if you elect a crappy politician, then there is little chance of progress, which seems like a bad thing. People do this, though, because maintaining low political standards allows people to have low civic standards: if the politicians are corrupt, there is no reason to pay taxes. Likewise, the politicians who have been elected on the basis of being bad leaders have no incentive to go after tax cheats, the people who put them in office. Hence there is often a self-serving and self-maintaining aspect to making less than optimal decisions: by mutually selecting for low expectations, everyone cooperates in forgiving bad behavior.

This account assumes that bad behavior of some sort is to be expected. If someone all of a sudden starts doing the 'right thing', it will be a breach of trust and a violation of the social norm. There would be a disincentive to repeat such a transaction, because it challenges the stability of the assumed low-quality interaction and the implied forgiveness associated with it.

I like Origgi’s account of kakonomics, but I think there is something missing. The claim that localized ‘good interactions’ could threaten the status quo of bad behavior seems excessive. Criticizing someone who makes everyone else look bad does happen, but this only goes to show that the ‘right’ way of doing things is highly successful. It is the exception that proves the rule: only the people in power — those that can afford to misbehave — really benefit from maintaining the low status quo. Hence the public in general should not be as accepting of a low status quo as a social norm, though I am sure some do for exactly the reasons she stated.

This got me thinking that maybe there was another force at work here that would support a low status quo. When changing from one regime to another, it is not a simple switch from one set of outcomes to the other. There can be transitional instability, especially when dealing with governments, politics, economics, military, etc. If the transition between regimes is highly unstable (more so if things weren’t that stable to begin with) then there would be a disincentive to change: people won’t want to lose what they have, even if it is not optimal. Therefore risk associated with change can cause hyperbolic discounting of future returns, and make people prefer the status quo.
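A toy calculation can show how this works. The numbers below are entirely my own assumptions (they are not from Origgi or from any data); they just illustrate that once a risky, possibly chaotic transition and heavy discounting of the future are priced in, sticking with a mediocre status quo can come out ahead.

```python
# A toy expected-value comparison: stay with a mediocre but stable regime, or
# attempt a change that might succeed or might collapse into a costly transition.
# All numbers are illustrative assumptions.

def discounted_value(flow, discount, periods):
    """Present value of a constant per-period payoff under geometric discounting."""
    return sum(flow * discount**t for t in range(periods))

status_quo_flow = 1.0     # mediocre but steady payoff per period
better_flow     = 1.5     # payoff per period if the change succeeds
crisis_flow     = -2.0    # payoff per period during a failed, unstable transition
p_success       = 0.5     # chance the transition actually works
discount        = 0.7     # heavy discounting of the future under uncertainty
periods         = 10

stay   = discounted_value(status_quo_flow, discount, periods)
change = (p_success * discounted_value(better_flow, discount, periods)
          + (1 - p_success) * discounted_value(crisis_flow, discount, periods))

print(round(stay, 2), round(change, 2))   # here staying beats attempting change
```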

Adding high risk to the benefits of low standards could make a formidable combination. If there is a robust black market that pervades most of society and almost certain civil unrest given political change (throw in a heavy-handed police force, just for good measure), this could be a strong incentive not to challenge an incumbent government.

Posted in economics, game theory, mind, philosophy.

Metta World Peace, James Harden and Furbizia

Everyone is saying that Metta World Peace (the basketball player formerly known as Ron Artest) is crazy for elbowing James Harden in the face. I can’t say that I disagree, but I think there is more to the story.

Did no one else notice that James Harden walked right into World Peace while he was celebrating? Watch the video. Harden walks directly into MWP. He doesn’t do anything that would cause a foul, but if he were going to actually try to receive an inbound pass, he would have walked away from opposing players, not at them.

Instead he gets real close to an ecstatic opponent known for outbursts. I'm sure he didn't want to get elbowed in the head, but he did put himself in a position to get fouled, to take the charge as it were. If he had only been fouled, not elbowed, by a jumping MWP, then people might be talking about how Harden had cleverly gotten another foul on one of the Lakers' best players through strategic gamesmanship alone.

When asked if he would shake Harden’s hand, Metta World Peace said he wouldn’t. Everyone condemned him for this too, but I’m with MWP this time. If MWP sees Harden as having taken advantage of his celebration as a cheap way to get him in trouble, then it is understandable that he wouldn’t want to shake the man’s hand. This doesn’t excuse the elbow, but it does explain the attitude.

Posted in economics, game theory.

Trembling Hands

At least since Selten (1975) game theorists have considered that given a series of decisions there is some small probability that the person making the decisions will make a mistake and do something irrational, even if she knows the right thing to do.  This is called the trembling hand approach: although a person rationally knows the right (rational) thing to do, sometimes her hand trembles and she chooses incorrectly.

Therefore, given a game defined by a finite set of iterated decisions and payoffs in which all the rational moves are known by both players (think Tic Tac Toe), there is a 'perturbed' game in which the rational choices are not always made.  So consider playing a game of Tic Tac Toe:  Either player can always force a draw in Tic Tac Toe and hence prevent a loss.  However, it is easy enough to make a mistake (through inattentiveness, e.g.) and allow your opponent to win.

[Figure: Tic Tac Toe game tree showing possible decisions for the first two moves]

I believe this approach is a good start but does not go nearly far enough to incorporate probability into game theory.  The issue stems from the trembling hand approach assuming that irrational behavior occurs because of 'some unspecified psychological mechanism.'  This is fine, but then every trembling hand probability, every chance of making an irrational decision, is defined as a separate, independent probability.  This means that making an irrational decision is based on chance, as if we roll a die at every decision we make.

Perhaps some people have this problem and act irrationally at fixed probabilistic rates, but this neither seems realistic nor fits with the idea that a psychological mechanism is at work.  If some psychological mechanism were at work, then we would expect:

  1. The probabilities of making mistakes would not be independent of each other, since they have a common source.
  2. There would be a much higher chance of irrationality at times when the psychological issue manifests itself.

One example of what I have in mind is the effectiveness of gamesmanship in sport.  Gamesmanship is the art of getting into your opponent's head and causing them to make mistakes.  Consider this description of "furbizia" in Italian soccer by Andrea Tallarita:

Perhaps nothing has been more influential in determining the popular perception of the Italian game than furbizia, the art of guile… The word ‘furbizia’ itself means guile, cunning or astuteness. It refers to a method which is often (and admittedly) rather sly, a not particularly by-the-book approach to the performative, tactical and psychological part of the game. Core to furbizia is that it is executed by means of stratagems which are available to all players on the pitch, not only to one team. What are these stratagems? Here are a few: tactical fouls, taking free kicks before the goalkeeper has finished positioning himself, time-wasting, physical or verbal provocation and all related psychological games, arguably even diving… Anyone can provoke an adversary, but it takes real guile (real furbizia) to find the weakest links in the other team’s psychology, then wear them out and bite them until something or someone gives in – all without ever breaking a single rule in the book of football. (via)

If we try to explain an instance of someone making an irrational play in a game due to gamesmanship/furbizia according to the trembling hand model, we run into difficulty.  The decision tree according to the 'trembling hand' theory would have a series of decisions, each with a low, constant probability of an irrational mistake:

.01 — .01 — .01 — .01 — .01

Hence it cannot explain why someone would crack later in the game as opposed to earlier, since all the probabilities are equal.  Nor can it explain why people make irrational decisions at higher rates when playing against a crafty opponent than they would make otherwise. Therefore the trembling hand model cannot explain the effectiveness of gamesmanship.

But the decision tree given linked, non-independent probabilities might have the chance of an irrational decision given by:

.01 — .05 — .1 — .17 — .25

This model has an increasing chance of irrational action.  As time progresses, it becomes increasingly likely that an irrational choice will occur due to the gamesmanship of the opponent.

I’ll refer to this model generally as induced irrationality.  Induced irrationality occurs when the chance of making a rational decision decreases due to some factor, or when the chances of making irrational decisions over time change in concert, or both.
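A small simulation can make the difference between the two pictures visible. The specific numbers are the illustrative ones given above plus my own simulation scaffolding, not anything from Selten; the contrast is that constant, independent error probabilities spread mistakes evenly over the game, while linked, increasing probabilities concentrate them late, which is what gamesmanship needs.

```python
# A sketch comparing the trembling-hand picture (independent, constant error
# probability at every decision) with induced irrationality (linked, growing
# error probability).  Probabilities are the illustrative values used above.

import random

trembling_hand = [0.01, 0.01, 0.01, 0.01, 0.01]   # independent and constant
induced        = [0.01, 0.05, 0.10, 0.17, 0.25]   # linked and increasing

def first_mistake_distribution(error_probs, trials=100_000):
    """Fraction of games whose first irrational decision happens at each step
    (the final bucket counts games with no mistake at all)."""
    counts = [0] * (len(error_probs) + 1)
    for _ in range(trials):
        for i, p in enumerate(error_probs):
            if random.random() < p:
                counts[i] += 1
                break
        else:
            counts[-1] += 1
    return [round(c / trials, 3) for c in counts]

print(first_mistake_distribution(trembling_hand))  # mistakes rare, spread evenly
print(first_mistake_distribution(induced))         # mistakes cluster late in the game
```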

Other phenomena follow this pattern.  Bullying comes to mind: it is similar to gamesmanship in its breaking or bending of ‘rules’ over time to get in someone’s head and thence get them to do things they would rather not do.  The bullied will act irrationally in the presence of the bully and potentially more so as the bullying continues, perhaps even leading to “snapping”— doing something seriously irrational.

Phobias are also similar: for whatever reason a person has a phobia, and given the presence of that object or situation, the otherwise rational person will make different decisions.

Moreover this may have something to do with the Gambler's Fallacy:  By making a gambler associate a pattern with some random act, such as by showing the gambler all the recent values of a roulette wheel in order to convince him that the wheel is likely to land on red (or losing a few bets to a shill in 3 card monte, or letting him see a pattern in the stock market, etc.), the casino has planted a belief in the gambler.  Hence, as time goes on and red is not landed upon, the gambler increasingly thinks it ever more likely that red will hit (even though it has the same chance it always did). Hence the gambler will likely bet more later — more irrationally —  as he expects red to be increasingly likely to hit.

Hence, though trembling hands may be a factor in irrational decision making, they do not seem to be the only possibility, or even the most significant one, in a number of interesting cases.

—————————————

Selten, R. (1975). ‘Re-examination of the Perfectness Concept for Equilibrium Points in Extensive Games.’ International Journal of Game Theory, 4: 22–55.

My brother beat the Tic Tac Toe playing chicken when the Chinatown Fair Arcade (NYC) still operated.  I assume that there was a computer choosing the game moves and it happened to glitch when my brother was playing: though the machine claimed it won, if you looked at the Xs and Os, my brother had won.  We asked the manager for our promised bag of fortune cookies.  He said he didn’t actually have a bag since the chicken wasn’t ever supposed to lose.

Posted in economics, game theory.

Wittgenstein and Sun Tzu (on throwing the ladder away)

Wittgenstein, Tractatus Logico-Philosophicus #6.54

My Propositions serve as elucidations in the following way: anyone who understands me eventually recognizes them as nonsensical, when he has used them — as steps — to climb beyond them.  (He must, so to speak, throw away the ladder after he has climbed up it.)

He must overcome these propositions, and then he will see the world aright.

Sun Tzu, The Art of War, Chapter XI #38

At the critical moment, the leader of an army acts like one who has climbed up a height and then kicks away the ladder behind him.  He carries his men deep into hostile territory before he shows his hand.

I haven’t heard or seen too many uses of the concept of “throwing away the ladder.”  It seems interesting, though coincidental, that it shows up in these two places.

Wittgenstein is discussing the end of philosophy, how once you understand his statements in the Tractatus, you will understand how to move beyond thinking in those terms.  And then everything will be solved.

Sun Tzu, on the other hand, is discussing how a leader can get the most out of those under her command by preventing retreat.  The famous examples are of Hsiang Yu, and later Cortez, who burnt their ships behind them to prevent mutiny and ensure that their troops would fight as if their lives depended upon it (because they did).

Sun Tzu and Wittgenstein may be two of the most commented-upon authors of all time.  However, I don't think either could have had the other's meaning in mind in these passages, or at least I've never seen any commentary to that effect.  Still, this does not mean there is nothing to be learned:

For Wittgenstein, the recognition of the nonsensical is what is doing the work.  His words are nonsensical, and the realization of this is what allows you to move beyond them, to something better (says he).  So by doing as he says, by recognizing his words as nonsensical, your retreat is prevented, because no one, save a madman, would willingly return to a nonsensical philosophy when a better one exists.  By climbing the ladder, you also discard it.

Compare this to Philosophical Investigations #309:

What is the aim in philosophy? — To shew the fly the way out of the fly-bottle.

The fly-bottle, supposedly a one-way process, is what Wittgenstein is trying to walk back: in the Philosophical Investigations he's trying to climb back down the discarded ladder.

Posted in game theory, random idiocy, wittgenstein.

Rock Paper Scissors

Rock Paper Scissors is a game in which 2 players each choose one of three options: rock, paper or scissors.  Then the players simultaneously reveal their choices.  Rock beats scissors but loses to paper (rock smashes scissors); paper beats rock but loses to scissors (paper covers rock); scissors beats paper but loses to rock (scissors cut paper).  This cyclical payoff scheme (Rock > Scissors, Scissors > Paper, Paper > Rock) can be represented by this rubric:

                            Child 2
                    rock     paper    scissors
Child 1   rock      0,0      -1,1     1,-1
          paper     1,-1     0,0      -1,1
          scissors  -1,1     1,-1     0,0

(ref: Shor, Mikhael, "Rock Paper Scissors," Dictionary of Game Theory Terms, GameTheory.net, <http://www.gametheory.net/dictionary/Games/RockPaperScissors.html>, accessed 22 September 2010)

However, if we want to describe the game of Rock Paper Scissors – not just the payoff scheme – how are we to do it?

Ordinary logics have no mechanism for representing simultaneous play.  Therefore Rock Paper Scissors is problematic because there is no way to codify the simultaneous revelation of the players’ choices.

However, let’s treat the simultaneous revelation of the players’ choices as a device to prevent one player from knowing the choice of the other.  If one player were to know the choice of the other, then that player would always have a winning strategy by selecting the option that beats the opponent’s selection.  For example, if Player 1 knew (with absolute certainty) that Player 2 was going to play rock, then Player 1 would play paper, and similarly for the other options.  Since certain knowledge of the opponent’s play trivializes and ruins the game, it is this knowledge that must be prevented.

Knowledge – or lack thereof – of moves can be represented within certain logics.  Ordinarily all previous moves within a logic game are known, but if we declare certain moves to be independent of others, then those moves can be treated as unknown.  This can be done in Independence Friendly Logic, which allows explicit dependence relations to be stated.

So, let's assume our 2 players, Abelard (∀) and Eloise (∃), each decide which of the three options he or she will play out of the Domain {r, p, s}.  These decisions are made without knowledge of what the other has chosen, i.e. independently of each other.

∀x ∃y/∀x

This means that Abelard chooses a value for x first and then Eloise chooses a value for y.  The /∀x next to y means that the choice of y is made independently from, without knowledge of the value of, x.

R-P-S: ∀x ∃y/∀x (Vxy)

The decisions are then evaluated according to V, which is some encoding of the above rubric like this:

V: x=y → R-P-S &
x=r & y=s → T &
x=r & y=p → F &
x=p & y=r → T &
x=p & y=s → F &
x=s & y=p → T &
x=s & y=r → F

T means Abelard wins; F means Eloise wins.  R-P-S means play more Rock Paper Scissors!
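A short program can play out the game just described. The strategy functions are my own stand-ins (the logic above does not fix how a value is chosen); the important part is that Eloise's choice is made without access to x, which is exactly what the /∀x slash encodes.

```python
# A sketch of the semantic game for R-P-S: Abelard (∀) picks x, Eloise (∃)
# picks y independently of x, and V decides the result.  Strategy functions
# here are illustrative stand-ins.

import random

BEATS = {"r": "s", "p": "r", "s": "p"}    # rock smashes scissors, etc.

def V(x, y):
    """True = Abelard wins, False = Eloise wins, None = replay (R-P-S again)."""
    if x == y:
        return None
    return BEATS[x] == y

def play(abelard_strategy, eloise_strategy):
    x = abelard_strategy()
    y = eloise_strategy()     # chosen independently of x: Eloise never sees it
    return V(x, y)

uniform = lambda: random.choice("rps")

# Because y is independent of x, neither player can guarantee a win; if Eloise
# could see x, she would win every round by playing its counter.
results = [play(uniform, uniform) for _ in range(10_000)]
print(results.count(True), results.count(False), results.count(None))
```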

Johan van Benthem, Sujata Ghosh and Fenrong Liu put together a sophisticated and generalized logic for concurrent action:
http://www.illc.uva.nl/Publications/ResearchReports/PP-2007-26.text-Jun-2007.pdf

Posted in game theory, independence friendly logic, logic, philosophy.

Sexual Reproduction

Say you are a single-celled organism.  To reproduce you have to double your size and then split yourself in half.  Repeat indefinitely.

Now say you are a single-celled organism that has the option to reproduce sexually.  To reproduce you need to increase yourself to 3/2 of your original size and find a similar mate.  Then you both contribute the extra 1/2 to the new organism and repeat indefinitely.

Asexual reproduction requires you to double in size; sexual reproduction requires only growth to 3/2 of your original size.  Therefore the turn-around time for sexual reproduction is inherently shorter than for asexual reproduction (assuming there are viable mates readily available).

Is there a selective benefit to a shorter turn around time for reproduction?  If the species must constantly be adapting to a changing environment (that would be everyone), then having a higher rate at which new mutations (and thence adaptations) are introduced into the population is critical.

Secondly, given that there is enough food but it takes time to collect, I count more offspring for sexual reproduction:

[Figure: Sexual replication vs. asexual splitting]

In sexual reproduction, there is an additional child from the first generation of children (as compared to asexual splitting) created in the same amount of time: At the +50% mark #1 & #2 mate to create #5, and #3 & #4 mate to create #6.  Then, at the 100% mark (or plus an additional 50%) #1 & #2 mate to create #7, #3 & #4 mate to create #8, and, at the same time, the initial children #5 & #6 mate to create #9.  #9 is also one generation ahead of the offspring of asexual replication.
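Here is the same count written as a tiny simulation, using my own encoding of the assumptions above (time measured in food gathered as a fraction of body size, pairs always available); it reproduces the 8 vs. 9 figure from the diagram.

```python
# A toy version of the counting argument.  Asexual organisms split after
# gathering food equal to 100% of body size; sexual pairs each gather 50% and
# jointly produce one offspring, so they reproduce every half period.

def asexual(pop, full_periods):
    for _ in range(full_periods):
        pop *= 2                       # every organism doubles, then splits
    return pop

def sexual(pop, full_periods):
    for _ in range(2 * full_periods):  # two 50%-growth rounds per full period
        pop += pop // 2                # every pair contributes one offspring
    return pop

# Starting from the four organisms in the figure: after one full period the
# asexual line has 8 organisms, the sexual line has 9 (#5 through #9 above).
print(asexual(4, 1), sexual(4, 1))     # 8 9
```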

Now, to be honest, I’m confused.  I don’t think that anything above is particularly complicated.  However, Wikipedia does not note this as a benefit of sexual reproduction.  It actually says that asexual reproduction is much faster.  This makes me think that I must have made a mistake or else someone would have added it.

The going theory appears to be that since every organism in an asexually reproducing species can produce offspring, there is twice the potential for offspring.  This completely ignores any struggle an organism might face that would prevent it from reproducing, or that work can be split with a mate, making it easier to reproduce.

My main assumptions are, among others, that there already is a significant population of organisms, the organisms are not too fussy about their mates (no significant waste of energy searching for a mate), energy/work is being split with the mate, and that the limiting factor has to do with gathering food.  I can't see how, if these (reasonable?) assumptions hold, sexual reproduction isn't the dominant, winning strategy.

Posted in biology, evolution, fitness, game theory, philosophy, science.

The Deal with ‘Deal or No Deal’

I just saw the hit game show 'Deal or No Deal'.  It wasn't the first time, but this episode had a contestant with folksiness to rival Palin, so I was entertained and kept watching.

But is there any gamesmanship to the ‘Deal or No Deal’ gameshow?  The short answer is: No.

The show begins with the contestant choosing a briefcase that contains a number that represents a real monetary amount.  The case is chosen from a group of 26 cases, with the monetary amounts ranging from a penny to a million dollars.  Recently, to up the suspense, the show has removed some of the lower amounts of money and replaced them with more million dollar cases.

The show I saw had 8 of the 26 cases carrying the million dollar value.  So when the contestant makes the initial selection, there is a slightly less than 1/3 chance of picking a million dollar case.  This case is then set aside.

The contestant then proceeds to pick other cases which are immediately opened, revealing the monetary amount they represent.  These cases are removed from the pool of cases.  After a few cases have been removed, the contestant is offered a sum of money to stop playing.  If many of the cases that have been removed were low in value, i.e. most of the million (and other high value) cases remain, then the offer will be closer to the high value cases.  If many of the high value cases have been removed, then the offer will be closer to the lower values.  Usually the value is somewhere in the middle.

These offers are made periodically when there are many cases remaining and are made after every case for the last few.  If you go all the way to the end, then you receive whatever value is in the case you initially selected.

If winning the big prize is the goal, however, all the offers are completely irrelevant.  At the outset the case the contestant chooses has a roughly 1/3 chance (8 in 26) of containing the big prize.  This doesn't change throughout the game.  Let me explain why:

The rest of the cases have approximately the same ratio of million-dollar values to non-million-dollar values, and the contestant opens them at random.  Therefore most of the time (probabilistically speaking, and whenever I watched) this ratio stays roughly constant all the way to the end of the game.  2 of the last 6 cases were million-dollar cases in the episode I just saw.

Of course the possibility exists that the contestant will choose all of the lower value cases such that only million dollar cases remain and hence the case he or she initially chose will necessarily be a million dollar case.

However, imagine this analogous situation.  Try to pick all the cards other than the Jacks, Queens, Kings and Aces out of a shuffled deck without looking.  What will happen is that a selection of cards will be chosen randomly, irrespective of value, leaving approximately the same ratio of high cards to low cards remaining (go try it if you don't believe me).  The chances of picking only the low values are very small.  Deal or No Deal has been on for years here in the USA and this has never happened.  The recent, and only, million-dollar winner still had to decide on the last remaining case.  So this part of the game has little ultimate impact upon knowing whether or not you have selected a million-dollar case.

Secondly, since the cases are opened randomly during the show, no Monty Hall-like insight can be gained as to whether or not a winning case was initially selected.  Therefore the initial probability of 1/3 remains unchanged throughout the show and all the song and dance of selecting and opening the cases is a red herring (though it is top notch song and dance provided by Mr. H. Mandel and models).
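A quick Monte Carlo check supports this. The parameters are taken from the episode described above (8 million-dollar cases among 26, roughly 20 cases opened before the end); the rest of the setup is my own simplification. On average, the blind reveals leave the mix of remaining cases looking just like the starting mix, so they tell you nothing systematic about the case in your hand.

```python
# A quick Monte Carlo sketch: with 8 million-dollar cases among 26 and cases
# opened blindly, the chance that the held case is a million-dollar case stays
# about 8/26, and the average ratio among the remaining cases stays there too.

import random

def simulate(trials=100_000, total=26, millions=8, to_open=20):
    held_million = 0
    remaining_ratio_sum = 0.0
    for _ in range(trials):
        cases = [True] * millions + [False] * (total - millions)
        random.shuffle(cases)
        held, rest = cases[0], cases[1:]
        random.shuffle(rest)
        remaining = rest[to_open:] + [held]     # cases still unopened at the end
        remaining_ratio_sum += sum(remaining) / len(remaining)
        held_million += held
    return held_million / trials, remaining_ratio_sum / trials

chance_held, avg_ratio = simulate()
print(round(chance_held, 3), round(avg_ratio, 3))   # both around 8/26 ≈ 0.308
```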

This leaves the contestant in the position of deciding whether or not to accept the offer made to stop playing part way through the game without any new information.  Since the ratio of remaining monetary values remains somewhat constant, the offer made to buy the contestant out of playing should remain somewhat stable for most of the game.  It appears however, according to Wikipedia, that the initial offers are kept artificially low to build suspense, but at the end the offers are where the mathematicians say they should be.

The decision then comes down to how badly the contestant wants/needs the money.  If the money offered to stop playing becomes large enough to make, in the contestant's mind, a significant difference, he or she will likely take the money rather than take the roughly 2/3 chance of winning significantly less.  This is what happened during the episode today: after it was made known late in the game that a sponsor was going to make a matching donation to a national charity the lady supported, she became too afraid of losing the large amount of money already on offer, even though she said she wanted to go till the end.

In the end, the deal with ‘Deal or No Deal’ is that it is a great deal for those who get to play.  However, it is not much of a game.  The only trick is to get yourself on the show and after that how much you take home is up to luck.

Posted in game theory, logic.

What are Quantifiers?

What are quantifiers?  Quantifiers have been thought of as things that 'range over' a set of objects.  For example, if I say

There are people with blue eyes

this statement can be represented as (with the domain restricted to people):

∃x(Bx).

This statement says that there is at least one person with property B, blue eyes. So the '∃x' is doing the work of looking at the people in the domain (all people) and picking out one with blue eyes.  Without this '∃x' we would just have Bx, or x has blue eyes.

This concept of 'ranging over' and selecting an individual with a specific property out of the whole group works in the vast majority of applications.  However, I've pointed out a few instances in which it makes no sense to think of the domain as a predetermined group of objects, such as in natural language and relativistic situations.  In these cases the domain cannot be defined until something about the people involved is known, if at all; people may have a stock set of responses to questions but can also make new ones up.

So, since the problem resides with a static domain being linked to specific people, I suggest that we find a way to link quantifiers to those people.  This means that if two people are playing a logic game, each person will have their own quantifiers linked to their own domain.  The domains will be associated with the knowledge (or other relevant property) of the people playing the game.

We could index individual quantifiers to show which domain they belong to, but game theory has a mechanism for showing which player is making a move: negation.  When a negation is reached in a logic game, it signals that it is the other player's turn to make a move.  I suggest negation should also signal a change in domains, so as to mirror the other player's knowledge.

Using negation to switch the domain that the quantifiers reference is a more realistic, natural treatment of logic: when two people are playing a game, one may know certain things to exist that the other does not.  So using one domain is an unrealistic view of the world, because it is only in special instances that two people believe the exact same objects to exist in the world.  Of course there needs to be much overlap for two people to be playing the same game, but having individual domains to represent individual intelligences makes for a more realistic model of reality.

Now that each player in a game has his or her own domain, what is the activity of the quantifier?  It still seems to be ranging over a domain, even if the domain is separate, so the problem raised above has not yet been dealt with.

Besides knowing different things, people think differently too.  The different ways people deal with situations can be described as unique strategies.  Between the strategies people have and their knowledge we have an approximate representation of a person playing a logic game.

If we now consider how quantifiers are used in logic games, whenever we encounter one we have to choose an element of the domain according to a strategy.  This strategy is a set of instructions that will yield a specified result and is separate from the domain. So quantifiers are calls to use a strategy as informed by your domain, your knowledge.  They do not 'range over' the domain; it is the strategies a person uses that take the domain and game (perhaps "game-state" is more accurate at this point) as inputs and return an individual.

The main problem mentioned above can now be addressed: instead of predetermining sets of objects in domains, what we need to predetermine are the players in the game. The players may be defined by a domain of objects and the strategies that will be used to play the game, but this only becomes relevant when a quantifier is reached in the game.  Specifying the players is sufficient because each brings his or her own domain and strategies to the game, so nothing is lost, and the domain and strategies do not have to be predefined because they are first called upon within the game, not before.
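As a rough sketch of how this could look, here is a toy encoding in code. The names and data structures are my own (nothing here comes from a worked-out formal system); the point is only that the existential 'move' is a call to a particular player's strategy, fed by that player's own domain, rather than a sweep over one shared, fixed domain.

```python
# A toy rendering of quantifiers as strategy calls: each player brings their own
# domain and their own strategy, and an ∃-move asks that player's strategy for a
# witness.  Illustrative names and structure only.

from dataclasses import dataclass
from typing import Any, Callable, Set

@dataclass
class Player:
    name: str
    domain: Set[str]                              # what this player takes to exist
    strategy: Callable[[Set[str], Any], str]      # (own domain, game state) -> individual

def exists(player: Player, predicate):
    """∃-move: a call to the player's strategy, informed by that player's own domain."""
    witness = player.strategy(player.domain, predicate)
    return predicate(witness)

# A simple strategy: search your own domain for something satisfying the predicate.
def seek(domain, predicate):
    return next((x for x in sorted(domain) if predicate(x)), sorted(domain)[0])

eloise  = Player("Eloise",  {"ann", "bo", "cai"}, seek)
abelard = Player("Abelard", {"bo", "dee"},        seek)

blue_eyed = {"cai"}
print(exists(eloise,  lambda x: x in blue_eyed))   # True: "cai" is in Eloise's domain
print(exists(abelard, lambda x: x in blue_eyed))   # False: Abelard's domain lacks "cai"
```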

I don’t expect this discussion to cause major revisions to the way people go about practicing logic, but I do hope that it provides a more natural way to think about what is going on when dealing with quantifiers and domains, especially when dealing with relativistic or natural language situations.

Posted in epistemology, game theory, logic, philosophy.