“…What Fifty Shades of Grey offers is an extreme vision of late-capitalist deliverance, the American (wet) dream on performance-enhancing drugs. Just as magazines such as Penthouse, Playboy, Chic, and Oui (speaking of aspirational names) have effectively equated the moment of erotic indulgence with the ultimate consumer release, a totem of the final elevation into amoral privilege, James’s trilogy represents the latest installment in the commodified sex genre. The money shot is just that: the moment when our heroine realizes she’s been ushered into the hallowed realm of the 1 percent, once and for all.”
What is a scientific theory? In an abstract sense, a scientific theory is a group of statements about the world. For instance, the Special Theory of Relativity has, as one of its core statements about the world, “The speed of light in a vacuum is invariant.” This statement is scientific because, in part, it is meant to hold in a ‘law-like’ fashion: it holds across time, space and observer.
The Popperian view is that we have scientific theories and we test those theories with experiments. This means that given a scientific theory, a set of scientific statements about phenomena, we can deductively generate predictions. These predictions are further statements about the world. If our experiments yield results that run counter to what the theory predicts — the experiments generate statements that contradict the predictions, the theory did not hold across time, space or observer — then the theory eventually becomes falsified. Otherwise the theory may be considered ‘true’ (or at least not falsified) and it lives to fight another day.
The game theoretic semantics (GTS) view is that truth is the existence of a winning strategy in a game. In terms of the philosophy of science, this means that our theories are strategic games (of imperfect information) played between ourselves and Nature. Each statement of a theory is a description of a certain way the world is, or could be. An experiment is a certain set of moves — a strategy for setting up the world in a certain way — that yields predicted situations according to the statements of the theory. If our theory is true and an experiment is run, then this means that there is no way for Nature to do anything other than yield the predicted situation. Said slightly differently: truth of a scientific theory is knowing a guaranteed strategy for obtaining a predicted Natural outcome by performing experiments. If the strategy is executed and the predicted situations do not obtain, then this means that Nature has found a way around our theory, our strategy. Hence there is no guaranteed strategy for obtaining those predictions and the theory is not true.
Take Galileo’s famous experiment of dropping masses off the Tower of Pisa. Galileo’s theory was that objects of different mass fall at equal rates, opposing the older Aristotelian view that objects of greater mass fall faster.
According to the Popperian view Galileo inferred from his theory that if he dropped the two balls of different mass off the tower at the same time, they would hit the ground at the same time. When he executed the experiment, the balls did hit the ground at the same time, falsifying the Aristotelian theory and lending support to his theory.
The GTS view is that dropping balls of unequal mass off a tower is a strategic game setup. This experimental game setup is an instance of a strategy to force Nature to act in a certain way, namely to have the masses hit at the same time or not. According to Galilean theory, when we are playing this game with Nature, Nature has no choice other than to force the two masses to hit the ground at the same time. According to Aristotelian theory, when playing this game, Nature will force the more massive ball to hit the ground first. History has shown that every time this game is played, the two masses hit the ground at the same time. This means that there is a strategy to force Nature to act in the same way every time, that there is a ‘winning strategy’ for obtaining this outcome in this game with Nature. Hence the Galilean theory is true: it got a win over the Aristotelian theory.
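The GTS picture above can be made concrete with a toy model. This is a minimal sketch of my own, not anything from the GTS literature: `nature_moves` and the single experimental setup are invented for illustration. The point is only that “true” comes out as “the experimenter has a winning strategy,” i.e. Nature has no move that defeats the prediction.

```python
# Toy GTS model: a theory is "true" iff the experimenter has a winning
# strategy -- every move available to Nature matches the prediction.
# All names and setups here are illustrative assumptions.

def nature_moves(setup):
    """The outcomes Nature could produce for a given experimental setup.
    In this toy world, dropping unequal masses always yields one outcome."""
    if setup == "drop unequal masses":
        return {"land together"}        # Nature has no other option
    return {"unknown"}

def winning_strategy(setup, prediction):
    """The experimenter wins iff no move of Nature contradicts the prediction."""
    return all(outcome == prediction for outcome in nature_moves(setup))

# Galilean theory predicts simultaneous landing; Aristotelian theory
# predicts the heavier ball lands first.
galileo = winning_strategy("drop unequal masses", "land together")
aristotle = winning_strategy("drop unequal masses", "heavier lands first")
print(galileo, aristotle)  # True False
```

Running the Galilean strategy succeeds against every move Nature has, while the Aristotelian prediction has no winning strategy, mirroring the historical outcome described above.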
Why you might want to consider doing things the GTS way:
GTS handles scientific practice in a relatively straightforward way. Theories compete against Nature for results and against each other for explanatory power. Everything is handled by the same underlying logic-game structure.
GTS is a powerful system. It has application to game theory, computer science, decision theory, communication and more.
If you are sympathetic to a Wittgensteinian language game view of the world, GTS is in the language game tradition.
Philosophy is disparaged often enough, and by people who ought to know better. As of late, every time this happens I think of this scene — but with the text (something like) below…
Oh. Okay. I see.
You think this has nothing to do with you.
You go to your desk and you select, I don’t know, some statistical mathematical model, for instance, because you’re trying to show the world that you take science seriously and follow what you think are established scientific practices.
But what you don’t know is that that mathematical model is not just established science.
It’s not a data model. It’s not a model of phenomena.
It’s actually a deductive nomological model.
And you’re also blithely unaware of the fact that in 1277, the Bishop of Paris proclaimed that a multiplicity of worlds could exist.
And then I think it was Pascal, wasn’t it, who argued that probabilistic mathematics could be applied to situations?
And then mathematical models quickly showed up in many different philosophies.
And then it, uh, filtered down through to natural philosophy and then trickled on down into some basic handbook of science, where you, no doubt, adopted it without another thought.
However, that statistical model represents millions of hours and countless lives. And it’s sort of comical how you think that you’ve made a choice that exempts you from philosophy, when, in fact, you’re using ideas that were selected for you by the people in this room.
This is an application of the theory of kakonomics, that is, the study of the rational preferences for lower-quality or mediocre outcomes, to the apparently weird results of Italian elections. The apparent irrationality of 30% of the electorate who decided to vote for Berlusconi again is explained as a perfectly rational strategy of maintaining a system of mediocre exchanges in which politicians don’t do what they have promised to do and citizens don’t pay the taxes and everybody is satisfied by the exchange. A mediocre government makes it easier for mediocre citizens to do less than what they should do without feeling any breach of trust.
Origgi argues that if you elect a crappy politician, then there is little chance of progress, which seems like a bad thing. People do this, though, because maintaining low political standards allows people to have low civic standards: if the politicians are corrupt, there is no reason to pay taxes. Likewise, the politicians who have been elected on the basis of being bad leaders have no incentive to go after tax cheats, the people who put them in office. Hence there is often a self-serving and self-maintaining aspect to making less than optimal decisions: by mutually selecting for low expectations, everyone cooperates in forgiving bad behavior.
This account assumes that bad behavior of some sort is to be expected. If someone all of a sudden starts doing the ‘right thing’, it is a breach of trust and a violation of the social norm. There would be a disincentive to repeat such a transaction, because it challenges the stability of the assumed low-quality interaction and the implied forgiveness associated with it.
I like Origgi’s account of kakonomics, but I think there is something missing. The claim that localized ‘good interactions’ could threaten the status quo of bad behavior seems excessive. Criticizing someone who makes everyone else look bad does happen, but this only goes to show that the ‘right’ way of doing things is highly successful. It is the exception that proves the rule: only the people in power — those that can afford to misbehave — really benefit from maintaining the low status quo. Hence the public in general should not be as accepting of a low status quo as a social norm, though I am sure some do for exactly the reasons she stated.
This got me thinking that maybe there was another force at work here that would support a low status quo. When changing from one regime to another, it is not a simple switch from one set of outcomes to the other. There can be transitional instability, especially when dealing with governments, politics, economics, military, etc. If the transition between regimes is highly unstable (more so if things weren’t that stable to begin with) then there would be a disincentive to change: people won’t want to lose what they have, even if it is not optimal. Therefore risk associated with change can cause hyperbolic discounting of future returns, and make people prefer the status quo.
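The discounting claim above can be put in numbers. Hyperbolic discounting is standardly modeled as V/(1 + kD) for value V, delay D, and discount rate k; the payoffs and delay below are made-up figures, chosen only to show how a better-but-delayed reform can lose to a mediocre-but-immediate status quo.

```python
# Sketch of hyperbolic discounting: present value = V / (1 + k*D).
# The specific payoffs and delay are invented for illustration.

def discounted(value, delay, k=1.0):
    """Present value of `value` received after `delay`, hyperbolically discounted."""
    return value / (1.0 + k * delay)

status_quo = 40          # mediocre but immediate, certain payoff
reform_payoff = 100      # better outcome, but only after an unstable transition
transition_delay = 5     # periods of instability before the payoff arrives

present_value = discounted(reform_payoff, transition_delay)
print(present_value)     # ~16.7, well below the status quo's 40
```

Even though reform pays 2.5x more in absolute terms, the transition risk (encoded here as delay under a steep discount) makes the status quo the rational-seeming choice, which is the disincentive to change described above.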
Adding high risk to the benefits of low standards could make a formidable combination. If there is a robust black market that pervades most of the society and an almost certain civil unrest given political change (throw in a heavy-handed police force, just for good measure), this could be a strong incentive not to challenge an incumbent government.
First, he might be saying that though it is physically possible (by a fluke series of mutations, for example) for mentality to have come about, it would be better explained by teleology. (Let’s call this the “intelligibility” argument.)
Though Matthen was referring to doubts about Darwinism being sufficient to lead to consciousness, there is another way to understand this intelligibility argument. If we grant that consciousness is something very special, though not unphysical, someone might consider the laws of physics to be constructed, teleologically, to permit consciousness. This is to say that our physics is teleologically directed to account for consciousness. The claim is not that consciousness was necessitated by our physics, but that our physics must conform to allow the possibility of consciousness. What is one philosopher’s Nature is another’s Teleology.
Now, I can’t see any philosophical motivation for this outside of a very deep belief that consciousness is exceptionally special. But if we grant exceptional status to consciousness, then it wouldn’t be ridiculous to consider that our physics must somehow be subject to the requirements of consciousness instead of the other way around. Whereas there may be infinite other possible physics that do not allow for the possibility of consciousness, we live under a physics that does.
My immediate, knee-jerk response to this sort of move is that it is just a semantic shift about the meanings of teleology and nature, nothing deeper. If what the teleologist means by teleology is what others mean by nature, then there is no difference of opinion, only word use.
However, this semantic response does not engage the motivation for the teleological argument. The motivation is that consciousness is exceptional. So, if the naturalist believes that consciousness is exceptional and entirely natural, then the naturalist is left with no natural explanation for why it is so exceptional. However the teleologist may say that consciousness is exceptional, subject to the laws of physics, but unsurprising, since the laws of physics themselves are directed to allow for consciousness. Since the teleological account does a better job at explaining something as special as consciousness, it is preferable.
This conclusion about preferring the teleological explanation to the naturalistic one is based on the absolute assumption that consciousness is exceptional. But how exceptional must it be? Since we are making physics, and presumably the rest of science, subject to our assumption, then the reasons for our assumptions must then be ontologically more basic and more certain than our entire scientific understanding of the world.
Personally I do not have any basis for thinking consciousness is so special that all of science must be made to account for it. From my perspective, claiming that science must conform to consciousness is a post hoc ergo propter hoc fallacy, since I’d have to arbitrarily assume consciousness to be a fundamental substance and science to be constructed to allow for it.
However, there could be people who do have beliefs that strong. They would not be arbitrarily assuming consciousness to be the more fundamental substance in the universe, and hence it would follow that science should conform to it. Instead it would be a direct causal link: consciousness, therefore science that teleologically allows for consciousness. This kind of teleological naturalism is special in that it does not appeal to the unlikelihood or complexity of consciousness evolving, as is wont to happen nowadays, but is based on an ontological claim about consciousness. I don’t know if this is more defensible than the Intelligibility Argument based on likelihood, but, as it is different, perhaps it has a chance to fare better.
What comprises an ethical decision according to theory?
For the Consequentialist the crux is always in determining and executing the best consequences.* This means that making a consequentialist decision involves two steps. The first is to imagine different possible futures and evaluate them. Once the evaluation is done, the consequentialist chooses the future scenario that maximizes the ‘Good’ (or what have you) and works towards realizing that scenario. Being moral is having skill in figuring out the best future and achieving that future.
The task in Deontology is to obey rules and imperatives. To follow a rule is to understand the rule, when it applies, and how you should act to be in accordance with it. Being moral is understanding imperatives and comporting yourself to act according to them.
In Virtue Ethics the goal is to be virtuous. We become virtuous by habit: by habituating ourselves in certain ways we change ourselves into the person we wish to become. Being moral is having undertaken the work to become the person we wish to be. Once we are the virtuous person we wish to be, whatever we decide to do is the moral thing.
From this short sketch we can set up some interesting oppositions. The first thing to notice is that both Deontology and Virtue Ethics are concerned with how an agent has changed themself in order to act morally. A virtuous person has habituated themself according to their idea of excellence and a deontologist has comported themself to act according to the rules. Though the target of the change is different, the act of self-change is common to both theories.
A consequentialist, however, is less concerned with changing themself and more interested in how they can effectively change the world. It doesn’t really matter to the consequentialist how the best scenario is achieved, so self-improvement is less important than world-change.
Consequentialism and deontology have something in common that virtue ethics does not. The two modern ethical systems both have an abstract standard for deciding what is moral. Consequentialists have a calculation of the good and deontologists have rule systems to obey. Both calculating the good and following a rule system can be thought of as an objective, independently evaluable procedure. Living the life you want to lead according to your virtues does not require following an independent abstract procedure. It is, instead, based upon an understanding of human life.
As you can see by the way I have set up the opposition, I left a space at the bottom for an ethics which is similar to consequentialism in that it looks to change the world, and similar to virtue ethics in that it is humanistic. Very recently I have been working on something I call Charity Ethics, and I believe it fills out the opposition nicely:
Charity Ethics is based upon increasing our empathy for others. To increase our empathy we have to act charitably towards each other; only by acting charitably do we have the opportunity to find common ground. So the fundamental decision of charity ethics is how to be more charitable so that we can find more common ground and hence become more empathic.
Like consequentialism, the thing that needs to be changed is the world: we have to practice charity and find new ways to be charitable. Although the end goal of this practice is to become more empathic, this is not part of the ethical decision procedure; it is a consequence.
Charity Ethics is also humanistic like Virtue Ethics. Instead of using an abstract standard, each person must find ways to engage charitably with other people (and other organisms, potentially). It is through this charitable engagement with others that ethical decisions can be made and evaluated.
Lastly consider the remaining dimension, from the upper left to bottom right, which is a property that Virtue Ethics and Consequentialism agree on but oppose both Deontology and Charity Ethics. Perhaps there are other properties, but I lighted upon what I call ‘alienation.’ Alienation is how the ethics prioritizes individuals and groups.
Both consequentialism and virtue ethics are very accepting of an individualistic perspective. Under consequentialism a person is to maximize the ‘good,’ which takes no account of personal, family or other social relations. An agent decides the best possible abstract outcome and acts accordingly. Likewise, virtue ethics is focused upon living an excellent life, which may mean different things for different people. How to live excellently is a personal decision and, hence, may not include personal, family or social relations.
Opposing this individualism is solidarity. A deontologist will likely take personal, family and social relations into account as part of their obligations. A parent will have an obligation to their child over the well-being of other children. Similarly Charity Ethics requires other individuals, else there would be no one to be charitable with. Moreover, since a person will have greater opportunity to help a child or friend, or close social group, these personal relations can be prioritized.
This opposition cross is based upon the ethical decision making under different theories. The similarity between Virtue Ethics and Deontology, with regard to how both seek to change the self, while Consequentialism is based on changing the world, is something I had never before considered. It might be a trivial issue, but, since it came directly from the question about making ethical decisions, it seems more significant. Also, Charity Ethics being a humanistic ethical theory that focuses on changing the world is nice both in the sense that it is new and different, and also that it fills out the chart in direct opposition to Consequentialism. The value of the chart will ultimately depend on the significance of the initial question, but, even if we disregard it, this diagrammatic approach still provides some interesting ways to analyze the ethical theories.
Given an Object Oriented Ontology, ethics can present a problem.* It is not obvious how to fit ethics into an object oriented view: even if objects have ethical properties, ethics itself has to be considered just as arbitrary as any other property. One could, of course, hold some Deontological, Consequentialist or other ethical viewpoint, but this position would have to be justified on other grounds, since O.O.O. is silent on the matter. Hence having ethics as an ad hoc ontological addition is a problem because it shows that Object Oriented Philosophy is inherently lacking an important part of human experience.
To achieve a more comprehensive viewpoint, while still being object oriented, a different ethical strategy must be taken.
Consider that the objects of our reality are both overdetermined and underdetermined (overmined/undermined in Harman-y terms). This means that no matter how we think about our reality, there are multiple underlying phenomena and multiple overarching phenomena that can be understood to govern every part of our world. Often this is used to develop an argument supporting O.O.O., but I want to develop a different consequence.
By permanently securing multiple fundamental reasons for every phenomenon, no single reason has ultimate sway. We must, in principle, be ontologically humble.
This means that however much we learn about ourselves, there will always be more, multiple explanations, theories, and phenomena; we are forever interesting to ourselves.
To live with the expanding enormity of human experience, while never being able to fully come to terms with it, we must forever re-explain and rediscover those unknown parts of ourselves. To do this we need charity. Charity for others, charity for ourselves, and charity for that which we do not understand, because we already know we do not fully understand. Having charity — extra time, patience and effort — when we explore (speculate on?) our reality lets us extend our experience into the unknown (the chaos, if you will), even in the face of theories that should completely determine phenomena. This gives us the opportunity to explore ourselves, others and other ways of life, to find new objects and phenomena, and new ways to be charitable, ad infinitum.
Therefore the same dilemma that Object Oriented Philosophy presents as its ontological support also yields support for a concept of charity.
Charity, as described, has ethical teeth. Determining the charitable thing to do in a given situation tracks, at least to my mind, a typical normative ethical stance. Like deontology it can be seen as having space for moral indifference and praiseworthiness: not all acts are governed by charity, though certain actions can be seen as especially charitable. Also it has built-in brakes. The principle of ontological humility prevents us from naively applying our personal understanding of charity to others, which means it would be wrong, for example, to donate one person’s organs (without their permission) to save others.
Granted, more work will have to be done to flesh out these ideas, but my hope is that this outline shows that charity can provide a promising start to an integrated ethics within Object Oriented Philosophy.
* I’m not sure how it happened, but my metaphysics has led me to a similar position as the Object Oriented Philosophers, at least ontologically. So for the course of this post, I’m wearing my Object Oriented Philosopher Hat. My apologies if the arguments above are unique to my theories and not OOP in general, though this post makes me suspect I am not that far off.
Assume space-time is quantized. This would mean that space-time is broken up into discrete bits. It then follows that time is broken up into discrete bits.
This disagrees with basic experience: we can start counting time at any arbitrary point. “Now” could be any time whatsoever. Moreover, we run our physical experiments at any given point; we don’t have to wait to start our clocks.
But what if our ability to run experiments at any given point is just an illusion of our universe being broken up into such tiny bits that we just don’t notice the breaks?
Could we design an experiment to test when we can run experiments?
If time is continuous, we would never find any point at which we could not run an experiment. If time is not continuous, though, we would likewise never find any point at which we could not run an experiment, since all experiments would use clocks that start within that lockstep quantized time.
Hence we are unable to tell the difference between quantized and continuous time: either way, time always appears continuous.
However, even if time is continuous in this fashion, measurement of time is not. Since there is a lower limit to the difference we can distinguish between two times, even if we are free to start measuring whenever we want, all subsequent measurements are physically dependent upon that initial fixed point. The second measurement must be outside the uncertainty associated with the initial measurement (the clock start), the third must be outside the second, and so on. Therefore all physically useful measurements of time (counting past zero, that is) are inherently physically quantized by their dependence upon the instantiation of measurement and the limits of uncertainty.
If time is both continuous and discontinuous in this fashion, then so is all space-time.
This leads to the question of which is ontologically prior: if you hold that our reality is defined by what we can measure, then the universe is quantized and our experience pigeonholed; if you hold that our reality is defined by our phenomenal experience, then the universe is continuous and measurement is pigeonholing.
Either way it is a question of the metaphysics — not physics — of space-time. And without a way to distinguish between these options, no physical experiment will be able to settle the debate either, since we could always be chasing our metaphysical tails.
I’ve mulled over this issue concerning the logical limits of what can be measured by physics for years, but I never developed any conclusions. However, there has recently been discussion of the feasibility of a tabletop search for Planck scale signals. This nifty experiment seems deviously simple with the potential for novel results, so go check it out if you haven’t heard of it yet, for example in this discussion. One issue that the experiment bears upon is the continuity of space-time at the Planck Scale. My worry is that the above metaphysical distinction between counting zero and counting past zero may trip up the physicists’ search for the continuity or discontinuity at the fundamental levels of matter.
I don’t normally see cops smoke on duty, but lots of cops were smoking last week.
Beer was being sold for up to $30 a six pack. Not good beer either.
I overheard a barista at Verb Cafe in Williamsburg say that Tuesday had been their best day ever. They did twice their sales of a busy Saturday and closed early because they ran out of everything. He also said he saw a lot more Nouveau Yorkers than normal.
I smelled no more weed on the street than I normally do. Stoners are consistent.
The Brooklyn half of the Williamsburg bridge had power, but crossing into downtown Manhattan was like regressing into a time before electricity, or more accurately, a time after electricity. When it got dark at night, it actually got dark. Anyone who has been to lower Manhattan knows there is a limit to how dark it actually gets: the sheer amount of ambient light prevents real darkness, even in places without street lights. This no longer held for the few days after Sandy. Walking the city was passing through endless empty black canyons, devoid of life and filled with remnants of once useful technology.
Every so often I’d come upon a person sitting on a stoop, looking haggard and sucking hard on a cigarette. When this happened I wouldn’t notice the person till I was already upon them and walking by. I couldn’t even muster a head nod, not that New Yorkers would be looking for the social interaction, and it was inevitably too late to bother anyway.
My mom called while I was walking back to the bridge a few blocks south of Delancey. Surprisingly the cell phone coverage held for the duration of the call. I could hear her voice drop as I described the situation: The windows are empty and lifeless for blocks, and I can barely make out the sidewalk. There are no people, or none that I can see. Sometimes they would show up, but as I said, they were the strays, and would disappear just as quickly. The cops, wherever they were, were just as cut off as everyone else. She ended the call quickly.
They eventually got the power down to 14th street and east of Broadway back on. This returned some of the ambient light to lower Manhattan, but not like normal. Instead of the sad darkness, a weak, insubstantial haze took over. It was like being in an old video game where they just colored everything dark, but there were no actual light sources. You could see things, but it wasn’t like things were lit or had shadows; it was all shadows. Unlike the previous nights, which hurt in their collapse of basic New York reality, this haze provided an unreality to the situation. It was a transient state, a purgatory, one where you could feel civilization trying to leech its way back.
My friends who live and work uptown were barely inconvenienced by the storm.
banks and power
A bank was robbed clean by Upright Citizens entering the building’s basement and then breaking up through the floor.
I told everyone that if I had a truck I would have ripped up and ripped off those ubiquitous street ATMs that charge $4 a transaction. I’m actually surprised I didn’t see any of this.
Goldman Sachs had barricades of sandbags around their entrance ways. Not sure if they were trying to stem the barrage of water only.
They moved the power lines in the city underground after the 1888 blizzard, which was the last time the stock exchange had been closed for 2 days due to weather. This was to prevent wind and snow from affecting the power supply. So maybe the banks will ‘encourage’ our utilities to make the power supply more water resistant. Cuomo (NY State Governor) is threatening to revoke the electricity monopolies of ConEd and LIPA due to the power failures. Floodproofing New York City would be an unimaginably huge project. I wouldn’t be surprised to see a proposal to actually raise the entire island of Manhattan. If the banks don’t have battery backup security cameras in a few weeks, though, I will be shocked.
Fauna in New York is sophisticated. The animals that live here are either well adapted to living with humans or well adapted to getting out of our way. However, when I saw a pigeon standing very still near the curb in the street, I felt something was wrong. A van pulled up and the front wheel missed the pigeon by not even a finger’s width, but the pigeon didn’t move at all. Then the rear wheel ran directly over the stationary pigeon with muffled bone crunches.
I walked into Washington Square Park and a very obese man followed me in. I sat on one side of the pathway and he sat across from me. Often, though not generally, people hanging around in public parks who don’t take care of themselves have mental problems. Then a large flock of pigeons, which is strange in itself, all descended upon this man. Standing on him, walking up and down his arms, crowding as close as possible to his body. I saw his face, he looked confused, which I took to confirm my suspicion about him. He noticed me looking and he spoke, completely lucidly: “I don’t even have food. What’s going on? I guess the birds are just as stir crazy as the rest of us…” He wasn’t crazy at all: the birds went Hitchcock on him, and he was trapped. I left Washington Square Park.
I only type up my philosophy writing when it is being prepared for general consumption, that is, no longer my own notes. Otherwise I write with a fountain pen, which I find to be the least intrusive and most versatile writing implement.
So I am at my brother’s place in Williamsburg as Sandy shakes the windows, hoping the power doesn’t go out — the internet and cable TV had failed, but not before we saw the footage of the 14th street power station explosion and cars floating on C. I lit a candle just in case.
As I am getting ready to go to sleep on his shockingly ludicrous couch (not his fault) I turn off the standing lamp, leaving the candle the only source of light. I think, “Hey, this is how people wrote in the past. Every philosopher up till just recent has sat hunkered over a notebook with a bottle of ink, a pen and a candle. Let’s see if there is anything to it…”
OH MY GAWD.
It is fantastic. Modern lighting is excellent, but it sprays light everywhere. Normally this is a good thing: one or two lamps can light an entire room easily. But for focused concentration, the single flickering point light of a candle melts everything else away. Romance is good for philosophy.
I saw this post by Mark Lance over at New APPS and he brought up one of the issues that I have recently been concerned with: What is a logical domain? He said:
So our ignorance of our domain has implications for which sentences are true. And if a sentence is true under one interpretation and false under another, it has different meanings under them. And if we don’t know which of these interpretations we intend, then we don’t know what we mean.
I am inclined to think that this is a really serious issue…
When we don’t know what we, ourselves, mean, I regard this as THE_PHILOSOPHICAL_BAD, the place you never want to be in, the position where you can’t even speak. Any issue that generates this sort of problem I regard as a Major Problem of Philosophy — philosophy in general, not just of its particular subject.
I prefer to use an artifact of Independence Friendly logic, the dependence indicator: a forward slash, /. The dependence indicator means that the quantifier only depends on those objects, variables, quantifiers or formulas specified. Hence

Яx/(Heads, Tails)

means that the variable x is randomly instantiated to Heads or Tails, since the only things that Яx is logically aware of are Heads and Tails. Therefore this too represents a coin flip, without having multiple domains.
I used the dependence slash to indicate the exact domain that a specific quantification ranged over. This localized the domain to the quantifier. About a week after publishing this I realized that the structure of this pseudo-domain ought to be logically structured: (Heads, Tails) became (Heads OR Tails). The logical or mathematical domain, as an independent structure, can therefore be completely done away with. Instead a pseudo-domain must be specified by a set of logical or mathematical statements given by a dependence (or independence) relation attached to every quantifier.
∀x/((a or b or c) & (p → q))…
This means that instantiating x depends upon the individuals a, b or c, that is, x can only be a, b or c, and it also can only be instantiated if (p → q) already has a truth value. If ((p → q) → d) was in the pseudo-domain, then x could be instantiated to d if (p → q) was true; if ¬d was implied, then it would be impossible to instantiate x to d, even if d was implied in some other part of the pseudo-domain. Hence the pseudo-domain is the result of a logical process.
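The "pseudo-domain as the result of a logical process" idea can be sketched as a tiny evaluator. The clause encoding and function below are my own invention, not a fixed DIF-logic implementation: disjunction clauses contribute candidate individuals directly, and implication clauses contribute their consequent only once the antecedent has been settled as true.

```python
# Toy pseudo-domain evaluator: a quantifier's slash-set is a list of clauses,
# and an individual is available for instantiation only if the clauses,
# together with what has already been settled, make it so.
# The clause syntax here is an illustrative assumption of mine.

def available(clauses, facts):
    """Individuals derivable from disjunction and implication clauses,
    given `facts` (statements already assigned the value True)."""
    individuals = set()
    for kind, *args in clauses:
        if kind == "or":                 # ("or", "a", "b", "c"): any may be chosen
            individuals.update(args)
        elif kind == "implies":          # ("implies", "p->q", "d"): d needs p->q
            antecedent, consequent = args
            if antecedent in facts:
                individuals.add(consequent)
    return individuals

domain = [("or", "a", "b", "c"), ("implies", "p->q", "d")]
print(available(domain, facts=set()))       # only a, b, c: p->q not yet settled
print(available(domain, facts={"p->q"}))    # d becomes available as well
```

The same clause list yields different instantiation options at different stages of the game, which is exactly the changing epistemic access that the next paragraph describes.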
The benefit of this approach is that it better represents the changing state of epistemic access that a logical game player has at different times. You can have a general domain for things that exist across all game players and times that would be added to all the quantifier dependencies (Platonism, if you will), but localized pseudo-domains for how the situation changes relative to each individual quantification.
Moreover, the domain has become part of the logical argument structure and does not have an independent existence, meaning fewer ontological denizens. And, to answer the main question of this post, every domain is completely specified, both in content and structure.
I’m inclined to call this logic Domainless Independence Friendly logic, or DIF logic, but I really also like EIFL, like the French Tower: Epistemic Independence Friendly Logic. Calling this logic epistemic emphasizes the relative epistemic access each player has during the logical game that comes with the elimination of the logical domain.
[Full details at http://upcoming.yahoo.com/event/10911334/ ] Pyrrho: Pyrrhonian Skepticism in Diogenes Laertius. A Workshop in Ancient Philosophy. October 18/19 2013, Common Room of the Heyman Center, Columbia [...]