Columbia Workshop on Probability and Learning @ 716 Philosophy Hall
Apr 8 (Sat) all-day

Gordon Belot (Michigan) – Typical!, 10am
Abstract. This talk consists of three short stories. The overarching themes are: (i) that the notion of typicality is protean; and (ii) that Bayesian technology is both more and less rigid than is sometimes thought.

Simon Huttegger (Irvine LPS) – Schnorr Randomness and Lévy’s Martingale Convergence Theorem, 11:45am
Abstract. Much recent work in algorithmic randomness concerns characterizations of randomness in terms of the almost-everywhere behavior of suitably effectivized versions of functions from analysis or probability. In this talk, we take a look at Lévy’s Martingale Convergence Theorem from this perspective. Lévy’s theorem is of fundamental importance to Bayesian epistemology. We note that much of Pathak, Rojas, and Simpson’s work on Schnorr randomness and the Lebesgue Differentiation Theorem in the Euclidean context carries over to Lévy’s Martingale Convergence Theorem in the Cantor space context. We discuss the methodological choices one faces in choosing the appropriate mode of effectivization and the potential bearing of these results on Schnorr’s critique of Martin-Löf. We also discuss the consequences of our result for the Bayesian model of learning.

Deborah Mayo (VA Tech) – Probing With Severity: Beyond Bayesian Probabilism and Frequentist Performance, 2:45pm
Abstract. Getting beyond today’s most pressing controversies revolving around statistical methods and irreproducible findings requires scrutinizing underlying statistical philosophies. Two main philosophies about the roles of probability in statistical inference are probabilism and performance (in the long-run). The first assumes that we need a method of assigning probabilities to hypotheses; the second assumes that the main function of statistical method is to control long-run performance. I offer a third goal: controlling and evaluating the probativeness of methods. A statistical inference, in this conception, takes the form of inferring hypotheses to the extent that they have been well or severely tested. A report of poorly tested claims must also be part of an adequate inference. I show how the “severe testing” philosophy clarifies and avoids familiar criticisms and abuses of significance tests and cognate methods (e.g., confidence intervals). Severity may be threatened in three main ways: fallacies of rejection and non-rejection, unwarranted links between statistical and substantive claims, and violations of model assumptions. I illustrate with some controversies surrounding the use of significance tests in the discovery of the Higgs particle in high energy physics.

Teddy Seidenfeld (CMU) – Radically Elementary Imprecise Probability Based on Extensive Measurement, 4:30pm
Abstract. This presentation begins with motivation for “precise” non-standard probability. Using two old challenges — involving (i) symmetry of probabilistic relevance and (ii) respect for weak dominance — I contrast the following three approaches to conditional probability given a (non-empty) “null” event and their three associated decision theories.
Approach #1 – Full Conditional Probability Distributions (Dubins, 1975) conjoined with Expected Utility.
Approach #2 – Lexicographic Probability conjoined with Lexicographic Expected Value (e.g., Blume et al., 1991).
Approach #3 – Non-standard Probability and Expected Utility based on Non-Archimedean Extensive Measurement (Narens, 1974).
The second part of the presentation discusses progress we’ve made using Approach #3 within a context of Imprecise Probability.

Agency in Structural Explanations of Social Injustice – Saray Ayala-López @ CUNY Grad Center, rm 5414
Apr 25 (Tue) @ 6:30 pm – 8:30 pm

April 25, Saray Ayala-López (Cal. State, Sacramento), “Agency in Structural Explanations of Injustice.” 6:30-8:00pm, CUNY Graduate Center 5414.

May 23, Karen Jones (U. Melbourne), “Radical Consciousness and Epistemic Privilege.”  6:30-8:00pm, CUNY Graduate Center 5414.

Phenomenology of Probability, Noah Greenstein (ME!) @ CUNY Grad Center, rm 3209
Oct 2 (Mon) @ 4:15 pm – 6:15 pm

An account of fairness and probability is given using Game Theoretical Semantics to schematize fairness as a “draw” result of a logical game. The two concepts of probability — objective frequency vs. subjective belief — are then described as differences in game strategy. Lastly the logical machinery is used to potentially bridge the gap between the two, giving perspective on the problem of induction.

Logic and Metaphysics Workshop Fall 2017:

September 11 Lovett, NYU

September 18 Skiles, NYU

September 25 Jago, Nottingham

October 2 Greenstein, Private Scholar

October 9 GC Closed. No meeting

October 16 Ripley, UConn

October 23 Mares, Wellington

October 30 Woods, Bristol

November 6 Hamkins, GC

November 13 Silva, Alagoas

November 20 Yi, Toronto

November 27 Malink, NYU

December 4 Kivatinos, GC

“Probabilistic Knowledge and Legal Proof” Sarah Moss (Univ. of Michigan) @ NYU Philosophy Dept. rm 202
Oct 27 (Fri) @ 3:30 pm – 5:30 pm

Abstract: Traditional theories of knowledge often focus on the epistemic status of full beliefs. In Probabilistic Knowledge (forthcoming), I argue that like full beliefs, credences and other probabilistic beliefs can constitute knowledge. This talk applies probabilistic knowledge to problems in legal and moral philosophy. I begin by arguing that legal standards of proof require knowledge of probabilistic contents. For instance, proof by a preponderance of the evidence requires the factfinder to have greater than .5 credence that a defendant is liable, and also requires this probabilistic belief to be knowledge. Proof of guilt beyond a reasonable doubt requires knowledge of a significantly stronger content. The fact that legal proof requires knowledge explains why merely statistical evidence is insufficient to license a legal verdict of liability or guilt. In addition to explaining the limited value of statistical evidence, probabilistic knowledge enables us to articulate epistemic norms that are violated by acts of racial and other profiling. According to these norms, it can be epistemically wrong to infer from statistics that a person of Mexican ancestry is likely undocumented, for instance, even when inferring parallel facts about ordinary objects is perfectly okay.

Reception to follow in 6th floor lounge.

“Responsibility with a Buddhist Face” Daniel Breyer (Illinois State University) @ Columbia Religion Dept. rm 101
Nov 3 (Fri) @ 5:30 pm

I’ve argued that the Indian Buddhist tradition, broadly construed, has tended to endorse a unique view of freedom and responsibility, a view I’ve called Buddhist Perspectivalism. According to this view, we should always regard ourselves as genuinely free and responsible agents, because we have good reason to do so, while we should never regard others in this way, because we have equally good reason to see them as neither free nor responsible. In this talk, I clarify Buddhist Perspectivalism as a theory of moral responsibility and defend it against some concerns that scholars like Christopher Gowans and Charles Goodman have raised.

With a response from:

Rick Repetti (Kingsborough Community College, CUNY)

Columbia Society for Comparative Philosophy:

Oct. 6: Jake Davis (New York University)

Nov. 3: Daniel Breyer (Illinois State University)

Dec. 8: Nico Silins (Cornell University) and Susanna Siegel (Harvard University)

Entropy and Insufficient Reason – Anubav Vasudevan (University of Chicago) @ Faculty House, Columbia U
Nov 10 (Fri) @ 4:00 pm – 6:00 pm

One well-known objection to the principle of maximum entropy is the so-called Judy Benjamin problem, first introduced by van Fraassen (1981). The problem turns on the apparently puzzling fact that, given information about an event’s conditional probability, the maximum entropy distribution will almost always assign to the conditioning event a probability strictly less than that assigned to it by the uniform distribution. In this paper, I present an analysis of the Judy Benjamin problem that can help to make sense of this seemingly odd feature of maximum entropy inference. My analysis is based on the claim that, in applying the principle of maximum entropy, Judy Benjamin is not acting out of a concern to maximize uncertainty in the face of new evidence, but is rather exercising a certain brand of epistemic charity towards her informant. This charity takes the form of an assumption on the part of Judy Benjamin that her informant’s evidential report leaves out no relevant information. I will explain how this single assumption suffices to rationalize Judy Benjamin’s behavior. I will then explain how such a re-conceptualization of the motives underlying Judy Benjamin’s appeal to the principle of maximum entropy can further our understanding of the relationship between this principle and the principle of insufficient reason. I will conclude with a discussion of the foundational significance for probability theory of ergodic theorems (e.g., de Finetti’s theorem) describing the asymptotic behavior of measure-preserving transformation groups. In particular, I will explain how these results, which serve as the basis of maximum entropy inference, can provide a unified conceptual framework in which to justify both a priori and a posteriori probabilistic reasoning.
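The puzzling fact the abstract describes can be checked numerically. The sketch below is my own illustration (not from the talk), assuming the standard Judy Benjamin setup: three equally sized cells (Red headquarters area, Red second-company area, Blue territory) and an informant’s report fixing P(HQ | Red) = 3/4. Maximizing entropy under that constraint assigns the conditioning event (Red) a probability of about 0.637, strictly below the 2/3 the uniform distribution gives it.

```python
import math

def entropy(ps):
    """Shannon entropy of a probability vector."""
    return -sum(p * math.log(p) for p in ps if p > 0)

# Cells: Red-HQ, Red-2nd-company, Blue.  The report fixes
# P(HQ | Red) = 3/4, so the two Red cells keep a 3:1 ratio and every
# admissible distribution has the form (3t, t, 1 - 4t) for t in (0, 1/4).
def dist(t):
    return (3 * t, t, 1 - 4 * t)

# Grid search for the entropy-maximizing t.
best_t = max((i / 100000 for i in range(1, 25000)),
             key=lambda t: entropy(dist(t)))

p_red = 4 * best_t  # probability max-ent assigns to the conditioning event
print(round(p_red, 3))  # about 0.637
print(p_red < 2 / 3)    # strictly below the uniform value 2/3
```

The analytic optimum is t = 1/(4 + 27^(1/4)), so P(Red) ≈ 0.637 < 2/3: exactly the “puzzling” shrinkage of the conditioning event that the talk addresses.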

We will be having dinner right after the meeting at the Faculty House. Please let Robby (jrf2162@columbia.edu) know if you will be joining us so that he can make an appropriate reservation. Please be advised that at this point the university only agrees to cover the expenses of the speaker and the rapporteur; the cost for all others is $30, payable by cash or check.

https://philevents.org/event/show/37746

‘You Only Live Once: The Philosophical Case’ Nick Riggle (San Diego) @ Faculty Delegate Assembly room, Hunter West
Dec 1 (Fri) @ 4:30 pm

People feel on occasion that life should be embraced in a certain way. You only live once, carpe diem, #YOLO: we commonly associate the thought of our limited lives with the thought that we should seek adventure, take risks, or break with our routines and norms. But how, if at all, does the thought that you only live once motivate adventurous, risky, or unusual behavior? After all, having only one life seems equally capable of motivating the exact opposite of adventure and risk. I consider several ways of supporting the thought that life should be embraced. All are found wanting, except one.

Being Awesome, Getting Stoked: A Conversation with Nick Riggle and Aaron James @ McNally Jackson Books
Dec 2 (Sat) @ 7:00 pm

Join us for an evening of accessible philosophical thought and erudite fun. Former pro skater and USD philosophy professor Nick Riggle’s debut title, On Being Awesome: A Unified Theory of How Not to Suck, draws on pop culture, politics, history, and sports to illuminate the ethics and culture of awesomeness and pinpoint its origins in America. Philosopher Aaron James (UC Irvine), a longtime globetrotting surfer and author of the bestselling Assholes: A Theory, returns with Surfing with Sartre: An Aquatic Inquiry Into a Life of Meaning, using the experience and the ethos of surfing to explore key concepts in philosophy. Join Nick and Aaron in conversation followed by a reception and book signing.

The Price of Broadminded Probabilities and the Limitation of Science – Haim Gaifman (Columbia) @ Faculty House, Columbia U
Dec 8 (Fri) @ 4:10 pm

A subjective probability function is broadminded to the extent that it assigns positive probabilities to conjectures that can possibly be true. Assigning such a conjecture the value 0 amounts to ruling out a priori any possibility of confirming the conjecture to any extent by the growing evidence. A positive value leaves open, in principle, the possibility of learning from the evidence. In general, broadmindedness is not an absolute notion but a graded one, and there is a price for it: the more broadminded the probability, the more complicated it is, because it has to assign non-zero values to more complicated conjectures. The framework suggested in the old Gaifman–Snir paper is suitable for phrasing this claim in a precise way and proving it. The technique by which this claim is established is to assume a definable probability function and to state, within the same language, a conjecture that can possibly be true whose probability is 0.

The complexity of the conjecture depends on the complexity of the probability, i.e., the complexity of the formulas that are used in defining it. In the Gaifman–Snir paper we used the arithmetical hierarchy as a measure of complexity. It is possible, however, to establish similar results with respect to more “down to earth” measures, defined in terms of the time it takes to calculate the probabilities, with given precisions.

A claim of this form, for a rather simple setup, was first proven by Hilary Putnam in his paper “‘Degree of Confirmation’ and Inductive Logic,” published in the 1963 Schilpp volume dedicated to Carnap. The proof uses, in a probabilistic context, a diagonalization technique of the kind used in set theory and in computer science. In the talk I shall present Putnam’s argument and show how diagonalization can be applied in considerably richer setups.

The second part of the talk is rather speculative. I shall point out the possibility that there might be epistemic limitations to what human science can achieve, imposed by certain pragmatic factors, such as the criterion of repeatable experiments. All of this would recommend a skeptical attitude.

The Extended Self: Autonomy and Technology in the Age of Distributed Cognition, Ethan Hallerman (Stony Brook) @ Brooklyn Public Library
Feb 6 (Wed) @ 7:30 pm

In Philosophy in the Library, philosophers from around the world tackle the big questions. In February, we hear from Ethan Hallerman.

None of us today can avoid reflecting on the way our thoughts and habits relate to the tools we use, but interest in how technologies reshape us is both older and broader than contemporary concerns around privacy, distraction, addiction, and isolation. For the past hundred years, scholars have investigated the historical role of everyday technologies in making new forms of experience and senses of selfhood possible, from at least as early as the invention of writing. In recent years, philosophers have considered how our understanding of agency and mental states should be revised in light of the role that the technical environment plays in our basic activities. Here, we will look at how some models of the mind illuminate the results of the philosophy of technology to clarify the relationship between technology and the self.

Ethan Hallerman is a doctoral student in philosophy at Stony Brook University. He lives in New York where he prowls the sewers at night, looking for his father.