Columbia Workshop on Probability and Learning

When:
April 8, 2017 all-day
Where:
716 Philosophy Hall
116th St & Broadway
New York, NY 10027
USA
Cost:
Free

Gordon Belot (Michigan) – Typical!, 10am
Abstract. This talk consists of three short stories. The over-arching themes are (i) that the notion of typicality is protean, and (ii) that Bayesian technology is both more and less rigid than is sometimes thought.

Simon Huttegger (Irvine LPS) – Schnorr Randomness and Lévy’s Martingale Convergence Theorem, 11:45am
Abstract. Much recent work in algorithmic randomness concerns characterizations of randomness in terms of the almost-everywhere behavior of suitably effectivized versions of functions from analysis or probability. In this talk, we look at Lévy’s Martingale Convergence Theorem from this perspective. Lévy’s theorem is of fundamental importance to Bayesian epistemology. We note that much of Pathak, Rojas, and Simpson’s work on Schnorr randomness and the Lebesgue Differentiation Theorem in the Euclidean context carries over to Lévy’s Martingale Convergence Theorem in the Cantor space context. We discuss the methodological choices one faces in choosing the appropriate mode of effectivization and the potential bearing of these results on Schnorr’s critique of Martin-Löf. We also discuss the consequences of our result for the Bayesian model of learning.
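For reference (not part of the abstract), Lévy’s upward martingale convergence theorem is the statement that conditional expectations along an increasing filtration converge to the conditional expectation given the limiting σ-algebra:

```latex
% Lévy's upward theorem, standard statement (notation is ours, not the speaker's).
% Let X be integrable and (F_n) an increasing filtration with F_inf = sigma(U_n F_n).
\[
  \mathbb{E}\!\left[X \mid \mathcal{F}_n\right]
  \;\xrightarrow[n\to\infty]{}\;
  \mathbb{E}\!\left[X \mid \mathcal{F}_\infty\right]
  \quad\text{almost surely and in } L^{1},
  \qquad
  \mathcal{F}_\infty \;=\; \sigma\!\Bigl(\bigcup_{n} \mathcal{F}_n\Bigr).
\]
% Taking X to be the indicator of a hypothesis H gives the Bayesian reading:
% posterior probabilities converge almost surely as evidence accumulates.
```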

Deborah Mayo (VA Tech) – Probing With Severity: Beyond Bayesian Probabilism and Frequentist Performance, 2:45pm
Abstract. Getting beyond today’s most pressing controversies revolving around statistical methods and irreproducible findings requires scrutinizing underlying statistical philosophies. Two main philosophies about the roles of probability in statistical inference are probabilism and performance (in the long-run). The first assumes that we need a method of assigning probabilities to hypotheses; the second assumes that the main function of statistical method is to control long-run performance. I offer a third goal: controlling and evaluating the probativeness of methods. A statistical inference, in this conception, takes the form of inferring hypotheses to the extent that they have been well or severely tested. A report of poorly tested claims must also be part of an adequate inference. I show how the “severe testing” philosophy clarifies and avoids familiar criticisms and abuses of significance tests and cognate methods (e.g., confidence intervals). Severity may be threatened in three main ways: fallacies of rejection and non-rejection, unwarranted links between statistical and substantive claims, and violations of model assumptions. I illustrate with some controversies surrounding the use of significance tests in the discovery of the Higgs particle in high energy physics.
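To make the severity idea concrete, here is a hedged sketch in the spirit of the Mayo–Spanos one-sided Normal test example; the function name and the numbers below are ours, not material from the talk:

```python
# Illustrative sketch (assumptions ours): severity for a one-sided Normal test
# with known sigma, H0: mu <= mu0 vs H1: mu > mu0, after H0 has been rejected
# with observed sample mean xbar_obs.
from scipy.stats import norm

def severity_of_exceeding(mu1, xbar_obs, mu0, sigma, n):
    """Severity with which the claim 'mu > mu1' passes, given a rejection of H0.

    SEV(mu > mu1) = Pr(X-bar <= xbar_obs ; mu = mu1): the probability the test
    would have produced a result *less* in accord with the claim, were mu only mu1.
    """
    se = sigma / n ** 0.5
    return norm.cdf((xbar_obs - mu1) / se)

# Example: sigma = 10, n = 100, mu0 = 0, observed mean 2.0 (z = 2, H0 rejected).
# The claim 'mu > 0.5' passes with high severity; 'mu > 2' does not.
print(severity_of_exceeding(0.5, 2.0, 0.0, 10.0, 100))  # ~0.93
print(severity_of_exceeding(2.0, 2.0, 0.0, 10.0, 100))  # 0.5
```

The point of the contrast is the one made in the abstract: a rejection licenses some claims severely and others not at all, which is what blocks the familiar fallacies of rejection.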

Teddy Seidenfeld (CMU) – Radically Elementary Imprecise Probability Based on Extensive Measurement, 4:30pm
Abstract. This presentation begins with motivation for “precise” non-standard probability. Using two old challenges — involving (i) symmetry of probabilistic relevance and (ii) respect for weak dominance — I contrast the following three approaches to conditional probability given a (non-empty) “null” event and their three associated decision theories.
Approach #1 – Full Conditional Probability Distributions (Dubins, 1975) conjoined with Expected Utility.
Approach #2 – Lexicographic Probability conjoined with Lexicographic Expected Value (e.g., Blume et al., 1991).
Approach #3 – Non-standard Probability and Expected Utility based on Non-Archimedean Extensive Measurement (Narens, 1974).
The second part of the presentation discusses progress we’ve made using Approach #3 within a context of Imprecise Probability.
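As a minimal sketch of the comparison behind Approach #2 (an illustration under our own assumptions, not material from the talk), acts can be ranked by comparing their vectors of expected utilities, one entry per measure in the lexicographic probability system:

```python
# Illustrative sketch (assumptions ours): lexicographic expected value in the
# style of Blume, Brandenburger, and Dekel (1991). A lexicographic probability
# system (LPS) is a finite sequence of probability measures; acts are ranked by
# comparing their expected-utility vectors lexicographically.
from typing import Dict, List

State = str

def lex_expected_utility(act: Dict[State, float],
                         lps: List[Dict[State, float]]) -> List[float]:
    """Vector of expected utilities of `act`, one entry per measure in the LPS."""
    return [sum(p[s] * act[s] for s in p) for p in lps]

# Two measures: the first is concentrated on 'heads'/'tails'; the second puts
# all its weight on the otherwise-null state 'edge'.
lps = [{"heads": 0.5, "tails": 0.5, "edge": 0.0},
       {"heads": 0.0, "tails": 0.0, "edge": 1.0}]

act_a = {"heads": 1.0, "tails": 0.0, "edge": 1.0}   # pays on heads or edge
act_b = {"heads": 1.0, "tails": 0.0, "edge": 0.0}   # pays on heads only

# Both acts tie on the primary measure (0.5 each); the secondary measure breaks
# the tie in favour of act_a, so the ranking respects weak dominance.
print(lex_expected_utility(act_a, lps))  # [0.5, 1.0]
print(lex_expected_utility(act_b, lps))  # [0.5, 0.0]
print(lex_expected_utility(act_a, lps) > lex_expected_utility(act_b, lps))  # True
```

The secondary measure concentrated on the otherwise-null event is what allows the ranking to respect weak dominance, one of the two challenges mentioned in the abstract.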
