Science and Philosophy of Science:
Where Do/Should They Meet in 2010?
SOME TENTATIVE QUESTIONS:
I. The Current Landscape in Philosophical Foundations of Statistics
What is the nature and justification of recent attempts at bridges,
reconciliations, and unifications between frequentist and Bayesian
statistics? Are such unifications possible? Desirable? (Is "machine
learning" a distinct paradigm?)
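One textbook point of contact behind such reconciliations — my illustration, not part of the questions above — is that for a normal mean with known sigma and a flat prior, the Bayesian 95% credible interval coincides numerically with the frequentist 95% confidence interval. A minimal sketch (all numbers illustrative):

```python
import random
from statistics import NormalDist

# Illustrative data: normal observations with known sigma.
random.seed(0)
sigma = 2.0
data = [random.gauss(10.0, sigma) for _ in range(50)]
n = len(data)
xbar = sum(data) / n
se = sigma / n ** 0.5

# Frequentist 95% confidence interval for the mean.
z = NormalDist().inv_cdf(0.975)
freq_ci = (xbar - z * se, xbar + z * se)

# Under a flat prior the posterior for the mean is N(xbar, se^2);
# its central 95% credible interval is numerically the same interval.
posterior = NormalDist(mu=xbar, sigma=se)
bayes_cred = (posterior.inv_cdf(0.025), posterior.inv_cdf(0.975))

print(freq_ci, bayes_cred)
```

The numerical agreement holds only in such special cases; whether it grounds a genuine unification is precisely what the question leaves open.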
What are the central problems of frequentist statistics (of various
sorts) and Bayesian statistics (subjective, reference, epistemic,
other); in what ways do contrasting philosophies of statistics reflect
rival conceptions of the nature and role(s) of probability in reaching
inferences?
How are shifts in philosophy of science related to shifts in philosophy
of statistics? How are shifts in statistical techniques and problems
related to shifts in foundations? Are any of the old "statistics wars"
relevant? (Have the wars been won or lost?)
How do contrasting statistical philosophies interconnect with different
positions on key statistical principles (e.g., the likelihood principle,
error statistical principles, stopping rule principles)?
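The stopping rule issue can be made concrete with a small simulation — my sketch, not part of the questions. Testing H0: mu = 0 after every new observation ("optional stopping") inflates the type I error rate far above the nominal 5%, which matters on error statistical principles but is irrelevant under the likelihood principle:

```python
import math
import random

random.seed(1)

def rejects_with_peeking(max_n=100, z_crit=1.96):
    """Peek after each observation; stop and 'reject' at the first crossing."""
    total = 0.0
    for n in range(1, max_n + 1):
        total += random.gauss(0.0, 1.0)   # data truly generated under H0
        z = total / math.sqrt(n)          # z-statistic for the running mean
        if abs(z) > z_crit:
            return True
    return False

trials = 4000
rate = sum(rejects_with_peeking() for _ in range(trials)) / trials
print(f"type I error with peeking: {rate:.2f}")  # well above the nominal 0.05
```

The simulated rejection rate under the null is several times the nominal level, which is why error statisticians insist the stopping rule alters the evidential import of the data.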
II. Foundational Issues in Model Specification and Selection
What assumptions underlie popular methods of model selection (e.g., AIC,
BIC, HQIC, Autometrics, Probabilistic Reduction)? Does their leading to
different models reflect contrasting underlying statistical
philosophies? Or are the debates largely pragmatic?
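A toy sketch of the first two criteria mentioned — my illustration, using the standard Gaussian-error forms AIC = n·ln(RSS/n) + 2k and BIC = n·ln(RSS/n) + k·ln(n), with k the number of fitted parameters. BIC's per-parameter penalty ln(n) exceeds AIC's 2 once n > e² ≈ 7.4, so the two can rank candidate models differently in larger samples:

```python
import numpy as np

# Illustrative data: a truly linear relationship plus noise.
rng = np.random.default_rng(0)
n = 50
x = np.linspace(0, 1, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.3, n)

for degree in (1, 2, 3):
    coefs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coefs, x) - y) ** 2))
    k = degree + 2                              # polynomial coefficients + error variance
    aic = n * np.log(rss / n) + 2 * k
    bic = n * np.log(rss / n) + k * np.log(n)
    print(f"degree {degree}: AIC={aic:.1f}  BIC={bic:.1f}")
```

The differing penalties encode different aims (predictive accuracy vs. consistent identification of the true model), which is one way the pragmatic-vs-philosophical question above gets traction.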
In what ways do advances in statistical techniques redefine traditional
philosophical problems about induction and scientific discovery? Can
they solve (or merely re-construe) long-standing philosophical problems
about induction, learning, discovery?
Should we take account of "data-dependent" model specifications, and if
so, how? Can statistical modeling techniques shed light on the puzzles
about use-novelty, double-counting, and selection effects?
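A selection effect of the kind at issue can be exhibited directly — my sketch, not from the text. Choose the best of twenty pure-noise predictors after looking at the data, then test it with the naive cutoff that ignores the selection step; the nominal 5% test rejects far more often than 5%:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, trials = 30, 20, 1000
hits = 0
for _ in range(trials):
    X = rng.normal(size=(n, m))        # m predictors, all pure noise
    y = rng.normal(size=n)             # response unrelated to every predictor
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(m)])
    best = int(np.argmax(np.abs(r)))   # data-dependent specification
    # Naive t-test of the selected correlation, ignoring the selection step;
    # 2.048 is the two-sided 5% cutoff for t with n - 2 = 28 df.
    t = r[best] * np.sqrt((n - 2) / (1 - r[best] ** 2))
    if abs(t) > 2.048:
        hits += 1

print(f"false positive rate after selection: {hits / trials:.2f}")
```

With twenty independent chances, roughly 1 − 0.95²⁰ ≈ 64% of trials yield a "significant" noise predictor — the double-counting worry in miniature.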
III. Statistical and Substantive Inference: Using Statistical Methods
in Evidence-Based Practice and Policy (e.g., medicine, economics)
What is the nature of statistical generalization? How is it related to
"substantive" inference, theory testing, and confirmation in science?
How are statistical notions of reliability and relevance connected to
reliability and relevance more generally? (e.g., the relevance of
randomized controlled trials for making inferences "in the wild")
Is learning about reliable patterns and statistical regularities always
merely intermediate to obtaining scientific knowledge, or can it itself
be a direct goal of importance to science?