* counterfactuals: should be well-defined, but often aren't in the social science literature (e.g. to answer "what is the effect of marriage on health?", we'd have to imagine interventions that cause or prevent people from getting married; there are many non-equivalent ways to do this)
* potential outcomes: formalism in which all subjects are considered to have missing data for all but one experimental condition (i.e. the one that they were assigned to). This provides a direct way of thinking about token causation (a.k.a. causes-of-effects).
* ignorability: an important assumption that makes causal inference possible, similar to a missing-at-random assumption.
* propensity score matching: a way of coping when unconditional ignorability fails (i.e. treatment wasn't randomized): if ignorability holds conditional on observed covariates, matching on the estimated probability of treatment can recover the effect. (See also: Inverse probability weighting)
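The potential-outcomes setup above can be sketched as a toy simulation (all names, numbers, and the data-generating process below are invented for illustration, not taken from any of the cited papers): every unit has both potential outcomes, but only the one matching its assigned treatment is observed, and a covariate confounds assignment. A naive comparison is biased, while inverse probability weighting with the true propensity recovers the effect.

```python
# Toy potential-outcomes simulation (illustrative; everything here is made up).
import random

random.seed(0)
TRUE_ATE = 2.0  # true average treatment effect

def simulate(n=20000):
    rows = []
    for _ in range(n):
        x = random.random()            # observed covariate (confounder)
        y0 = x + random.gauss(0, 0.1)  # potential outcome under control
        y1 = y0 + TRUE_ATE             # potential outcome under treatment
        p = 0.2 + 0.6 * x              # propensity: treatment depends on x
        t = 1 if random.random() < p else 0
        y = y1 if t else y0            # only one potential outcome is observed
        rows.append((t, y, p))
    return rows

rows = simulate()
mean = lambda xs: sum(xs) / len(xs)

# Naive difference in means is biased, because x drives both t and y.
naive = (mean([y for t, y, p in rows if t])
         - mean([y for t, y, p in rows if not t]))

# Inverse probability weighting with the true propensity removes the bias.
n = len(rows)
ipw = (sum(t * y / p for t, y, p in rows) / n
       - sum((1 - t) * y / (1 - p) for t, y, p in rows) / n)
```

Here ignorability holds conditional on `x` by construction, which is exactly what makes the weighted estimate work.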
Directed graphical models perhaps provide something more like a concrete mechanism, allowing us to simulate the effects of interventions and propagate them downstream. But as far as real applications are concerned, papers in this tradition tend to leave their assumptions less explicit, and can mislead practitioners into thinking that the required assumptions are satisfied. (See Dawid, "Beware of the DAG".)
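The "simulate an intervention and propagate it downstream" idea can be sketched with a tiny structural model (the graph Z → X, Z → Y, X → Y and all coefficients below are invented): an intervention do(X=x) replaces X's mechanism while leaving Z alone, so the interventional mean of Y differs from the observational conditional mean.

```python
# Minimal structural-causal-model sketch of an intervention (illustrative).
import random

random.seed(1)

def sample(do_x=None):
    z = random.gauss(0, 1)  # exogenous common cause
    # do(X=x) overrides X's own mechanism; otherwise X depends on z.
    x = do_x if do_x is not None else (1 if z + random.gauss(0, 1) > 0 else 0)
    y = 2 * x + z + random.gauss(0, 0.1)  # y depends on both x and z
    return x, y

N = 50000
obs = [sample() for _ in range(N)]

# Observational conditioning: E[Y | X=1] is inflated, since X=1 units have
# systematically larger z.
cond = (sum(y for x, y in obs if x == 1)
        / sum(1 for x, y in obs if x == 1))

# Intervention: force X=1 everywhere and propagate downstream; z is untouched,
# so this estimates E[Y | do(X=1)] = 2.
interv = sum(y for x, y in (sample(do_x=1) for _ in range(N))) / N
```

The gap between `cond` and `interv` is exactly the confounding that conditioning alone cannot remove.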
UPDATE: Cosma Shalizi writes:
<< You've read Pearl's Statistics Surveys paper, right? I think the critique of the potential outcomes framework there, in section 4, is very strong. (Look at the stuff on ignorability, especially.) As for propensity matching, when the set of covariates you're using to calculate propensities doesn't meet the back door criterion, well, you get results like this. >>
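The failure mode the quote points at — adjusting on a covariate set that does not satisfy the back-door criterion — can be sketched with an invented example where the observed covariate is a collider (U1 → X ← U2). With no open back-door path, the raw contrast is already unbiased; weighting on X opens the path T ← U1 → X ← U2 → Y and *introduces* bias.

```python
# Collider ("M-bias") sketch: adjustment on X is harmful here (illustrative).
import math
import random

random.seed(2)
TAU = 2.0  # true treatment effect

def phi(v):  # standard normal CDF
    return 0.5 * (1 + math.erf(v / math.sqrt(2)))

data = []
for _ in range(50000):
    u1 = random.gauss(0, 1)  # latent cause of treatment
    u2 = random.gauss(0, 1)  # latent cause of outcome
    x = u1 + u2              # observed collider, NOT a confounder
    t = 1 if u1 + random.gauss(0, 1) > 0 else 0
    y = TAU * t + u2 + random.gauss(0, 0.1)
    data.append((x, t, y))

mean = lambda xs: sum(xs) / len(xs)

# Unadjusted contrast: unbiased, since the only back-door path is blocked at x.
naive = (mean([y for x, t, y in data if t])
         - mean([y for x, t, y in data if not t]))

# Propensity weighting on x, using the true P(T=1 | X=x) = phi(x / sqrt(6)):
# conditioning on the collider makes treated units have lower u2, so this
# estimate is pulled away from TAU.
n = len(data)
ipw = (sum(t * y / phi(x / math.sqrt(6)) for x, t, y in data) / n
       - sum((1 - t) * y / (1 - phi(x / math.sqrt(6))) for x, t, y in data) / n)
```

The point is that propensity machinery runs happily on any covariate set; only the causal structure says whether the output means anything.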