5 days to the big exam! We get to use 10 pages of notes, double-sided. It's way too much for my taste, but it's an interesting exercise, since it encourages you to think about relations between topics.
Asymptotics of Estimators (asymptotic consistency, normality)
* MLE: for a regular parameter, asymptotically normal, with rate 1/sqrt(n).
* MLE: for a truncation parameter, asymptotically exponential, with rate 1/n or worse.
* If a family has both types of parameters, we cannot(?) use the Fisher Information to find the asymptotic variance of the regular one. But can't we plug in the true value of the truncation one, and use the asymptotics of the regular subfamily?
* Consistency is guaranteed if n/p → infinity, plus a few other conditions ("for all theta, the density is bounded" should suffice).
* UMVUE: when is it asymptotically equivalent to the MLE?
* Sample quantiles.
* Estimating Equations, a.k.a. no closed form for the MLE (e.g. Beta, Gamma, GLMs). Van der Vaart proves consistency (5.10), normality (5.19).
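The 1/sqrt(n) vs. 1/n rate contrast above can be checked by simulation. A minimal sketch, with model choices of my own (Exponential mean as the regular parameter, Uniform endpoint as the truncation parameter, both with true value 1):

```python
import random

random.seed(0)

def mle_errors(n, reps=2000):
    """Average |MLE - theta| for two models with true theta = 1:
    (a) regular: mean of Exponential(1)       -> error ~ 1/sqrt(n)
    (b) truncation: max of Uniform(0, theta)  -> error ~ 1/n
    """
    err_reg = err_trunc = 0.0
    for _ in range(reps):
        exp_sample = [random.expovariate(1.0) for _ in range(n)]
        unif_sample = [random.uniform(0.0, 1.0) for _ in range(n)]
        err_reg += abs(sum(exp_sample) / n - 1.0)
        err_trunc += abs(max(unif_sample) - 1.0)
    return err_reg / reps, err_trunc / reps

# Quadrupling n should roughly halve the regular error
# but roughly quarter the truncation error.
r100, t100 = mle_errors(100)
r400, t400 = mle_errors(400)
print(r100 / r400, t100 / t400)
```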
Fisher Information
* Why the two formulas are equivalent.
* Delta Method (for R^n → R^m functions, we can easily generalize it using Jacobians!)
* Why knowing the nuisance parameter decreases the asymptotic variance of the parameter of interest.
* Why location-scale families of *symmetric* distrs have a diagonal information matrix.
* Cramér-Rao Inequality, about *unbiased* estimators, comes from Cauchy-Schwarz. Not asymptotic: it holds for every n! Equality is attained when we have "linear dependence". In other words, I think this means that an unbiased estimator U will be efficient iff it can be written as U = a*MLE + c.
* Compare: variance bounds for unbiased estimators vs. other estimators.
* What if calculating the Fisher information is intractable?
* ATTENTION: is this for a single observation or for the whole sample?
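The equivalence of the two Fisher information formulas can be verified numerically for a small example. A sketch for a single Bernoulli(theta) observation, where both expectations are just sums over x in {0, 1} (my own worked example):

```python
def fisher_two_ways(theta):
    """Compare the two Fisher information formulas for Bernoulli(theta):
      I(theta) = E[(d/dtheta log f)^2] = -E[d^2/dtheta^2 log f],
    with log f(x; theta) = x log(theta) + (1-x) log(1-theta).
    Both should equal 1 / (theta (1 - theta))."""
    score_sq = neg_hess = 0.0
    for x in (0, 1):
        p = theta if x == 1 else 1 - theta
        score = x / theta - (1 - x) / (1 - theta)        # d/dtheta log f
        hess = -x / theta**2 - (1 - x) / (1 - theta)**2  # d^2/dtheta^2 log f
        score_sq += p * score**2
        neg_hess += p * (-hess)
    return score_sq, neg_hess

a, b = fisher_two_ways(0.3)
print(a, b)  # both equal 1 / (0.3 * 0.7)
```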
Taylor Expansions
* How asymptotic normality comes from one-step Newton-Raphson.
* Why the asymptotic distribution of the likelihood ratio is Chi-Squared.
* Delta Method, and how, if the first derivative is zero, we get slower convergence to a Chi-Squared.
* Edgeworth Expansions.
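The Chi-Squared limit of the likelihood ratio can be seen in the simplest case, N(theta, 1) with H0: theta = 0, where 2 log LR works out to n * xbar^2. A simulation sketch (my own example; here the statistic is in fact exactly Chi-Squared(1)):

```python
import random

random.seed(1)

def lrt_stats(n=50, reps=4000):
    """Under H0: theta = 0 with X_i ~ N(theta, 1), the statistic
    2 log LR = n * xbar^2 has a Chi-Squared(1) distribution."""
    stats = []
    for _ in range(reps):
        xbar = sum(random.gauss(0, 1) for _ in range(n)) / n
        stats.append(n * xbar * xbar)
    return stats

stats = lrt_stats()
# P(ChiSq_1 <= 3.8415) = 0.95, so about 95% of the draws fall below it.
frac = sum(s <= 3.8415 for s in stats) / len(stats)
print(frac)
```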
Testing
* Simple vs. Simple: Neyman-Pearson.
* Simple vs. Composite: compute the MLE over the alternative.
* Composite vs. Composite: MLE over the null (a.k.a. least favorable distribution).
* UMP: a Monotone Likelihood Ratio in the *sufficient* statistic implies that I{T>c} is UMP.
* UMPU: the power function has slope 0. Is it a mixture of two UMPs?
* LMP: maximize the derivative of the power function at the boundary.
* Asymptotic power under contiguous alternatives: projections, non-central Chi-Squared (I might need more practice with basic power calculations first!)
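For the simple-vs-simple case, the Neyman-Pearson cutoff and power have closed forms when the likelihood ratio is monotone in xbar. A basic power calculation sketch for H0: N(0,1) vs. H1: N(1,1) (my own example, using the stdlib `statistics.NormalDist`):

```python
import math
from statistics import NormalDist

def np_test(n, alpha):
    """Neyman-Pearson test of H0: N(0,1) vs H1: N(1,1) from n iid obs.
    The likelihood ratio is increasing in xbar, so the most powerful
    level-alpha test rejects when xbar > c with c = z_{1-alpha}/sqrt(n).
    Returns the cutoff and the power P(xbar > c) under H1."""
    z = NormalDist().inv_cdf(1 - alpha)
    c = z / math.sqrt(n)
    power = 1 - NormalDist().cdf(math.sqrt(n) * (c - 1))
    return c, power

c, power = np_test(n=25, alpha=0.05)
print(c, power)
```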
UMVUE
* Do there exist simple conditions for existence or non-existence?? For location families, UMVUEs for the location parameter should always exist: U = MLE + constant.
* If U is unbiased for 0 and T is a UMVUE, then Cov(U, T) = 0.
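The zero-covariance fact can be sanity-checked by simulation. A sketch for N(theta, 1), where T = xbar is the UMVUE of theta and U = X_1 - X_2 is unbiased for 0 (the specific U and theta = 2 are my own choices):

```python
import random

random.seed(4)

def cov_umvue_check(n=5, reps=40000):
    """For N(theta, 1): T = xbar is UMVUE of theta, U = X_1 - X_2 is
    unbiased for 0, so the theory says Cov(U, T) = 0. Estimate the
    covariance by simulation with theta = 2."""
    us, ts = [], []
    for _ in range(reps):
        xs = [random.gauss(2.0, 1.0) for _ in range(n)]
        us.append(xs[0] - xs[1])
        ts.append(sum(xs) / n)
    mu, mt = sum(us) / reps, sum(ts) / reps
    return sum((u - mu) * (t - mt) for u, t in zip(us, ts)) / reps

cov = cov_umvue_check()
print(cov)  # near 0
```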
Confidence Intervals
* Studentized intervals.
* Bootstrap intervals.
* Many options for the Fisher information: Fisher information at the MLE, observed Fisher information, etc. The observed Fisher information may be biased. If the bias is positive, the resulting coverage probability will be below 1 - alpha (but maybe the coverage probability converges to 1 - alpha). If the bias is negative, the intervals will be conservative.
* Variance-stabilizing transformations. Do we get better intervals this way? These intervals will be asymmetrical.
* What if the first derivative of g is near zero at the MLE?
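Among the bootstrap interval constructions, the percentile interval is the simplest to sketch (this is just one of the variants; the helper name is my own):

```python
import random

random.seed(2)

def bootstrap_percentile_ci(data, stat, b=2000, alpha=0.05):
    """Percentile bootstrap interval: resample with replacement,
    recompute the statistic, and take the alpha/2 and 1 - alpha/2
    empirical quantiles of the bootstrap replicates."""
    n = len(data)
    reps = sorted(stat([random.choice(data) for _ in range(n)])
                  for _ in range(b))
    lo = reps[int(b * alpha / 2)]
    hi = reps[int(b * (1 - alpha / 2)) - 1]
    return lo, hi

sample = [random.gauss(10, 2) for _ in range(100)]
mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_percentile_ci(sample, mean)
print(lo, hi)  # brackets the sample mean
```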
Linear Models
* Is S^2 always independent of beta-hat? Why?
* Why is the F-test equivalent to the t-test, and to the Likelihood Ratio Test?
* Review matrix calculus.
* Simultaneous confidence intervals (studentized maximum modulus, studentized range distributions).
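The F-test/t-test equivalence is easy to verify in simple linear regression, where t^2 for the slope equals the F-statistic comparing the full model to the intercept-only model. A sketch with closed-form least squares (my own worked example):

```python
import random

random.seed(6)

def t_and_f_for_slope(xs, ys):
    """Simple linear regression y = b0 + b1 x + e. For H0: b1 = 0,
    the squared t-statistic equals the F-statistic (1 numerator df),
    since the regression sum of squares TSS - RSS equals b1^2 * Sxx."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b1 = sxy / sxx
    b0 = my - b1 * mx
    rss = sum((y - b0 - b1 * x) ** 2 for x, y in zip(xs, ys))
    tss = sum((y - my) ** 2 for y in ys)
    s2 = rss / (n - 2)                    # S^2, the variance estimate
    t = b1 / (s2 / sxx) ** 0.5
    f = (tss - rss) / s2
    return t, f

xs = [random.uniform(0, 10) for _ in range(30)]
ys = [1 + 0.5 * x + random.gauss(0, 1) for x in xs]
t, f = t_and_f_for_slope(xs, ys)
print(t * t, f)  # equal
```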
Nonparametrics
* Complete sufficient statistics for nonparametric families (e.g. all distrs, symmetric distrs, mean-zero distrs, etc.)
* Kernel Regression.
* Kernel Density Estimation (work out the bias!)
* U-statistics, and using projections to obtain asymptotic normality.
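On working out the KDE bias: for N(0,1) data with a Gaussian kernel, the expected KDE is the N(0, 1 + h^2) density (a Gaussian convolution), so the exact bias at a point can be compared to the usual leading term (h^2 / 2) f''(x). A sketch at x = 0 (my own worked example):

```python
import math

def gaussian_kde_bias_at_zero(h):
    """For data ~ N(0,1) and a Gaussian kernel with bandwidth h, the
    KDE at x = 0 has expectation equal to the N(0, 1 + h^2) density
    at 0. Compare the exact bias to the second-order expansion
    (h^2 / 2) f''(0), using f''(x) = (x^2 - 1) f(x)."""
    phi0 = 1 / math.sqrt(2 * math.pi)                    # f(0)
    exact = 1 / math.sqrt(2 * math.pi * (1 + h * h)) - phi0
    approx = (h * h / 2) * (-phi0)                       # leading bias term
    return exact, approx

exact, approx = gaussian_kde_bias_at_zero(0.2)
print(exact, approx)  # both about -0.008
```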
Bayesian
* Review the exam problem on Metropolis-Hastings.
* Bayes Risk: may be minimized by the posterior mean, median, or mode, depending on the loss function.
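A minimal random-walk Metropolis-Hastings sketch, useful as a reference while reviewing (the target N(5, 1) and tuning constants are my own choices; the symmetric Gaussian proposal makes the proposal ratio cancel):

```python
import math
import random

random.seed(3)

def metropolis_hastings(log_target, x0, steps=20000, prop_sd=1.0):
    """Random-walk Metropolis-Hastings: propose x' ~ N(x, prop_sd^2)
    and accept with probability min(1, target(x') / target(x)),
    done on the log scale for numerical stability."""
    x, chain = x0, []
    for _ in range(steps):
        y = x + random.gauss(0, prop_sd)
        if math.log(random.random()) < log_target(y) - log_target(x):
            x = y
        chain.append(x)
    return chain

# Target: N(5, 1) up to a constant -- normalization is not needed.
chain = metropolis_hastings(lambda t: -0.5 * (t - 5.0) ** 2, x0=0.0)
burned = chain[5000:]                      # discard burn-in
est_mean = sum(burned) / len(burned)
print(est_mean)  # near 5
```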
Probability Facts
* Distributions: pdfs, cdfs, means, variances.
* Relations between distributions: conjugacy, convolutions, scaling.
* Law of Total Covariance.
* Joint distribution of the minimum and maximum order statistics.
* Inequalities: Markov, Chebyshev, Jensen.
* Dominated Convergence / Monotone Convergence: swap limit and integral.
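The Law of Total Covariance can be checked on a toy mixture. A sketch with Z ~ Bernoulli(1/2) and X = Z + e1, Y = Z + e2 for independent N(0,1) noise, so Cov(X,Y|Z) = 0 while Cov(E[X|Z], E[Y|Z]) = Var(Z) = 1/4 (my own example):

```python
import random

random.seed(5)

def total_covariance_demo(reps=60000):
    """Law of total covariance:
      Cov(X,Y) = E[Cov(X,Y|Z)] + Cov(E[X|Z], E[Y|Z]).
    Here the first term is 0 and the second is Var(Z) = 1/4, so the
    simulated Cov(X, Y) should come out near 0.25."""
    xs, ys = [], []
    for _ in range(reps):
        z = random.random() < 0.5          # Bernoulli(1/2), bool -> 0/1
        xs.append(z + random.gauss(0, 1))
        ys.append(z + random.gauss(0, 1))
    mx, my = sum(xs) / reps, sum(ys) / reps
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / reps

cov_xy = total_covariance_demo()
print(cov_xy)  # near 0.25
```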
Calculus Facts
* (1 + x/n)^n → e^x
* \sum_k x^k / k! = e^x
* \sum_{k=0}^n p^k = (1 - p^{n+1}) / (1 - p)
* \sum_{k=0}^n k p^k = p (1 - (n+1) p^n + n p^{n+1}) / (1 - p)^2
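The two finite-sum identities can be verified numerically against direct summation:

```python
def geom_sums(p, n):
    """Check the finite geometric-sum identities:
      sum_{k=0}^n p^k   = (1 - p^{n+1}) / (1 - p)
      sum_{k=0}^n k p^k = p (1 - (n+1) p^n + n p^{n+1}) / (1 - p)^2
    by comparing the closed forms to direct summation."""
    direct0 = sum(p**k for k in range(n + 1))
    direct1 = sum(k * p**k for k in range(n + 1))
    closed0 = (1 - p ** (n + 1)) / (1 - p)
    closed1 = p * (1 - (n + 1) * p**n + n * p ** (n + 1)) / (1 - p) ** 2
    return direct0, closed0, direct1, closed1

d0, c0, d1, c1 = geom_sums(0.6, 12)
print(d0, c0, d1, c1)
```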
