^{n}, and even in L^{2}! We defined Brownian Motion, and looked at sample paths as functions of time: for any fixed ω, the corresponding sample path is t ↦ X_{t}(ω).

We showed the existence of Brownian Motion by the Daniell-Kolmogorov Extension Theorem, which can be proven using the Carathéodory Extension Theorem (whatever that means).

We also defined notions of equality of stochastic processes. For simple random variables, the usual notions are:

* absolute equality (my coinage): for all ω . X(ω)=Y(ω)

* almost-sure equality: the set {ω | X(ω)=Y(ω)} has measure 1, also written P(X=Y) = 1.

* equality in distribution: for all measurable sets A, P(X ∈ A) = P(Y ∈ A)
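To make the gap between the last two notions concrete, here is a small simulation sketch (my own example, not from the lecture): if X ~ N(0, 1) and Y = −X, then X and Y are equal in distribution by symmetry, yet P(X = Y) = P(X = 0) = 0, so they are nowhere near almost-surely equal.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)  # samples of X ~ N(0, 1)
y = -x                            # Y = -X, same distribution by symmetry

# Equal in distribution: empirical P(X < a) and P(Y < a) agree for any a.
a = 0.5
print(abs(np.mean(x < a) - np.mean(y < a)) < 0.02)  # True

# But almost-sure equality fails badly: X(ω) = Y(ω) only when X(ω) = 0,
# an event of probability zero.
print(np.mean(x == y))  # 0.0
```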

Now, stochastic processes have a time index.

The notion of absolute equality can be developed very easily:

* X and Y are "absolutely equal" (again, my coinage) if ∀ ω, t . X_{t}(ω) = Y_{t}(ω)

But, in the case of continuous-time stochastic processes, the notion of almost-sure equality breaks into two non-equivalent notions, which differ on the placement of the quantifier ∀:

* X and Y are said to be *indistinguishable* if almost all the sample paths agree, i.e. if P({ω | ∀ t ∈ T . X_{t}(ω) = Y_{t}(ω)}) = 1,

* X and Y are said to be *modifications* of each other if at every time they are equal almost surely, i.e. ∀ t ∈ T, P({ω | X_{t}(ω) = Y_{t}(ω)}) = 1.
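The standard counterexample separating the two notions can be sketched numerically (my choice of illustration, not from the lecture). On Ω = [0, 1] with Lebesgue measure, let U(ω) = ω, and set X_{t}(ω) = 0 for all t, while Y_{t}(ω) = 1 if t = U(ω) and 0 otherwise. For each fixed t, P(Y_{t} ≠ X_{t}) = P(U = t) = 0, so Y is a modification of X; but every sample path of Y differs from X at t = U(ω), so P(∀ t . X_{t} = Y_{t}) = 0 and they are not indistinguishable.

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.uniform(0.0, 1.0, size=100_000)  # one U(ω) per simulated ω

def Y(t, u):
    # Y_t(ω) is the indicator of the single exceptional time t = U(ω)
    return (t == u).astype(float)

# Modification: at any FIXED t, Y_t = 0 = X_t almost surely
# (a continuous U never hits the fixed value t exactly).
print((Y(0.5, u) == 0.0).mean())  # 1.0

# Not indistinguishable: EVERY path of Y hits 1, at its own time t = U(ω).
print((Y(u, u) == 1.0).mean())  # 1.0
```

The quantifier swap is exactly what the simulation exposes: "for each t, almost every ω" holds, while "for almost every ω, every t" fails on every single path.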

Equality in distribution develops into:

* if X and Y have the same finite-dimensional distributions (FDDs), they are said to be *versions* of each other.

(I suspect there is a compactness-type result somewhere, to the effect that having the same FDDs implies having the same infinite-dimensional distributions too.)

---

Someone said the style of the lecture was very "French", in the sense that he only gave us a bunch of ideas, little detail, and no proofs. I'm totally ok with that, since proofs are best done in the privacy of your own home. The issue, of course, is making sure that one's derivation skills don't fall hopelessly behind. Hopefully the TA will put us through exercises.

The mathematical level seems to be uncomfortably high for most statisticians. A few people were lost when he defined consistency in terms of a "pushforward" (myself included).
