Chebyshev’s inequality and the Borel-Cantelli lemma are seemingly disparate results from probability theory, but they combine beautifully in demonstrating a curious property of Brownian motion: it has finite quadratic variation even though it has unbounded linear variation. Not only do the proofs of Chebyshev’s inequality and the Borel-Cantelli lemma have some interesting features themselves, but their application to the variation of Brownian motion also provides a nice example of how to prove explicitly that a sequence of random variables converges almost surely by showing that the probability of the divergence set is zero. In this note I want to bring out this last aspect in particular. (Quick reviews of lim sup, lim inf and the link between almost sure convergence and lim sup are provided in Appendices I and II at the end of this note.)
There are two equivalent forms of Chebyshev’s inequality. Letting $X$ denote an integrable random variable with mean $\mu = \mathbb{E}[X]$ and finite non-zero variance $\sigma^2 = \mathrm{Var}(X)$, one form of Chebyshev’s inequality says

$$\Pr\big(|X - \mu| \geq k\sigma\big) \leq \frac{1}{k^2}$$

where $k > 0$ is a real number. Equivalently, if we let $\varepsilon = k\sigma$, then we can write Chebyshev’s inequality as

$$\Pr\big(|X - \mu| \geq \varepsilon\big) \leq \frac{\sigma^2}{\varepsilon^2}.$$
Proof of Chebyshev’s inequality
The following is a proof of the second form. Since $|X - \mu| \geq \varepsilon$ if and only if $(X - \mu)^2 \geq \varepsilon^2$, we have

$$\Pr\big(|X - \mu| \geq \varepsilon\big) = \Pr\big((X - \mu)^2 \geq \varepsilon^2\big).$$

Let $Y = (X - \mu)^2$ so that $\mathbb{E}[Y] = \sigma^2$. Let

$$Z = \mathbf{1}_{\{Y \geq \varepsilon^2\}} = \begin{cases} 1 & \text{if } Y \geq \varepsilon^2 \\ 0 & \text{otherwise.} \end{cases}$$

If $Y \geq \varepsilon^2$ then $Z = 1$ and

$$Y \geq \varepsilon^2 Z$$

(by definition of $Z$). However, if $Y < \varepsilon^2$ then $Z = 0$, so it is still true that $Y \geq \varepsilon^2 Z$. Thus, this inequality always holds. Taking expectations we get

$$\sigma^2 = \mathbb{E}[Y] \geq \varepsilon^2\,\mathbb{E}[Z] = \varepsilon^2\,\Pr\big(Y \geq \varepsilon^2\big) = \varepsilon^2\,\Pr\big(|X - \mu| \geq \varepsilon\big).$$

Division of both sides by $\varepsilon^2$ gives Chebyshev’s inequality.
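As a quick numerical sanity check (not part of the original argument), the following Python sketch compares the empirical value of $\Pr(|X - \mu| \geq \varepsilon)$ with the bound $\sigma^2/\varepsilon^2$ for a simulated sample; the exponential distribution, sample size and thresholds are arbitrary choices for illustration.

```python
import numpy as np

# Illustrative Monte Carlo check of the second form of Chebyshev's inequality.
# The distribution (exponential) and sample size are arbitrary choices.
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=1_000_000)   # mean 2, variance 4
mu, var = x.mean(), x.var()

for eps in (1.0, 2.0, 4.0):
    empirical = np.mean(np.abs(x - mu) >= eps)   # estimate of P(|X - mu| >= eps)
    bound = var / eps**2                         # Chebyshev bound sigma^2 / eps^2
    print(f"eps={eps}: empirical={empirical:.4f}  bound={bound:.4f}")
```

The bound is typically quite loose, but it never fails, which is exactly what the proof above guarantees.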
The Borel-Cantelli lemma actually consists of a pair of results. Let $(A_n)$ be a sequence of events, i.e., a sequence of subsets of $\Omega$. Then

$$\limsup_{n \to \infty} A_n = \bigcap_{n=1}^{\infty} \bigcup_{m=n}^{\infty} A_m$$

is the set of all elements that are in an infinity of the subsets $A_n$. The Borel-Cantelli lemma says the following:

(a) If $\sum_{n=1}^{\infty} \Pr(A_n) < \infty$ then $\Pr\big(\limsup_{n \to \infty} A_n\big) = 0$.

(b) If $(A_n)$ is independent and $\sum_{n=1}^{\infty} \Pr(A_n) = \infty$ then $\Pr\big(\limsup_{n \to \infty} A_n\big) = 1$.
Proof of the Borel-Cantelli lemma
(a) For any integer $n$, we can write

$$\Pr\Big(\limsup_{k \to \infty} A_k\Big) = \Pr\Big(\bigcap_{k=1}^{\infty} \bigcup_{m=k}^{\infty} A_m\Big) \leq \Pr\Big(\bigcup_{m=n}^{\infty} A_m\Big) \leq \sum_{m=n}^{\infty} \Pr(A_m)$$

(the last inequality by subadditivity). Since $\sum_{n=1}^{\infty} \Pr(A_n)$ converges, the tail sums $\sum_{m=n}^{\infty} \Pr(A_m) \to 0$ as $n \to \infty$. Therefore (by a ‘sandwich’ argument) $\Pr\big(\limsup_{n \to \infty} A_n\big) = 0$.
(b) We have

$$\Big(\limsup_{n \to \infty} A_n\Big)^c = \Big(\bigcap_{n=1}^{\infty} \bigcup_{m=n}^{\infty} A_m\Big)^c = \bigcup_{n=1}^{\infty} \bigcap_{m=n}^{\infty} A_m^c$$

(by De Morgan’s laws). Therefore

$$\Pr\Big(\big(\limsup_{n \to \infty} A_n\big)^c\Big) \leq \sum_{n=1}^{\infty} \Pr\Big(\bigcap_{m=n}^{\infty} A_m^c\Big).$$

By the product rule for independent events,

$$\Pr\Big(\bigcap_{m=n}^{N} A_m^c\Big) = \prod_{m=n}^{N} \big(1 - \Pr(A_m)\big).$$

Now, for all $x \geq 0$ we have $1 - x \leq e^{-x}$. (To see this, let $f(x) = e^{-x} - (1 - x)$. Then $f(0) = 0$ and for $x \geq 0$ we have $f'(x) = 1 - e^{-x} \geq 0$. Therefore $f$ is an increasing function, so $f(x) \geq 0$ for $x \geq 0$.) It follows that

$$\prod_{m=n}^{N} \big(1 - \Pr(A_m)\big) \leq \prod_{m=n}^{N} e^{-\Pr(A_m)} = \exp\Big(-\sum_{m=n}^{N} \Pr(A_m)\Big)$$

and therefore also, since $\bigcap_{m=n}^{\infty} A_m^c \subseteq \bigcap_{m=n}^{N} A_m^c$,

$$\Pr\Big(\bigcap_{m=n}^{\infty} A_m^c\Big) \leq \exp\Big(-\sum_{m=n}^{N} \Pr(A_m)\Big) \quad \text{for every } N \geq n.$$

But $\sum_{m=n}^{N} \Pr(A_m) \to \infty$ as $N \to \infty$. Therefore $\exp\big(-\sum_{m=n}^{N} \Pr(A_m)\big) \to 0$ and $\Pr\big(\bigcap_{m=n}^{\infty} A_m^c\big) = 0$ for every $n$, so $\Pr\big((\limsup_{n \to \infty} A_n)^c\big) = 0$, so $\Pr\big(\limsup_{n \to \infty} A_n\big) = 1$, so the lemma is proved.
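To get a feel for the two cases, here is a small simulation sketch (my own illustration, with arbitrarily chosen probabilities): independent events with $\Pr(A_n) = 1/n^2$ (summable) versus $\Pr(A_n) = 1/n$ (non-summable). Along a typical sample path only finitely many events of the first kind occur, whereas occurrences of the second kind keep accumulating.

```python
import numpy as np

# Simulate independent events A_n and count how many of the first N occur
# along one sample path, for summable and non-summable probabilities.
rng = np.random.default_rng(1)
N = 100_000
n = np.arange(1, N + 1)

summable = rng.random(N) < 1.0 / n**2   # sum P(A_n) < inf: finitely many occur (a.s.)
divergent = rng.random(N) < 1.0 / n     # sum P(A_n) = inf: infinitely many occur (a.s.)

print("occurrences with P(A_n) = 1/n^2:", summable.sum())    # typically just a few
print("occurrences with P(A_n) = 1/n:  ", divergent.sum())   # grows roughly like log N
```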
To apply these results to the variation of Brownian motion, let

$$\Pi_n = \{0 = t_0 < t_1 < \cdots < t_{k_n} = T\}$$

be a partition of the interval $[0, T]$ (the partition points depend on $n$, but we suppress this in the notation). Define the mesh of the partition $\Pi_n$ by

$$\|\Pi_n\| = \max_{1 \leq i \leq k_n} (t_i - t_{i-1}).$$

Let us restrict ourselves only to sequences of partitions for which $\|\Pi_n\| \to 0$ as $n \to \infty$, and in particular for which $\sum_{n=1}^{\infty} \|\Pi_n\| < \infty$. For every $p \geq 1$ let

$$V_p(f, \Pi_n) = \sum_{i=1}^{k_n} \big|f(t_i) - f(t_{i-1})\big|^p$$

and let

$$V_p(f) = \lim_{n \to \infty} V_p(f, \Pi_n),$$

where $f$ is a real-valued function defined on $[0, T]$. If $V_p(f) < \infty$ we will say that $f$ has finite $p$-th variation on $[0, T]$. In the case $p = 1$, if $V_1(f) < \infty$ we will say that $f$ is of bounded linear variation on $[0, T]$. In the case $p = 2$, if $V_2(f) < \infty$ we will say that $f$ is of bounded quadratic variation on $[0, T]$.
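As a concrete illustration of these definitions (the test function and uniform partitions below are my own choices, not part of the note), the following sketch computes $V_1(f, \Pi_n)$ and $V_2(f, \Pi_n)$ for a smooth function on successively finer partitions; for a continuously differentiable $f$ the linear variation stabilises while the quadratic variation shrinks towards zero.

```python
import numpy as np

def p_variation(f_vals, p):
    """Sum of |f(t_i) - f(t_{i-1})|^p over a partition, given f at the partition points."""
    return np.sum(np.abs(np.diff(f_vals)) ** p)

T = 1.0
for n in (10, 100, 1_000, 10_000):
    t = np.linspace(0.0, T, n + 1)     # uniform partition with mesh T/n
    f = np.sin(2 * np.pi * t)          # an arbitrary smooth test function
    print(f"n={n:6d}  V1={p_variation(f, 1):.4f}  V2={p_variation(f, 2):.6f}")
```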
If we replace $f$ in the above by a Brownian motion $W = (W_t)_{t \in [0, T]}$, then we find a curious result: $W$ is of unbounded linear variation almost surely, but it has a finite quadratic variation equal to $T$ almost surely. In other words,

$$V_1(W) = \lim_{n \to \infty} \sum_{i=1}^{k_n} \big|W_{t_i} - W_{t_{i-1}}\big| = \infty \quad \text{(a.s.)}$$

$$V_2(W) = \lim_{n \to \infty} \sum_{i=1}^{k_n} \big(W_{t_i} - W_{t_{i-1}}\big)^2 = T \quad \text{(a.s.)}$$

We will use Chebyshev’s inequality and the Borel-Cantelli lemma to prove these.
Theorem 1
If $(\Pi_n)$ is a sequence of partitions of $[0, T]$ with $\sum_{n=1}^{\infty} \|\Pi_n\| < \infty$ (and therefore $\|\Pi_n\| \to 0$), then

$$V_2(W, \Pi_n) = \sum_{i=1}^{k_n} \big(W_{t_i} - W_{t_{i-1}}\big)^2 \to T \quad \text{(a.s.)}$$

as $n \to \infty$. In other words, Brownian motion has bounded quadratic variation on $[0, T]$ equal to $T$, almost surely.
Proof:
Observe that, by ‘telescoping’,

$$T = \sum_{i=1}^{k_n} (t_i - t_{i-1}).$$

Let

$$Y_n = V_2(W, \Pi_n) - T = \sum_{i=1}^{k_n} \Big[\big(W_{t_i} - W_{t_{i-1}}\big)^2 - (t_i - t_{i-1})\Big].$$

Note that $\mathbb{E}[Y_n] = 0$ because $\mathbb{E}\big[(W_{t_i} - W_{t_{i-1}})^2\big] = t_i - t_{i-1}$. Therefore

$$\mathrm{Var}(Y_n) = \mathbb{E}\big[Y_n^2\big] = \sum_{i=1}^{k_n} \mathbb{E}\Big[\Big(\big(W_{t_i} - W_{t_{i-1}}\big)^2 - (t_i - t_{i-1})\Big)^2\Big]$$

(cross product terms vanish since Brownian motion has independent increments)

$$= \sum_{i=1}^{k_n} \Big[\mathbb{E}\big[(W_{t_i} - W_{t_{i-1}})^4\big] - (t_i - t_{i-1})^2\Big] = \sum_{i=1}^{k_n} \Big[3(t_i - t_{i-1})^2 - (t_i - t_{i-1})^2\Big] = 2 \sum_{i=1}^{k_n} (t_i - t_{i-1})^2$$

(since the fourth moment of a zero-mean normal random variable is $3\sigma^4$ and here $\sigma^2 = t_i - t_{i-1}$)

$$\leq 2\,\|\Pi_n\| \sum_{i=1}^{k_n} (t_i - t_{i-1}) = 2T\,\|\Pi_n\|.$$

(Since $\|\Pi_n\| \to 0$, we see already at this stage that $V_2(W, \Pi_n) \to T$ in $L^2$.)

By the second form of Chebyshev’s inequality we can now write for any $\varepsilon > 0$ that

$$\Pr\big(|Y_n| \geq \varepsilon\big) \leq \frac{\mathrm{Var}(Y_n)}{\varepsilon^2} \leq \frac{2T\,\|\Pi_n\|}{\varepsilon^2}.$$

In particular, we can write for any integer $k \geq 1$ that

$$\sum_{n=1}^{\infty} \Pr\big(|Y_n| \geq \tfrac{1}{k}\big) \leq 2Tk^2 \sum_{n=1}^{\infty} \|\Pi_n\| < \infty.$$

By the first result in the Borel-Cantelli lemma we can then say

$$\Pr\Big(\limsup_{n \to \infty} A_n^k\Big) = 0, \quad \text{where} \quad A_n^k = \big\{|Y_n| \geq \tfrac{1}{k}\big\}.$$

Now,

$$\Pr\Big(\bigcup_{k=1}^{\infty} \limsup_{n \to \infty} A_n^k\Big) \leq \sum_{k=1}^{\infty} \Pr\Big(\limsup_{n \to \infty} A_n^k\Big),$$

so if $\Pr\big(\limsup_{n \to \infty} A_n^k\big) = 0$ for every $k$ then we must also have $\Pr\big(\bigcup_{k=1}^{\infty} \limsup_{n \to \infty} A_n^k\big) = 0$. But

$$\bigcup_{k=1}^{\infty} \limsup_{n \to \infty} A_n^k = \bigcup_{k=1}^{\infty} \limsup_{n \to \infty} \big\{|Y_n| \geq \tfrac{1}{k}\big\}$$

is the divergence set when considering the almost sure convergence of $Y_n$ to zero. Since the probability of the divergence set is zero, we conclude that $Y_n \to 0$ a.s., and therefore $V_2(W, \Pi_n) \to T$ (a.s.), as required.
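The convergence in Theorem 1 is easy to see numerically. The sketch below (an illustration I am adding, not part of the proof) simulates one Brownian path on a fine dyadic grid and evaluates the quadratic-variation sums over nested dyadic partitions of $[0, T]$; the sums settle down near $T$.

```python
import numpy as np

# One Brownian path on [0, T], sampled on a fine dyadic grid, then the
# quadratic-variation sum V2(W, Pi_n) computed over nested coarser partitions.
rng = np.random.default_rng(2)
T, K = 1.0, 20                                        # finest grid has 2**K steps
dt = T / 2**K
increments = rng.normal(0.0, np.sqrt(dt), size=2**K)
W = np.concatenate(([0.0], np.cumsum(increments)))    # W at the finest grid points

for k in (4, 8, 12, 16, 20):
    W_k = W[::2 ** (K - k)]                           # keep every 2**(K-k)-th point
    V2 = np.sum(np.diff(W_k) ** 2)
    print(f"2^{k:2d} intervals: V2 = {V2:.5f}   (target T = {T})")
```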
Theorem 2
If $(\Pi_n)$ is a sequence of partitions of $[0, T]$ with $\sum_{n=1}^{\infty} \|\Pi_n\| < \infty$, then

$$V_1(W, \Pi_n) = \sum_{i=1}^{k_n} \big|W_{t_i} - W_{t_{i-1}}\big| \to \infty \quad \text{(a.s.)}$$

as $n \to \infty$. In other words, Brownian motion has unbounded linear variation on $[0, T]$, almost surely.
Proof:
By contradiction. Suppose Brownian motion has bounded linear variation, i.e., $V_1(W) < \infty$. Then we can write

$$V_2(W, \Pi_n) = \sum_{i=1}^{k_n} \big(W_{t_i} - W_{t_{i-1}}\big)^2 \leq \Big(\max_{1 \leq i \leq k_n} \big|W_{t_i} - W_{t_{i-1}}\big|\Big) \sum_{i=1}^{k_n} \big|W_{t_i} - W_{t_{i-1}}\big| = \Big(\max_{1 \leq i \leq k_n} \big|W_{t_i} - W_{t_{i-1}}\big|\Big)\, V_1(W, \Pi_n).$$

Since $W$ is uniformly continuous on $[0, T]$ ($W$ is continuous on $[0, T]$ and any continuous function is uniformly continuous when restricted to a compact set) we can write

$$\max_{1 \leq i \leq k_n} \big|W_{t_i} - W_{t_{i-1}}\big| \to 0 \quad \text{as} \quad \|\Pi_n\| \to 0,$$

which if $V_1(W) < \infty$ implies

$$V_2(W, \Pi_n) \to 0 \quad \text{(a.s.)}$$

This contradicts Theorem 1. Therefore $W$ cannot have bounded linear variation and the theorem is proved.
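Correspondingly, the linear-variation sums along the same kind of simulated path blow up as the partitions get finer (roughly like the square root of the number of intervals). Again, this is only an illustrative sketch of the theorem, with the grid sizes chosen arbitrarily.

```python
import numpy as np

# Linear-variation sums V1(W, Pi_n) for one Brownian path on [0, T], computed
# over successively finer dyadic partitions: they grow without bound.
rng = np.random.default_rng(3)
T, K = 1.0, 20
dt = T / 2**K
W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), size=2**K))))

for k in (4, 8, 12, 16, 20):
    W_k = W[::2 ** (K - k)]
    V1 = np.sum(np.abs(np.diff(W_k)))
    print(f"2^{k:2d} intervals: V1 = {V1:.2f}")   # keeps increasing as k grows
```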
Appendix I Notes on lim sup and lim inf
Let $(A_n)$ be a sequence of events, i.e., a sequence of subsets of $\Omega$.

$$\limsup_{n \to \infty} A_n = \text{the set of elements that are in an infinite number of the subsets } A_n$$

$$\liminf_{n \to \infty} A_n = \text{the set of elements that are in all but a finite number of the subsets } A_n$$

Note that $\liminf_{n \to \infty} A_n \subseteq \limsup_{n \to \infty} A_n$, but the inclusion does not go in the opposite direction because $\omega$ can be in an infinity of sets without being in all but a finite number of them, e.g., $\omega$ can be in all odd-numbered sets $A_1, A_3, A_5, \ldots$, but not in even-numbered ones.

If $\liminf_{n \to \infty} A_n = \limsup_{n \to \infty} A_n = A$ we say the sequence has a limit $A$.

We can express lim inf and lim sup in terms of unions and intersections of members of the sequence as follows:

$$\liminf_{n \to \infty} A_n = \bigcup_{n=1}^{\infty} \bigcap_{m=n}^{\infty} A_m$$

$$\limsup_{n \to \infty} A_n = \bigcap_{n=1}^{\infty} \bigcup_{m=n}^{\infty} A_m$$

To understand the first expression, note that $\omega$ is in all but a finite number of sets if for some $n$ it is in all $A_m$ for $m \geq n$, which we denote $\omega \in \bigcup_{n=1}^{\infty} \bigcap_{m=n}^{\infty} A_m$. The $\bigcup_{n=1}^{\infty}$ part means “for some $n$” and the $\bigcap_{m=n}^{\infty}$ part means “in all $A_m$ for $m \geq n$”.

Similarly, for the second expression, $\omega$ is in an infinity of the sets if for every $n$ it is in some of the $A_m$ for $m \geq n$, which we denote $\omega \in \bigcap_{n=1}^{\infty} \bigcup_{m=n}^{\infty} A_m$.
Appendix II Almost sure convergence and its link with lim sup
A sequence of random variables $(Y_n)$ is said to converge almost surely to a random variable $Y$ iff the probability of the divergence set is zero.

The sequence fails to converge to $Y$ at an outcome $\omega$ iff for some $\varepsilon > 0$ and for any $n$ there is some $m \geq n$ such that $|Y_m(\omega) - Y(\omega)| \geq \varepsilon$ (so no matter how large I make $n$, there will always be some $m \geq n$ for which this inequality holds).

Let $A_n^{\varepsilon} = \{\omega : |Y_n(\omega) - Y(\omega)| \geq \varepsilon\}$. Then the divergence set is the set of outcomes for which the above divergence condition is true, which we write as

$$D = \bigcup_{\varepsilon > 0} \bigcap_{n=1}^{\infty} \bigcup_{m=n}^{\infty} A_m^{\varepsilon} = \bigcup_{k=1}^{\infty} \bigcap_{n=1}^{\infty} \bigcup_{m=n}^{\infty} A_m^{1/k}.$$

We can also write the divergence set using lim sup as

$$D = \bigcup_{k=1}^{\infty} \limsup_{n \to \infty} A_n^{1/k} = \bigcup_{k=1}^{\infty} \limsup_{n \to \infty} \big\{|Y_n - Y| \geq \tfrac{1}{k}\big\}.$$

A common way to prove almost sure convergence is therefore to prove that

$$\Pr\Big(\limsup_{n \to \infty} \big\{|Y_n - Y| \geq \tfrac{1}{k}\big\}\Big) = 0 \quad \text{for every integer } k \geq 1,$$

since then $\Pr(D) \leq \sum_{k=1}^{\infty} \Pr\big(\limsup_{n \to \infty} \{|Y_n - Y| \geq \tfrac{1}{k}\}\big) = 0$.
