Probability
About these notes. Many people have written excellent notes for introductory
courses in probability. Mine draw freely on material prepared by others in present-
ing this course to students at Cambridge. I wish to acknowledge especially Geoffrey
Grimmett, Frank Kelly and Doug Kennedy.
The order I follow is a bit different to that listed in the Schedules. Most of the material
can be found in the recommended books by Grimmett & Welsh, and Ross. Many of the
examples are classics and mandatory in any sensible introductory course on probability.
The book by Grinstead & Snell is easy reading and I know students have enjoyed it.
There are also some very good Wikipedia articles on many of the topics we will consider.
In these notes I attempt a ‘Goldilocks path’ by being neither too detailed nor too brief.
• Each lecture has a title and focuses upon just one or two ideas.
• My notes for each lecture are limited to 4 pages.
I also include some entertaining, but nonexaminable topics, some of which are unusual
for a course at this level (such as random permutations, entropy, reflection principle,
Benford and Zipf distributions, Erdős’s probabilistic method, value at risk, eigenvalues
of random matrices, Kelly criterion, Chernoff bound).
You should enjoy the book of Grimmett & Welsh, and the notes of Kennedy.
Printed notes, good or bad? I have wondered whether it is helpful or not to
publish full course notes. On balance, I think that it is. It is helpful in that we can
dispense with some tedious copying-out, and you are guaranteed an accurate account.
But there are also benefits to hearing and writing down things yourself during a lecture,
and so I recommend that you still do some of that.
I will say things in every lecture that are not in the notes. I will sometimes tell you when
it would be good to make an extra note. In learning mathematics repeated exposure
to ideas is essential. I hope that by doing all of reading, listening, writing and (most
importantly) solving problems you will master and enjoy this course.
I recommend Tom Körner’s treatise on how to listen to a maths lecture.
Contents
About these notes i
Table of Contents ii
Schedules vi
Learning outcomes vii
1 Classical probability 1
1.1 Diverse notions of ‘probability’ . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Classical probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 Sample space and events . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 Equalizations in random walk . . . . . . . . . . . . . . . . . . . . . . . . 3
2 Combinatorial analysis 6
2.1 Counting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 Sampling with or without replacement . . . . . . . . . . . . . . . . . . . 6
2.3 Sampling with or without regard to ordering . . . . . . . . . . . . . . . 8
2.4 Four cases of enumerative combinatorics . . . . . . . . . . . . . . . . . . 8
3 Stirling’s formula 10
3.1 Multinomial coefficient . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.2 Stirling’s formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.3 Improved Stirling’s formula . . . . . . . . . . . . . . . . . . . . . . . . . 13
4 Axiomatic approach 14
4.1 Axioms of probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.2 Boole’s inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.3 Inclusion-exclusion formula . . . . . . . . . . . . . . . . . . . . . . . . . 17
5 Independence 18
5.1 Bonferroni’s inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
5.2 Independence of two events . . . . . . . . . . . . . . . . . . . . . . . . . 19
5.3 Independence of multiple events . . . . . . . . . . . . . . . . . . . . . . . 20
5.4 Important distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
5.5 Poisson approximation to the binomial . . . . . . . . . . . . . . . . . . . 21
6 Conditional probability 22
6.1 Conditional probability . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
6.2 Properties of conditional probability . . . . . . . . . . . . . . . . . . . . 22
6.3 Law of total probability . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
6.4 Bayes’ formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
6.5 Simpson’s paradox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
7 Discrete random variables 26
7.1 Continuity of P . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
7.2 Discrete random variables . . . . . . . . . . . . . . . . . . . . . . . . . . 27
7.3 Expectation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
7.4 Function of a random variable . . . . . . . . . . . . . . . . . . . . . . . . 29
7.5 Properties of expectation . . . . . . . . . . . . . . . . . . . . . . . . . . 29
8 Further functions of random variables 30
8.1 Expectation of sum is sum of expectations . . . . . . . . . . . . . . . . . 30
8.2 Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
8.3 Indicator random variables . . . . . . . . . . . . . . . . . . . . . . . . . 32
8.4 Reproof of inclusion-exclusion formula . . . . . . . . . . . . . . . . . . . 33
8.5 Zipf’s law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
9 Independent random variables 34
9.1 Independent random variables . . . . . . . . . . . . . . . . . . . . . . . . 34
9.2 Variance of a sum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
9.3 Efron’s dice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
9.4 Cycle lengths in a random permutation . . . . . . . . . . . . . . . . . . 37
10 Inequalities 38
10.1 Jensen’s inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
10.2 AM–GM inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
10.3 Cauchy-Schwarz inequality . . . . . . . . . . . . . . . . . . . . . . . . . . 39
10.4 Covariance and correlation . . . . . . . . . . . . . . . . . . . . . . . . . . 40
10.5 Information entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
11 Weak law of large numbers 42
11.1 Markov inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
11.2 Chebyshev inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
11.3 Weak law of large numbers . . . . . . . . . . . . . . . . . . . . . . . . . 43
11.4 Probabilistic proof of Weierstrass approximation theorem . . . . . . . . 44
11.5 Benford’s law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
12 Probability generating functions 46
12.1 Probability generating function . . . . . . . . . . . . . . . . . . . . . . . 46
12.2 Combinatorial applications . . . . . . . . . . . . . . . . . . . . . . . . . 48
13 Conditional expectation 50
13.1 Conditional distribution and expectation . . . . . . . . . . . . . . . . . . 50
13.2 Properties of conditional expectation . . . . . . . . . . . . . . . . . . . . 51
13.3 Sums with a random number of terms . . . . . . . . . . . . . . . . . . . 51
13.4 Aggregate loss distribution and VaR . . . . . . . . . . . . . . . . . . . . 52
13.5 Conditional entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
14 Branching processes 54
14.1 Branching processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
14.2 Generating function of a branching process . . . . . . . . . . . . . . . . 54
14.3 Probability of extinction . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
15 Random walk and gambler’s ruin 58
15.1 Random walks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
15.2 Gambler’s ruin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
15.3 Duration of the game . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
15.4 Use of generating functions in random walk . . . . . . . . . . . . . . . . 61
16 Continuous random variables 62
16.1 Continuous random variables . . . . . . . . . . . . . . . . . . . . . . . . 62
16.2 Uniform distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
16.3 Exponential distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
16.4 Hazard rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
16.5 Relationships among probability distributions . . . . . . . . . . . . . . . 65
17 Functions of a continuous random variable 66
17.1 Distribution of a function of a random variable . . . . . . . . . . . . . . 66
17.2 Expectation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
17.3 Stochastic ordering of random variables . . . . . . . . . . . . . . . . . . 68
17.4 Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
17.5 Inspection paradox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
18 Jointly distributed random variables 70
18.1 Jointly distributed random variables . . . . . . . . . . . . . . . . . . . . 70
18.2 Independence of continuous random variables . . . . . . . . . . . . . . . 71
18.3 Geometric probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
18.4 Bertrand’s paradox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
18.5 Buffon’s needle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
19 Normal distribution 74
19.1 Normal distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
19.2 Calculations with the normal distribution . . . . . . . . . . . . . . . . . 75
19.3 Mode, median and sample mean . . . . . . . . . . . . . . . . . . . . . . 76
19.4 Distribution of order statistics . . . . . . . . . . . . . . . . . . . . . . . . 76
19.5 Stochastic bin packing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
20 Transformations of random variables 78
20.1 Transformation of random variables . . . . . . . . . . . . . . . . . . . . 78
20.2 Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
20.3 Cauchy distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
21 Moment generating functions 82
21.1 What happens if the mapping is not 1–1? . . . . . . . . . . . . . . . . . 82
21.2 Minimum of exponentials is exponential . . . . . . . . . . . . . . . . . . 82
21.3 Moment generating functions . . . . . . . . . . . . . . . . . . . . . . . . 83
21.4 Gamma distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
21.5 Beta distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
22 Multivariate normal distribution 86
22.1 Moment generating function of normal distribution . . . . . . . . . . . . 86
22.2 Functions of normal random variables . . . . . . . . . . . . . . . . . . . 86
22.3 Bounds on tail probability of a normal distribution . . . . . . . . . . . . 87
22.4 Multivariate normal distribution . . . . . . . . . . . . . . . . . . . . . . 87
22.5 Bivariate normal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
22.6 Multivariate moment generating function . . . . . . . . . . . . . . . . . 89
23 Central limit theorem 90
23.1 Central limit theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
23.2 Normal approximation to the binomial . . . . . . . . . . . . . . . . . . . 91
23.3 Estimating π with Buffon’s needle . . . . . . . . . . . . . . . . . . . . . 93
24 Continuing studies in probability 94
24.1 Large deviations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
24.2 Chernoff bound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
24.3 Random matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
24.4 Concluding remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
A Problem solving strategies 98
B Fast Fourier transform and p.g.fs 100
C The Jacobian 101
D Beta distribution 103
E Kelly criterion 104
F Ballot theorem 105
G Allais paradox 106
H IB courses in applicable mathematics 107
Index 107
Richard Weber, Lent Term 2014
This is reproduced from the Faculty handbook.
Schedules
All this material will be covered in lectures, but in a slightly different order.
Basic concepts: Classical probability, equally likely outcomes. Combinatorial analysis, per-
mutations and combinations. Stirling’s formula (asymptotics for log n! proved). [3]
Axiomatic approach: Axioms (countable case). Probability spaces. Inclusion-exclusion
formula. Continuity and subadditivity of probability measures. Independence. Binomial,
Poisson and geometric distributions. Relation between Poisson and binomial distributions.
Conditional probability, Bayes’ formula. Examples, including Simpson’s paradox. [5]
Discrete random variables: Expectation. Functions of a random variable, indicator func-
tion, variance, standard deviation. Covariance, independence of random variables. Generating
functions: sums of independent random variables, random sum formula, moments. Conditional
expectation. Random walks: gambler’s ruin, recurrence relations. Difference equations and
their solution. Mean time to absorption. Branching processes: generating functions and ex-
tinction probability. Combinatorial applications of generating functions. [7]
Continuous random variables: Distributions and density functions. Expectations; expec-
tation of a function of a random variable. Uniform, normal and exponential random variables.
Memoryless property of exponential distribution. Joint distributions: transformation of ran-
dom variables (including Jacobians), examples. Simulation: generating continuous random
variables, independent normal random variables. Geometrical probability: Bertrand’s para-
dox, Buffon’s needle. Correlation coefficient, bivariate normal random variables. [6]
Inequalities and limits: Markov’s inequality, Chebyshev’s inequality. Weak law of large
numbers. Convexity: Jensens inequality for general random variables, AM/GM inequality.
Moment generating functions and statement (no proof) of continuity theorem. Statement of
central limit theorem and sketch of proof. Examples, including sampling. [3]
This is reproduced from the Faculty handbook.
Learning outcomes
From its origin in games of chance and the analysis of experimental data, probability
theory has developed into an area of mathematics with many varied applications in
physics, biology and business.
The course introduces the basic ideas of probability and should be accessible to stu-
dents who have no previous experience of probability or statistics. While developing the
underlying theory, the course should strengthen students’ general mathematical back-
ground and manipulative skills by its use of the axiomatic approach. There are links
with other courses, in particular Vectors and Matrices, the elementary combinatorics
of Numbers and Sets, the difference equations of Differential Equations and calculus
of Vector Calculus and Analysis. Students should be left with a sense of the power of
mathematics in relation to a variety of application areas. After a discussion of basic
concepts (including conditional probability, Bayes’ formula, the binomial and Poisson
distributions, and expectation), the course studies random walks, branching processes,
geometric probability, simulation, sampling and the central limit theorem. Random
walks can be used, for example, to represent the movement of a molecule of gas or the
fluctuations of a share price; branching processes have applications in the modelling of
chain reactions and epidemics. Through its treatment of discrete and continuous ran-
dom variables, the course lays the foundation for the later study of statistical inference.
By the end of this course, you should:
• understand the basic concepts of probability theory, including independence, con-
ditional probability, Bayes’ formula, expectation, variance and generating func-
tions;
• be familiar with the properties of commonly-used distribution functions for dis-
crete and continuous random variables;
• understand and be able to apply the central limit theorem.
• be able to apply the above theory to ‘real world’ problems, including random
walks and branching processes.
1 Classical probability
Classical probability. Sample spaces. Equally likely outcomes. *Equalizations of heads
and tails*. *Arcsine law*.
1.1 Diverse notions of ‘probability’
Consider some uses of the word ‘probability’.
1. The probability that a fair coin will land heads is 1/2.
2. The probability that a selection of 6 numbers wins the National Lottery Lotto jackpot is 1 in $\binom{49}{6} = 13{,}983{,}816$, or $7.15112 \times 10^{-8}$.
3. The probability that a drawing pin will land ‘point up’ is 0.62.
4. The probability that a large earthquake will occur on the San Andreas Fault in
the next 30 years is about 21%.
5. The probability that humanity will be extinct by 2100 is about 50%.
Clearly, these are quite different notions of probability (known as classical (1, 2), frequentist (3) and subjective (4, 5) probability).
Probability theory is useful in the biological, physical, actuarial, management and com-
puter sciences, in economics, engineering, and operations research. It helps in modeling
complex systems and in decision-making when there is uncertainty. It can be used to
prove theorems in other mathematical fields (such as analysis, number theory, game
theory, graph theory, quantum theory and communications theory).
Mathematical probability began its development in Renaissance Europe when mathe-
maticians such as Pascal and Fermat started to take an interest in understanding games
of chance. Indeed, one can develop much of the subject simply by questioning what
happens in games that are played by tossing a fair coin. We begin with the classical
approach (lectures 1 and 2), and then shortly come to an axiomatic approach (lecture
4) which makes ‘probability’ a well-defined mathematical subject.
1.2 Classical probability
Classical probability applies in situations in which there are just a finite number of
equally likely possible outcomes. For example, tossing a fair coin or an unloaded die,
or picking a card from a standard well-shuffled pack.
Example 1.1 [Problem of points considered by Pascal, Fermat 1654]. Equally skilled
players A and B play a series of games. The winner of each game gets a point. The
winner is the first to reach 10 points. They are forced to stop early, when A has 8
points and B has 7 points. How should they divide the stake?
Consider the next 4 games. Exactly one player must reach 10 points. There are 16
equally likely outcomes:
A wins: AAAA; AAAB, AABA, ABAA, BAAA; AABB, ABBA, ABAB, BABA, BAAB, BBAA (11 outcomes).
B wins: ABBB, BABB, BBAB, BBBA; BBBB (5 outcomes).
Player A wins if she wins 4, 3 or 2 of the next 4 games (and loses if she wins only 1
or 0 games). She can win 4, 3 or 2 games in 1, 4 and 6 ways, respectively. There are
16 (= 2 × 2 × 2 × 2) possible results for the next 4 games. So P(A wins) = 11/16. It
would seem fair that she should receive 11/16 of the stake.
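A quick check of this count (a short Python sketch, not part of the original notes) enumerates the $2^4$ equally likely continuations directly: A wins the stake whenever she takes at least 2 of the next 4 games.

```python
from itertools import product

# A needs 2 more points, B needs 3; imagine playing 4 further games regardless.
outcomes = list(product("AB", repeat=4))                # 16 equally likely sequences
a_wins = sum(seq.count("A") >= 2 for seq in outcomes)   # A reaches 10 points first
print(a_wins, len(outcomes), a_wins / len(outcomes))    # 11 16 0.6875 = 11/16
```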
1.3 Sample space and events
Let’s generalise the above example. Consider an experiment which has a random out-
come. The set of all possible outcomes is called the sample space. If the number of
possible outcomes is countable we might list them, as ω1, ω2, . . . , and then the sample
space is Ω = {ω1, ω2, . . .}. Choosing a particular point ω ∈ Ω provides an observation.
Remark. A sample space need not be countable. For example, an infinite sequence of
coin tosses like TTHTHHT. . . is in 1–1 relation to binary fractions like 0.0010110 . . .
and the number of these is uncountable.
Certain set theory notions have special meaning and terminology when used in the
context of probability.
1. A subset A of Ω is called an event.
2. For any events $A, B \subseteq \Omega$,
• The complement event $A^c = \Omega \setminus A$ is the event that A does not occur, or
‘not A’. This is also sometimes written as $\bar{A}$ or $A'$.
• A ∪ B is ‘A or B’.
• A ∩ B is ‘A and B’.
• A ⊆ B: occurrence of A implies occurrence of B.
• A ∩ B = ∅ is ‘A and B are mutually exclusive or disjoint events’.
As already mentioned, in classical probability the sample space consists of a finite
number of equally likely outcomes, Ω = {ω1, . . . , ωN }. For A ⊆ Ω,
$$P(A) = \frac{\text{number of outcomes in } A}{\text{number of outcomes in } \Omega} = \frac{|A|}{N}.$$
Thus (as Laplace put it) P(A) is the quotient of ‘number of favourable outcomes’ (when
A occurs) divided by ‘number of possible outcomes’.
Example 1.2. Suppose r digits are chosen from a table of random numbers. Find the
probability that, for 0 ≤ k ≤ 9, (i) no digit exceeds k, and (ii) k is the greatest digit
drawn.
Take
Ω = {(a1, . . . , ar) : 0 ≤ ai ≤ 9, i = 1, . . . , r}.
Let Ak = [no digit exceeds k], or as a subset of Ω
Ak = {(a1, . . . , ar) : 0 ≤ ai ≤ k, i = 1, . . . , r}.
Thus $|\Omega| = 10^r$ and $|A_k| = (k+1)^r$. So (i) $P(A_k) = (k+1)^r/10^r$.

[Figure omitted: nested events $A_{k-1} \subseteq A_k \subseteq \Omega$.]

(ii) The event that $k$ is the greatest digit drawn is $B_k = A_k \setminus A_{k-1}$. So $|B_k| = |A_k| - |A_{k-1}|$ and
$$P(B_k) = \frac{(k+1)^r - k^r}{10^r}.$$
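As a sanity check on this formula, the following sketch (Python, illustrative only; r = 3 and k = 7 are arbitrary choices) compares it with a simulation.

```python
import random

r, k = 3, 7
exact = ((k + 1)**r - k**r) / 10**r     # P(greatest digit drawn equals k)

trials = 200_000
hits = sum(max(random.randrange(10) for _ in range(r)) == k
           for _ in range(trials))
print(exact, hits / trials)             # the two values should agree closely
```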
1.4 Equalizations in random walk
In later lectures we will study random walks. Many interesting questions can be asked
about the random path produced by tosses of a fair coin (+1 for a head, −1 for a tail).
By the following example I hope to convince you that probability theory contains
beautiful and surprising results.
(The questions below refer to a figure, omitted here, showing a simulated walk of 100 steps.)
• What is the probability that after an odd number of steps the walk is on the positive side of the x-axis? (Answer: obviously 1/2.)
• How many times on average does a walk of length n cross the x-axis?
• When does the first (or last) crossing of the x-axis typically occur?
• What is the distribution of the terminal point? The walk shown returned to the x-axis after 100 steps. How likely is this?
Example 1.3. Suppose we toss a fair coin 2n times. We say that an equalization occurs
at the (2k)th toss if there have been k heads and k tails. Let un be the probability
that equalization occurs at the (2n)th toss (so there have been n heads and n tails).
Here are two rare things that might happen when we toss a coin 2n times.
• No equalization ever takes place (except at the start).
• An equalization takes place at the end (exactly n heads and n tails).
Which do you think is more likely?
Let αk = P(there is no equalization after any of 2, 4, . . . , 2k tosses).
Let uk = P(there is equalization after 2k tosses).
The pictures at the right (omitted here) show the results of tossing a fair coin 4 times, when the first toss is a head. Notice that of these 8 equally likely outcomes there are 3 that have no equalization except at the start, and 3 that have an equalization at the end. So
$$\alpha_2 = u_2 = \frac{3}{8} = \frac{1}{2^4}\binom{4}{2}.$$
We will prove that αn = un.
Proof. We count the paths from the origin to a point T above the axis that do not have any equalization (except at the start). Suppose the first step is to $a = (1, 1)$. Now we must count all paths from $a$ to $T$, minus those that go from $a$ to $T$ but at some point make an equalization, such as the path shown in black in the figure (omitted), whose labelled points are $a = (1,1)$, $a' = (1,-1)$, $T$, $T'$ and the endpoints $(0,0)$, $(2n,0)$.

But notice that every such path that has an equalization is in 1–1 correspondence with a path from $a' = (1, -1)$ to $T$. This is the path obtained by reflecting around the axis the part of the path that takes place before the first equalization.

The number of paths from $a'$ to $T = (2n, k)$ equals the number from $a$ to $T' = (2n, k+2)$. So the number of paths from $a$ to some $T > 0$ that have no equalization is
$$\sum_{k=2,4,\dots,2n}\Big(\#[a \to (2n,k)] - \#[a \to (2n,k+2)]\Big) = \#[a \to (2n,2)] = \#[a \to (2n,0)].$$
We want twice this number (since the first step might have been to $a'$), which gives $\#[(0,0) \to (2n,0)] = \binom{2n}{n}$. So as claimed
$$\alpha_n = u_n = \frac{1}{2^{2n}}\binom{2n}{n}.$$
Arcsine law. The probability that the last equalization occurs at $2k$ is therefore $u_k\alpha_{n-k}$ (since we must equalize at $2k$ and then not equalize at any of the $2n-2k$ subsequent steps). But we have just proved that $u_k\alpha_{n-k} = u_k u_{n-k}$. Notice that therefore the last equalization occurs at $2n-2k$ with the same probability.

We will see in Lecture 3 that $u_k$ is approximately $1/\sqrt{\pi k}$, so the last equalization is at time $2k$ with probability proportional to $1/\sqrt{k(n-k)}$.

The probability that the last equalization occurs before the $2k$th toss is approximately
$$\int_0^{2k/2n}\frac{1}{\pi}\frac{1}{\sqrt{x(1-x)}}\,dx = (2/\pi)\sin^{-1}\sqrt{k/n}.$$
For instance, $(2/\pi)\sin^{-1}\sqrt{0.15} = 0.2532$. So the probability that the last equalization occurs during either the first or last 15% of the $2n$ coin tosses is about $0.5064$ (> 1/2).
This is a nontrivial result that would be hard to have guessed!
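To see this numerically, one can simulate many walks and record where the last equalization falls; the sketch below (Python, not from the notes) uses walks of 2n = 100 steps.

```python
import random

def last_equalization(n_steps):
    """Time of the last visit to 0 by a walk of +/-1 steps started at 0."""
    pos, last = 0, 0
    for t in range(1, n_steps + 1):
        pos += random.choice((-1, 1))
        if pos == 0:
            last = t
    return last

trials, steps = 20_000, 100
extreme = sum(1 for _ in range(trials)
              if (t := last_equalization(steps)) <= 0.15 * steps or t >= 0.85 * steps)
print(extreme / trials)   # roughly 0.5, in line with the arcsine law
```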
2 Combinatorial analysis
Combinatorial analysis. Fundamental rules. Sampling with and without replacement,
with and without regard to ordering. Permutations and combinations. Birthday prob-
lem. Binomial coefficient.
2.1 Counting
Example 2.1. A menu with 6 starters, 7 mains and 6 desserts has 6 × 7 × 6 = 252
meal choices.
Fundamental rule of counting: Suppose r multiple choices are to be made in se-
quence: there are m1 possibilities for the first choice; then m2 possibilities for the
second choice; then m3 possibilities for the third choice, and so on until after making
the first r − 1 choices there are mr possibilities for the rth choice. Then the total
number of different possibilities for the set of choices is
m1 × m2 × · · · × mr.
Example 2.2. How many ways can the integers 1, 2, . . . , n be ordered?
The first integer can be chosen in n ways, then the second in n − 1 ways, etc., giving
n! = n(n − 1) · · · 1 ways (‘factorial n’).
2.2 Sampling with or without replacement
Many standard calculations arising in classical probability involve counting numbers of
equally likely outcomes. This can be tricky!
Often such counts can be viewed as counting the number of lists of length n that can
be constructed from a set of x items X = {1, . . . , x}.
Let N = {1, . . . , n} be the set of list positions. Consider the function f : N → X. This
gives the ordered list (f(1), f(2), . . . , f(n)). We might construct this list by drawing a
sample of size n from the elements of X. We start by drawing an item for list position
1, then an item for list position 2, etc.
[Figure omitted: a list is built by assigning to each position $1, \dots, n$ an item from $\{1, \dots, x\}$.]
1. Sampling with replacement. After choosing an item we put it back so it can
be chosen again. E.g. list (2, 4, 2, . . . , x, 2) is possible, as shown above.
2. Sampling without replacement. After choosing an item we set it aside. We
end up with an ordered list of n distinct items (requires x ≥ n).
3. Sampling with replacement, but requiring each item is chosen at least once (re-
quires n ≥ x).
These three cases correspond to ‘any f’, ‘injective f’ and ‘surjective f’, respectively.
Example 2.3. Suppose N = {a, b, c}, X = {p, q, r, s}. How many different injective
functions are there mapping N to X?
Solution: Choosing the values of f(a), f(b), f(c) in sequence without replacement, we
find the number of different injective f : N → X is 4 × 3 × 2 = 24.
Example 2.4. I have n keys in my pocket. I select one at random and try it in a lock.
If it fails I replace it and try again (sampling with replacement).
$$P(\text{success at the } r\text{th trial}) = \frac{(n-1)^{r-1}\times 1}{n^r}.$$
If keys are not replaced (sampling without replacement),
$$P(\text{success at the } r\text{th trial}) = \frac{(n-1)!}{n!} = \frac{1}{n},$$
or alternatively
$$= \frac{n-1}{n}\times\frac{n-2}{n-1}\times\cdots\times\frac{n-r+1}{n-r+2}\times\frac{1}{n-r+1} = \frac{1}{n}.$$
Example 2.5 [Birthday problem]. How many people are needed in a room for it to be
a favourable bet (probability of success greater than 1/2) that two people in the room
will have the same birthday?
Since there are 365 possible birthdays, it is tempting to guess that we would need about
1/2 this number, or 183. In fact, the number required for a favourable bet is only 23.
To see this, we find the probability that, in a room with r people, there is no duplication
of birthdays; the bet is favourable if this probability is less than one half.
Let $f(r)$ be the probability that amongst $r$ people there is a match. Then
$$P(\text{no match}) = 1 - f(r) = \frac{364}{365}\cdot\frac{363}{365}\cdot\frac{362}{365}\cdots\frac{366-r}{365}.$$
So f(22) = 0.475695 and f(23) = 0.507297. Also f(47) = 0.955.
Notice that with 23 people there are $\binom{23}{2} = 253$ pairs and each pair has a probability 1/365 of sharing a birthday.
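A direct computation of f(r) (Python, illustrative only; the helper name is ad hoc) reproduces these figures.

```python
def f(r):
    """Probability that among r people at least two share a birthday."""
    p_no_match = 1.0
    for i in range(r):
        p_no_match *= (365 - i) / 365
    return 1 - p_no_match

print(f(22), f(23), f(47))   # about 0.476, 0.507, 0.955
```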
Remarks. A slightly different question is this: interrogating an audience one by one,
how long will it take on average until we find a first birthday match? Answer: 23.62
(with standard deviation of 12.91).
The probability of finding a triple with the same birthday exceeds 0.5 for n ≥ 88. (How
do you think I computed that answer?)
2.3 Sampling with or without regard to ordering
When counting numbers of possible f : N → X, we might decide that the labels that
are given to elements of N and X do or do not matter.
So having constructed the set of possible lists (f(1), . . . , f(n)) we might
(i) leave lists alone (order matters);
(ii) sort them ascending: so (2,5,4) and (4,2,5) both become (2,4,5).
(labels of the positions in the list do not matter.)
(iii) renumber each item in the list by the number of the draw on which it was first
seen: so (2,5,2) and (5,4,5) both become (1,2,1).
(labels of the items do not matter.)
(iv) do both (ii) then (iii), so (2,5,2) and (8,5,5) both become (1,1,2).
(no labels matter.)
For example, in case (ii) we are saying that (g(1), . . . , g(n)) is the same as
(f(1), . . . , f(n)) if there is a permutation π of 1, . . . , n such that g(i) = f(π(i)).
2.4 Four cases of enumerative combinatorics
Combinations of 1,2,3 (top of page 7) and (i)–(iv) above produce a ‘twelvefold way
of enumerative combinatorics’, but involve the partition function and Bell numbers.
Let’s consider just the four possibilities obtained from combinations of 1,2 and (i),(ii).
1(i) Sampling with replacement and with ordering. Each location in the list can be filled in $x$ ways, so this can be done in $x^n$ ways.

2(i) Sampling without replacement and with ordering. Applying the fundamental rule, this can be done in $x_{(n)} = x(x-1)\cdots(x-n+1)$ ways. Another notation for this falling sequential product is $x^{\underline{n}}$ (read as ‘x to the n falling’). In the special case $n = x$ this is $x!$ (the number of permutations of $1, 2, \dots, x$).

2(ii) Sampling without replacement and without ordering. Now we care only which items are selected. (The positions in the list are indistinguishable.) This can be done in $x_{(n)}/n! = \binom{x}{n}$ ways, i.e. the answer above divided by $n!$.
This is of course the binomial coefficient, equal to the number of distinguishable sets of $n$ items that can be chosen from a set of $x$ items.

Recall that $\binom{x}{n}$ is the coefficient of $t^n$ in $(1+t)^x$:
$$\underbrace{(1+t)(1+t)\cdots(1+t)}_{x \text{ times}} = \sum_{n=0}^{x}\binom{x}{n}t^n.$$
1(ii) Sampling with replacement and without ordering. Now we care only how many times each item is selected. (The list positions are indistinguishable; we care only how many items of each type are selected.) The number of distinct $f$ is the number of nonnegative integer solutions to
$$n_1 + n_2 + \cdots + n_x = n.$$
Consider $n = 7$ and $x = 5$. Think of marking off 5 bins with 4 dividers ‘$\mid$’ and then placing 7 stars. One outcome is
$$\underbrace{\ast\ast\ast}_{n_1}\;\Big|\;\underbrace{\ast}_{n_2}\;\Big|\;\underbrace{\vphantom{\ast}}_{n_3}\;\Big|\;\underbrace{\ast\ast\ast}_{n_4}\;\Big|\;\underbrace{\vphantom{\ast}}_{n_5}$$
which corresponds to $n_1 = 3$, $n_2 = 1$, $n_3 = 0$, $n_4 = 3$, $n_5 = 0$.
In general, there are $x + n - 1$ symbols and we are choosing $n$ of them to be $\ast$. So the number of possibilities is $\binom{x+n-1}{n}$.
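The four counting formulas of this section can be checked by brute force for small n and x; a sketch (Python, illustrative only):

```python
from itertools import product
from math import comb, perm

n, x = 3, 4
lists = list(product(range(1, x + 1), repeat=n))   # all functions f : N -> X

case_1i  = len(lists)                                               # ordered, with replacement
case_2i  = sum(1 for L in lists if len(set(L)) == n)                # ordered, without replacement
case_2ii = len({frozenset(L) for L in lists if len(set(L)) == n})   # unordered, without replacement
case_1ii = len({tuple(sorted(L)) for L in lists})                   # unordered, with replacement

print(case_1i,  x**n)                 # 64 64
print(case_2i,  perm(x, n))           # 24 24
print(case_2ii, comb(x, n))           # 4 4
print(case_1ii, comb(x + n - 1, n))   # 20 20
```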
Above we have attempted a systematic description of different types of counting prob-
lem. However it is often best to just think from scratch, using the fundamental rule.
Example 2.6. How many ways can k different flags be flown on m flag poles in a row
if ≥ 2 flags may be on the same pole, and order from the top to bottom is important?
There are m choices for the first flag, then m+1 for the second. Each flag added creates
one more distinct place that the next flag might be added. So
$$m(m+1)\cdots(m+k-1) = \frac{(m+k-1)!}{(m-1)!}.$$
Remark. Suppose we have a diamond, an emerald, and a ruby. How many ways can
we store these gems in identical small velvet bags? This is a case of 1(iii). Think gems ≡
list positions; bags ≡ items. Take each gem, in sequence, and choose a bag to receive it.
There are 5 ways: (1, 1, 1), (1, 1, 2), (1, 2, 1), (1, 2, 2), (1, 2, 3). The 1,2,3 are the first,
second and third bag to receive a gem. Here we have B(3) = 5 (the Bell numbers).
3 Stirling’s formula
Multinomial coefficient. Stirling’s formula *and proof*. Examples of application. *Im-
proved Stirling’s formula*.
3.1 Multinomial coefficient
Suppose we fill successive locations in a list of length n by sampling with replacement
from {1, . . . , x}. How many ways can this be done so that the numbers of times that each of 1, . . . , x appears in the list is $n_1, \dots, n_x$, respectively, where $\sum_i n_i = n$?
To compute this: we choose the $n_1$ places in which ‘1’ appears in $\binom{n}{n_1}$ ways, then choose the $n_2$ places in which ‘2’ appears in $\binom{n-n_1}{n_2}$ ways, etc.
The answer is the multinomial coefficient
$$\binom{n}{n_1, \dots, n_x} := \binom{n}{n_1}\binom{n-n_1}{n_2}\binom{n-n_1-n_2}{n_3}\cdots\binom{n-n_1-\cdots-n_{x-1}}{n_x} = \frac{n!}{n_1!\,n_2!\cdots n_x!},$$
with the convention $0! = 1$.
Fact:
$$(y_1 + \cdots + y_x)^n = \sum \binom{n}{n_1, \dots, n_x}\, y_1^{n_1}\cdots y_x^{n_x},$$
where the sum is over all $n_1, \dots, n_x$ such that $n_1 + \cdots + n_x = n$. [Remark. The number of terms in this sum is $\binom{n+x-1}{x-1}$, as found in §2.4, 1(ii).]
Example 3.1. How many ways can a pack of 52 cards be dealt into bridge hands of 13
cards for each of 4 (distinguishable) players?
Answer:
$$\binom{52}{13, 13, 13, 13} = \binom{52}{13}\binom{39}{13}\binom{26}{13} = \frac{52!}{(13!)^4}.$$
This is $53644737765488792839237440000 = 5.36447 \times 10^{28}$. This is $\frac{(4n)!}{(n!)^4}$ evaluated at $n = 13$. How might we estimate it for greater $n$?
Answer:
$$\frac{(4n)!}{(n!)^4} \approx \frac{2^{8n}}{\sqrt{2}\,(\pi n)^{3/2}} \qquad (= 5.49496 \times 10^{28} \text{ when } n = 13).$$
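The exact number and the Stirling-based estimate are easy to compare directly (Python, illustrative only; the function names are ad hoc):

```python
from math import factorial, pi, sqrt

def deals_exact(n):
    """(4n)! / (n!)^4 : ways to deal 4n cards into four hands of n."""
    return factorial(4 * n) // factorial(n) ** 4

def deals_approx(n):
    """The estimate 2^(8n) / (sqrt(2) * (pi * n)^(3/2))."""
    return 2 ** (8 * n) / (sqrt(2) * (pi * n) ** 1.5)

print(deals_exact(13))    # 53644737765488792839237440000
print(deals_approx(13))   # about 5.49e28
```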
Interesting facts:
(i) If we include situations that might occur part way through a bridge game then there are $2.05 \times 10^{33}$ possible ‘positions’.
(ii) The ‘Shannon number’ is the number of possible board positions in chess. It is roughly $10^{43}$ (according to Claude Shannon, 1950).
The age of the universe is thought to be about $4 \times 10^{17}$ seconds.
3.2 Stirling’s formula
Theorem 3.2 (Stirling’s formula). As $n \to \infty$,
$$\log\left(\frac{n!\,e^n}{n^{n+1/2}}\right) = \log\sqrt{2\pi} + O(1/n).$$
The most common statement of Stirling’s formula is given as a corollary.
Corollary 3.3. As $n \to \infty$, $n! \sim \sqrt{2\pi}\,n^{n+\frac12}e^{-n}$.
In this context, ∼ indicates that the ratio of the two sides tends to 1.
This is good even for small n. It is always a slight underestimate.
n n! Approximation Ratio
1 1 .922 1.084
2 2 1.919 1.042
3 6 5.836 1.028
4 24 23.506 1.021
5 120 118.019 1.016
6 720 710.078 1.013
7 5040 4980.396 1.011
8 40320 39902.395 1.010
9 362880 359536.873 1.009
10 3628800 3598696.619 1.008
Notice that from the Taylor expansion of $e^n = 1 + n + \cdots + n^n/n! + \cdots$ we have $1 \le n^n/n! \le e^n$.
We first prove the weak form of Stirling’s formula, namely that log(n!) ∼ n log n.
Proof. (examinable) $\log n! = \sum_{k=1}^{n}\log k$. Now
$$\int_1^n \log x\,dx \le \sum_{k=1}^{n}\log k \le \int_1^{n+1}\log x\,dx,$$
and $\int_1^z \log x\,dx = z\log z - z + 1$, and so
$$n\log n - n + 1 \le \log n! \le (n+1)\log(n+1) - n.$$
Divide by $n\log n$ and let $n \to \infty$ to sandwich $\frac{\log n!}{n\log n}$ between terms that tend to 1. Therefore $\log n! \sim n\log n$.
Now we prove the strong form.
Proof. (not examinable) Some steps in this proof are like ‘pulling-a-rabbit-out-of-a-hat’.
Let
$$d_n = \log\left(\frac{n!\,e^n}{n^{n+1/2}}\right) = \log n! - \left(n + \tfrac12\right)\log n + n.$$
Then with $t = \frac{1}{2n+1}$,
$$d_n - d_{n+1} = \left(n + \tfrac12\right)\log\left(\frac{n+1}{n}\right) - 1 = \frac{1}{2t}\log\left(\frac{1+t}{1-t}\right) - 1.$$
Now for $0 < t < 1$, if we subtract the second of the following expansions from the first:
$$\log(1+t) - t = -\tfrac12 t^2 + \tfrac13 t^3 - \tfrac14 t^4 + \cdots$$
$$\log(1-t) + t = -\tfrac12 t^2 - \tfrac13 t^3 - \tfrac14 t^4 - \cdots$$
and divide by $2t$, we get
$$d_n - d_{n+1} = \tfrac13 t^2 + \tfrac15 t^4 + \tfrac17 t^6 + \cdots \le \tfrac13 t^2 + \tfrac13 t^4 + \tfrac13 t^6 + \cdots = \frac13\,\frac{t^2}{1-t^2} = \frac13\,\frac{1}{(2n+1)^2 - 1} = \frac{1}{12}\left(\frac{1}{n} - \frac{1}{n+1}\right).$$
This shows that $d_n$ is decreasing and $d_1 - d_n < \frac{1}{12}\big(1 - \frac1n\big)$. So we may conclude $d_n > d_1 - \frac{1}{12} = \frac{11}{12}$. By convergence of monotone bounded sequences, $d_n$ tends to a limit, say $d_n \to A$.

For $m > n$, $d_n - d_m < -\frac{2}{15}\big(\frac{1}{2n+1}\big)^4 + \frac{1}{12}\big(\frac1n - \frac1m\big)$, so we also have $A < d_n < A + \frac{1}{12n}$.

It only remains to find $A$.

Defining $I_n$, and then using integration by parts, we have
$$I_n := \int_0^{\pi/2}\sin^n\theta\,d\theta = \Big[-\cos\theta\sin^{n-1}\theta\Big]_0^{\pi/2} + \int_0^{\pi/2}(n-1)\cos^2\theta\,\sin^{n-2}\theta\,d\theta = (n-1)(I_{n-2} - I_n).$$
So $I_n = \frac{n-1}{n}I_{n-2}$, with $I_0 = \pi/2$ and $I_1 = 1$. Therefore
$$I_{2n} = \frac12\cdot\frac34\cdots\frac{2n-1}{2n}\cdot\frac{\pi}{2} = \frac{(2n)!}{(2^n n!)^2}\,\frac{\pi}{2},
\qquad
I_{2n+1} = \frac23\cdot\frac45\cdots\frac{2n}{2n+1} = \frac{(2^n n!)^2}{(2n+1)!}.$$
For $\theta \in (0, \pi/2)$, $\sin^n\theta$ is decreasing in $n$, so $I_n$ is also decreasing in $n$. Thus
$$1 \le \frac{I_{2n}}{I_{2n+1}} \le \frac{I_{2n-1}}{I_{2n+1}} = 1 + \frac{1}{2n} \to 1.$$
By using $n! \sim n^{n+1/2}e^{-n+A}$ to evaluate the term in square brackets below,
$$\frac{I_{2n}}{I_{2n+1}} = \pi(2n+1)\left[\frac{((2n)!)^2}{2^{4n+1}(n!)^4}\right] \sim \pi(2n+1)\,\frac{1}{n e^{2A}} \to \frac{2\pi}{e^{2A}},$$
which is to equal 1. Therefore $A = \log\sqrt{2\pi}$ as required.
Notice we have actually shown that
$$n! = \left(\sqrt{2\pi}\,n^{n+\frac12}e^{-n}\right)e^{\epsilon(n)} = S(n)\,e^{\epsilon(n)}$$
where $0 < \epsilon(n) < \frac{1}{12n}$. For example, $10!/S(10) = 1.008365$ and $e^{1/120} = 1.008368$.
Example 3.4. Suppose we toss a fair coin $2n$ times. The probability of an equal number of heads and tails is
$$\frac{\binom{2n}{n}}{2^{2n}} = \frac{(2n)!}{[2^n(n!)]^2} \approx \frac{\sqrt{2\pi}\,(2n)^{2n+\frac12}e^{-2n}}{\left[2^n\sqrt{2\pi}\,n^{n+\frac12}e^{-n}\right]^2} = \frac{1}{\sqrt{\pi n}}.$$
For $n = 13$ this is 0.156478. The exact answer is 0.154981.
Compare this to the probability of extracting 26 cards from a shuffled deck and obtaining 13 red and 13 black. That is
$$\frac{\binom{26}{13}\binom{26}{13}}{\binom{52}{26}} = 0.2181.$$
Do you understand why this probability is greater?
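Both probabilities are easily reproduced (Python, illustrative only):

```python
from math import comb, pi, sqrt

n = 13
equal_heads = comb(2 * n, n) / 2 ** (2 * n)   # exact: about 0.15498
print(equal_heads, 1 / sqrt(pi * n))          # vs the approximation 0.15648

half_red = comb(26, 13) ** 2 / comb(52, 26)   # 13 red and 13 black among 26 cards
print(half_red)                               # about 0.2181
```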
3.3 Improved Stirling’s formula
In fact (see Robbins, A Remark on Stirling’s Formula, 1955), we have
$$\sqrt{2\pi}\,n^{n+\frac12}e^{-n+\frac{1}{12n+1}} < n! < \sqrt{2\pi}\,n^{n+\frac12}e^{-n+\frac{1}{12n}}.$$
We have already proved the right hand part, $d_n < A + \frac{1}{12n}$. The left hand part follows from
$$d_n - d_{n+1} > \tfrac13 t^2 + \tfrac{1}{3^2}t^4 + \tfrac{1}{3^3}t^6 + \cdots = \frac{t^2}{3 - t^2} \ge \frac{1}{12}\left(\frac{1}{n + \frac{1}{12}} - \frac{1}{n + 1 + \frac{1}{12}}\right),$$
where one can check the final inequality using Mathematica. It implies $d_n - A > \frac{1}{12n+1}$.
For $n = 10$, $1.008300 < n!/S(n) < 1.008368$.
4 Axiomatic approach
Probability axioms. Properties of P. Boole’s inequality. Probabilistic method in
combinatorics. Inclusion-exclusion formula. Coincidence of derangements.
4.1 Axioms of probability
A probability space is a triple (Ω, F, P), in which Ω is the sample space, F is a
collection of subsets of Ω, and P is a probability measure P : F → [0, 1].
To obtain a consistent theory we must place requirements on F:
F1: ∅ ∈ F and Ω ∈ F.
F2: $A \in F \implies A^c \in F$.
F3: $A_1, A_2, \ldots \in F \implies \bigcup_{i=1}^{\infty} A_i \in F$.
Each A ∈ F is a possible event. If Ω is finite then we can take F to be the set of all
subsets of Ω. But sometimes we need to be more careful in choosing F, such as when
Ω is the set of all real numbers.
We also place requirements on P: it is to be a real-valued function defined on F which
satisfies three axioms (known as the Kolmogorov axioms):
I. 0 ≤ P(A) ≤ 1, for all A ∈ F.
II. P(Ω) = 1.
III. For any countable set of events $A_1, A_2, \ldots$ which are disjoint (i.e. $A_i \cap A_j = \emptyset$, $i \neq j$), we have
$$P\Big(\bigcup_i A_i\Big) = \sum_i P(A_i).$$
P(A) is called the probability of the event A.
Note. The event ‘two heads’ is typically written as {two heads} or [two heads].
One sees written P{two heads}, P({two heads}), P(two heads), and P for P.
Example 4.1. Consider an arbitrary countable set $\Omega = \{\omega_1, \omega_2, \dots\}$ and an arbitrary collection $(p_1, p_2, \dots)$ of nonnegative numbers with sum $p_1 + p_2 + \cdots = 1$. Put
$$P(A) = \sum_{i:\,\omega_i \in A} p_i.$$
Then P satisfies the axioms.
The numbers $(p_1, p_2, \dots)$ are called a probability distribution.
Remark. As mentioned above, if Ω is not finite then it may not be possible to let
F be all subsets of Ω. For example, it can be shown that it is impossible to define
a P for all possible subsets of the interval [0, 1] that will satisfy the axioms. Instead
we define P for special subsets, namely the intervals [a, b], with the natural choice of
P([a, b]) = b − a. We then use F1, F2, F3 to construct F as the collection of sets that
can be formed from countable unions and intersections of such intervals, and deduce
their probabilities from the axioms.
Theorem 4.2 (Properties of P). Axioms I–III imply the following further properties:
(i) $P(\emptyset) = 0$. (probability of the empty set)
(ii) $P(A^c) = 1 - P(A)$.
(iii) If $A \subseteq B$ then $P(A) \le P(B)$. (monotonicity)
(iv) $P(A \cup B) = P(A) + P(B) - P(A \cap B)$.
(v) If $A_1 \subseteq A_2 \subseteq A_3 \subseteq \cdots$ then $P\big(\bigcup_{i=1}^{\infty} A_i\big) = \lim_{n\to\infty} P(A_n)$.
Property (v) says that P(·) is a continuous function.
Proof. From II and III: $P(\Omega) = P(A \cup A^c) = P(A) + P(A^c) = 1$.
This gives (ii). Setting $A = \Omega$ gives (i).
For (iii) let $B = A \cup (B \cap A^c)$ so $P(B) = P(A) + P(B \cap A^c) \ge P(A)$.
For (iv) use $P(A \cup B) = P(A) + P(B \cap A^c)$ and $P(B) = P(A \cap B) + P(B \cap A^c)$.
Proof of (v) is deferred to §7.1.
Remark. As a consequence of Theorem 4.2 (iv) we say that P is a subadditive set function, as it is one for which
$$P(A \cup B) \le P(A) + P(B), \quad\text{for all } A, B.$$
It is also a submodular function, since
$$P(A \cup B) + P(A \cap B) \le P(A) + P(B), \quad\text{for all } A, B.$$
It is also a supermodular function, since the reverse inequality is true.
4.2 Boole’s inequality
Theorem 4.3 (Boole’s inequality). For any $A_1, A_2, \ldots$,
$$P\Big(\bigcup_{i=1}^{\infty} A_i\Big) \le \sum_{i=1}^{\infty} P(A_i) \qquad \left[\text{a special case is } P\Big(\bigcup_{i=1}^{n} A_i\Big) \le \sum_{i=1}^{n} P(A_i)\right].$$
Proof. Let $B_1 = A_1$ and $B_i = A_i \setminus \bigcup_{k=1}^{i-1} A_k$.
Then $B_1, B_2, \ldots$ are disjoint and $\bigcup_k A_k = \bigcup_k B_k$. As $B_i \subseteq A_i$,
$$P\Big(\bigcup_i A_i\Big) = P\Big(\bigcup_i B_i\Big) = \sum_i P(B_i) \le \sum_i P(A_i).$$
Example 4.4. Consider a sequence of tosses of biased coins. Let Ak be the event that
the kth toss is a head. Suppose P(Ak) = pk. The probability that an infinite number
of heads occurs is
$$P\Big(\bigcap_{i=1}^{\infty}\bigcup_{k=i}^{\infty} A_k\Big) \le P\Big(\bigcup_{k=i}^{\infty} A_k\Big) \le p_i + p_{i+1} + \cdots \quad\text{(by Boole's inequality)}.$$
Hence if $\sum_{i=1}^{\infty} p_i < \infty$ the right hand side can be made arbitrarily close to 0.
This proves that the probability of seeing an infinite number of heads is 0.
The reverse is also true: if $\sum_{i=1}^{\infty} p_i = \infty$ then P(number of heads is infinite) = 1.
Example 4.5. The following result is due to Erdős (1947) and is an example of the
so-called probabilistic method in combinatorics.
Consider the complete graph on n vertices. Suppose that for an integer k,
$$\binom{n}{k}\,2^{1-\binom{k}{2}} < 1.$$
Then it is possible to colour the edges red and blue so that no subgraph of k vertices has edges of just one colour. E.g. n = 200, k = 12.
Proof. Colour the edges at random, each as red or blue. In any subgraph of k vertices the probability that every edge is red is $2^{-\binom{k}{2}}$. There are $\binom{n}{k}$ subgraphs of k vertices. Let $A_i$ be the event that the ith such subgraph has monochrome edges. Then
$$P\Big(\bigcup_i A_i\Big) \le \sum_i P(A_i) = \binom{n}{k}\cdot 2\cdot 2^{-\binom{k}{2}} < 1.$$
So there must be at least one way of colouring the edges so that no subgraph of k vertices has only monochrome edges.
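The hypothesis is easy to check for the quoted values n = 200, k = 12 (Python, illustrative only):

```python
from math import comb

n, k = 200, 12
bound = comb(n, k) * 2 ** (1 - comb(k, 2))   # C(n,k) * 2^(1 - C(k,2))
print(bound, bound < 1)                      # about 0.17, True
```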
Note. If n, k satisfy the above inequality then n + 1 is a lower bound on the answer to
the ‘Party problem’, i.e. what is the minimum number of guests needed to guarantee
there will be either k who all know one another, or k who are all strangers to one
another? The answer is the Ramsey number, R(k, k). E.g. R(3, 3) = 6, R(4, 4) = 18.
4.3 Inclusion-exclusion formula
Theorem 4.6 (Inclusion-exclusion). For any events $A_1, \ldots, A_n$,
$$P\Big(\bigcup_{i=1}^{n} A_i\Big) = \sum_{i=1}^{n} P(A_i) - \sum_{i_1<i_2} P(A_{i_1}\cap A_{i_2}) + \sum_{i_1<i_2<i_3} P(A_{i_1}\cap A_{i_2}\cap A_{i_3}) - \cdots + (-1)^{n-1} P(A_1 \cap \cdots \cap A_n). \tag{4.1}$$
Proof. The proof is by induction. It is clearly true for n = 2. Now use
$$P(A_1 \cup \cdots \cup A_n) = P(A_1) + P(A_2 \cup \cdots \cup A_n) - P\Big(\bigcup_{i=2}^{n}(A_1 \cap A_i)\Big)$$
and then apply the inductive hypothesis for n − 1.
Example 4.7 [Probability of derangement]. Two packs of cards are shuffled and placed
on the table. One by one, two cards are simultaneously turned over from the top of the
packs. What is the probability that at some point the two revealed cards are identical?
This is a question about random permutations. A permutation of 1, . . . , n is called a
derangement if no integer appears in its natural position.
Suppose one of the n! permutations is picked at random. Let Ak be the event that k is
in its natural position. By the inclusion-exclusion formula
$$P\Big(\bigcup_k A_k\Big) = \sum_k P(A_k) - \sum_{k_1<k_2} P(A_{k_1}\cap A_{k_2}) + \cdots + (-1)^{n-1} P(A_1 \cap \cdots \cap A_n)$$
$$= n\cdot\frac1n - \binom{n}{2}\frac1n\cdot\frac{1}{n-1} + \binom{n}{3}\frac1n\cdot\frac{1}{n-1}\cdot\frac{1}{n-2} - \cdots + (-1)^{n-1}\frac{1}{n!}$$
$$= 1 - \frac{1}{2!} + \frac{1}{3!} - \cdots + (-1)^{n-1}\frac{1}{n!} \approx 1 - e^{-1}.$$
So the probability of at least one match is about 0.632. The probability that a randomly chosen permutation is a derangement is $P\big(\bigcap_k A_k^c\big) \approx e^{-1} = 0.368$.
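The convergence to $e^{-1}$ is already visible for small n; a brute-force check (Python, illustrative only):

```python
from itertools import permutations
from math import e

def derangement_probability(n):
    """Exact fraction of permutations of 0..n-1 with no fixed point."""
    perms = list(permutations(range(n)))
    count = sum(1 for p in perms if all(p[i] != i for i in range(n)))
    return count / len(perms)

for n in (4, 6, 8):
    print(n, derangement_probability(n), 1 / e)   # approaches 0.3679 quickly
```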
Example 4.8. The formula can also be used to answer a question like “what is the
number of surjections from a set of A of n elements to a set B of m ≤ n elements?”
Answer. Let Ai be the set of those functions that do not have i ∈ B in their image.
The number of functions that miss out any given set of k elements of B is (m − k)n
.
Hence the number of surjections is
Sn,m = mn
−
Notes on probability 2
Notes on probability 2
[
i
Ai
Notes on probability 2
Notes on probability 2
= mn
−
m−1
X
k=1
(−1)k−1

m
k

(m − k)n
.
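A cross-check of this formula against direct enumeration (Python, illustrative only; the helper names are ad hoc):

```python
from itertools import product
from math import comb

def surjections_formula(n, m):
    return m**n - sum((-1)**(k - 1) * comb(m, k) * (m - k)**n for k in range(1, m))

def surjections_brute(n, m):
    return sum(1 for f in product(range(m), repeat=n) if len(set(f)) == m)

print(surjections_formula(5, 3), surjections_brute(5, 3))   # 150 150
```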
5 Independence
Bonferroni inequalities. Independence. Important discrete distributions (Bernoulli,
binomial, Poisson, geometric and hypergeometic). Poisson approximation to binomial.
5.1 Bonferroni’s inequalities
Notation. We sometimes write P(A, B, C) to mean the same as P(A ∩ B ∩ C).
Bonferroni’s inequalities say that if we truncate the sum on the right hand side of
the inclusion-exclusion formula (4.1) so as to end with a positive (negative) term then
we have an over- (under-) estimate of $P\big(\bigcup_i A_i\big)$. For example,
$$P(A_1 \cup A_2 \cup A_3) \ge P(A_1) + P(A_2) + P(A_3) - P(A_1A_2) - P(A_2A_3) - P(A_1A_3).$$
Corollary 5.1 (Bonferroni’s inequalities). For any events $A_1, \ldots, A_n$, and for any $r$, $1 \le r \le n$,
$$P\Big(\bigcup_{i=1}^{n} A_i\Big) \;\le\; (\text{or} \ge)\; \sum_{i=1}^{n} P(A_i) - \sum_{i_1<i_2} P(A_{i_1}\cap A_{i_2}) + \sum_{i_1<i_2<i_3} P(A_{i_1}\cap A_{i_2}\cap A_{i_3}) - \cdots + (-1)^{r-1}\sum_{i_1<\cdots<i_r} P(A_{i_1}\cap\cdots\cap A_{i_r})$$
according as $r$ is odd or even.
Proof. Again, we use induction on $n$. For $n = 2$ we have $P(A_1 \cup A_2) \le P(A_1) + P(A_2)$.
You should be able to complete the proof using the fact that
$$P\Big(A_1 \cup \bigcup_{i=2}^{n} A_i\Big) = P(A_1) + P\Big(\bigcup_{i=2}^{n} A_i\Big) - P\Big(\bigcup_{i=2}^{n}(A_i \cap A_1)\Big).$$
Example 5.2. Consider Ω = {1, . . . , m}, with all m outcomes equally likely. Suppose
xk ∈ {1, . . . , m} and let Ak = {1, 2, . . . , xk}. So P(Ak) = xk/m and P(Aj ∩ Ak) =
min{xj, xk}/m. By applying Bonferroni inequalities we can prove results like
$$\max\{x_1, \dots, x_n\} \ge \sum_i x_i - \sum_{i<j}\min\{x_i, x_j\},$$
$$\max\{x_1, \dots, x_n\} \le \sum_i x_i - \sum_{i<j}\min\{x_i, x_j\} + \sum_{i<j<k}\min\{x_i, x_j, x_k\}.$$
5.2 Independence of two events
Two events A and B are said to be independent if
P(A ∩ B) = P(A)P(B).
Otherwise they are said to be dependent.
Notice that if A and B are independent then
$$P(A \cap B^c) = P(A) - P(A\cap B) = P(A) - P(A)P(B) = P(A)(1 - P(B)) = P(A)P(B^c),$$
so $A$ and $B^c$ are independent. Reapplying this result we see also that $A^c$ and $B^c$ are independent, and that $A^c$ and $B$ are independent.
Example 5.3. Two fair dice are thrown. Let A1 (A2) be the event that the first
(second) die shows an odd number. Let A3 be the event that the sum of the two
numbers is odd. Are A1 and A2 independent? Are A1 and A3 independent?
Solution. We first calculate the probabilities of various events.
Event           Probability
A1              18/36 = 1/2
A2              as above, 1/2
A3              (6 × 3)/36 = 1/2

Event           Probability
A1 ∩ A2         (3 × 3)/36 = 1/4
A1 ∩ A3         (3 × 3)/36 = 1/4
A1 ∩ A2 ∩ A3    0
Thus by a series of multiplications, we can see that A1 and A2 are independent, A1
and A3 are independent (also A2 and A3).
Independent experiments. The idea of 2 independent events models that of ‘2
independent experiments’. Consider Ω1 = {α1, . . . } and Ω2 = {β1, . . . } with associated
probability distributions {p1, . . . } and {q1, . . . }. Then, by ‘2 independent experiments’,
we mean the sample space Ω1 × Ω2 with probability distribution P ((αi, βj)) = piqj.
Now, suppose A ⊂ Ω1 and B ⊂ Ω2. The event A can be interpreted as an event in
Ω1 × Ω2, namely A × Ω2, and similarly for B. Then
$$P(A \cap B) = \sum_{\alpha_i\in A,\ \beta_j\in B} p_i q_j = \sum_{\alpha_i\in A} p_i \sum_{\beta_j\in B} q_j = P(A)\,P(B),$$
which is why they are called ‘independent’ experiments. The obvious generalisation
to n experiments can be made, but for an infinite sequence of experiments we mean a
sample space Ω1 × Ω2 × . . . satisfying the appropriate formula for all n ∈ N.
5.3 Independence of multiple events
Events A1, A2, . . . are said to be independent (or if we wish to emphasise ‘mutually
independent’) if for all $i_1 < i_2 < \cdots < i_r$,
$$P(A_{i_1} \cap A_{i_2} \cap \cdots \cap A_{i_r}) = P(A_{i_1})P(A_{i_2})\cdots P(A_{i_r}).$$
Events can be pairwise independent without being (mutually) independent.
In Example 5.3, P(A1) = P(A2) = P(A3) = 1/2 but P(A1 ∩ A2 ∩ A3) = 0. So A1, A2
and A3 are not independent. Here is another such example:
Example 5.4. Roll three dice. Let Aij be the event that dice i and j show the
same. $P(A_{12} \cap A_{13}) = 1/36 = P(A_{12})P(A_{13})$. But $P(A_{12} \cap A_{13} \cap A_{23}) = 1/36 \neq P(A_{12})P(A_{13})P(A_{23})$.
5.4 Important distributions
As in Example 4.1, consider a sample space Ω = {ω1, ω2, . . . } (which may be finite or
countable). For each ωi ∈ Ω let pi = P({ωi}). Then
$$p_i \ge 0 \text{ for all } i, \quad\text{and}\quad \sum_i p_i = 1. \tag{5.1}$$
A sequence $\{p_i\}_{i=1,2,\dots}$ satisfying (5.1) is called a probability distribution.
Example 5.5. Consider tossing a coin once, with possible outcomes Ω = {H, T}. For
p ∈ [0, 1], the Bernoulli distribution, denoted B(1, p), is
P(H) = p, P(T) = 1 − p.
Example 5.6. By tossing the above coin n times we obtain a sequence of Bernoulli
trials. The number of heads obtained is an outcome in the set Ω = {0, 1, 2, . . . , n}.
The probability of $HHT\cdots T$ is $ppq\cdots q$. There are $\binom{n}{k}$ ways in which $k$ heads occur, each with probability $p^k q^{n-k}$. So
$$P(k \text{ heads}) = p_k = \binom{n}{k}p^k(1-p)^{n-k}, \qquad k = 0, 1, \dots, n.$$
This is the binomial distribution, denoted B(n, p).
Example 5.7. Suppose n balls are tossed independently into k boxes such that the
probability that a given ball goes in box i is pi. The probability that there will be
n1, . . . , nk balls in boxes 1, . . . , k, respectively, is
$$\frac{n!}{n_1!\,n_2!\cdots n_k!}\,p_1^{n_1}\cdots p_k^{n_k}, \qquad n_1 + \cdots + n_k = n.$$
This is the multinomial distribution.
Example 5.8. Consider again an infinite sequence of Bernoulli trials, with
P(success) = 1 − P(failure) = p. The probability that the first success occurs after
exactly $k$ failures is $p_k = p(1-p)^k$, $k = 0, 1, \dots$. This is the geometric distribution with parameter $p$. Since $\sum_{k=0}^{\infty} p_k = 1$, the probability that every trial is a failure is zero.
[You may sometimes see ‘geometric distribution’ used to mean the distribution of the trial on which the first success occurs. Then $p_k = p(1-p)^{k-1}$, $k = 1, 2, \dots$.]
The geometric distribution has the memoryless property (but we leave discussion of
this until we meet the exponential distribution in §16.3).
Example 5.9. Consider an urn with n1 red balls and n2 black balls. Suppose n balls
are drawn without replacement, n ≤ n1 + n2. The probability of drawing exactly k red
balls is given by the hypergeometric distribution
$$p_k = \frac{\binom{n_1}{k}\binom{n_2}{n-k}}{\binom{n_1+n_2}{n}}, \qquad \max(0, n-n_2) \le k \le \min(n, n_1).$$
5.5 Poisson approximation to the binomial
Example 5.10. The Poisson distribution is often used to model the number of
occurrences of some event in a specified time, such as the number of insurance claims
suffered by an insurance company in a year. Denoted P(λ), the Poisson distribution
with parameter $\lambda > 0$ is
$$p_k = \frac{\lambda^k}{k!}e^{-\lambda}, \qquad k = 0, 1, \dots.$$
Theorem 5.11 (Poisson approximation to the binomial). Suppose that n → ∞ and
p → 0 such that np → λ. Then
$$\binom{n}{k}p^k(1-p)^{n-k} \to \frac{\lambda^k}{k!}e^{-\lambda}, \qquad k = 0, 1, \dots.$$
Proof. Recall that $(1 - \frac{a}{n})^n \to e^{-a}$ as $n \to \infty$. For convenience we write $p$ rather than $p(n)$. The probability that exactly $k$ events occur is
$$q_k = \binom{n}{k}p^k(1-p)^{n-k} = \frac{1}{k!}\,\frac{n(n-1)\cdots(n-k+1)}{n^k}\,(np)^k\Big(1 - \frac{np}{n}\Big)^{n-k} \to \frac{1}{k!}\lambda^k e^{-\lambda}, \qquad k = 0, 1, \dots,$$
since $p = p(n)$ is such that $np(n) \to \lambda$.
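The quality of the approximation for moderate n is easy to inspect (Python, illustrative only; here np = 3):

```python
from math import comb, exp, factorial

n, p = 100, 0.03
lam = n * p
for k in range(6):
    binomial = comb(n, k) * p**k * (1 - p)**(n - k)
    poisson = lam**k / factorial(k) * exp(-lam)
    print(k, round(binomial, 4), round(poisson, 4))   # the columns agree to about 2 decimals
```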
Remark. Each of the distributions above is called a discrete distribution because
it is a probability distribution over an Ω which is finite or countable.
6 Conditional probability
Conditional probability, Law of total probability, Bayes’s formula. Screening test.
Simpson’s paradox.
6.1 Conditional probability
Suppose $B$ is an event with $P(B) > 0$. For any event $A \subseteq \Omega$, the conditional probability of $A$ given $B$ is
$$P(A \mid B) = \frac{P(A \cap B)}{P(B)},$$
i.e. the probability that $A$ has occurred if we know that $B$ has occurred. Note also that
$$P(A \cap B) = P(A \mid B)P(B) = P(B \mid A)P(A).$$
If A and B are independent then
$$P(A \mid B) = \frac{P(A \cap B)}{P(B)} = \frac{P(A)P(B)}{P(B)} = P(A).$$
Also $P(A \mid B^c) = P(A)$. So knowing whether or not $B$ occurs does not affect the probability that $A$ occurs.
Example 6.1. Notice that $P(A \mid B) > P(A) \iff P(B \mid A) > P(B)$. We might say
that A and B are ‘attractive’. The reason some card games are fun is because ‘good
hands attract’. In games like poker and bridge, ‘good hands’ tend to be those that have
more than usual homogeneity, like ‘4 aces’ or ‘a flush’ (5 cards of the same suit). If I
have a good hand, then the remainder of the cards are more homogeneous, and so it is
more likely that other players will also have good hands.
For example, in poker the probability of a royal flush is $1.539 \times 10^{-6}$. The probability that the player on my right has a royal flush, given that I have looked at my cards and seen a royal flush, is $1.959 \times 10^{-6}$, i.e. 1.27 times greater than before I looked at my cards.
6.2 Properties of conditional probability
Theorem 6.2.
1. P (A ∩ B) = P (A | B) P (B),
2. P (A ∩ B ∩ C) = P (A | B ∩ C) P (B | C) P (C),
3. $P(A \mid B \cap C) = \dfrac{P(A \cap B \mid C)}{P(B \mid C)}$,
4. the function P(· | B) restricted to subsets of B is a probability function on B.
Proof. Results 1 to 3 are immediate from the definition of conditional probability.
For result 4, note that A ∩ B ⊂ B, so P (A ∩ B) ≤ P (B) and thus P (A | B) ≤ 1.
P (B | B) = 1 (obviously), so it just remains to verify Axiom III. For A1, A2, . . .
which are disjoint events and subsets of B, we have
$$P\Big(\bigcup_i A_i \,\Big|\, B\Big) = \frac{P\big(\bigcup_i A_i \cap B\big)}{P(B)} = \frac{P\big(\bigcup_i A_i\big)}{P(B)} = \frac{\sum_i P(A_i)}{P(B)} = \frac{\sum_i P(A_i \cap B)}{P(B)} = \sum_i P(A_i \mid B).$$
6.3 Law of total probability
A (finite or countable) collection $\{B_i\}_i$ of disjoint events such that $\bigcup_i B_i = \Omega$ is said to be a partition of the sample space $\Omega$. For any event $A$,
$$P(A) = \sum_i P(A \cap B_i) = \sum_i P(A \mid B_i)P(B_i),$$
where the second summation extends only over $B_i$ for which $P(B_i) > 0$.
Example 6.3 [Gambler’s ruin]. A fair coin is tossed repeatedly. At each toss the
gambler wins £1 for heads and loses £1 for tails. He continues playing until he reaches
£a or goes broke.
Let $p_x$ be the probability that he goes broke before reaching $a$, given that his current fortune is £x. Using the law of total probability,
$$p_x = \tfrac12 p_{x-1} + \tfrac12 p_{x+1},$$
with $p_0 = 1$, $p_a = 0$. The solution is $p_x = 1 - x/a$.
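A simulation check of the solution $p_x = 1 - x/a$ (Python, illustrative only; the helper name is ad hoc):

```python
import random

def ruin_probability(x, a, trials=20_000):
    """Estimate P(reach 0 before a) for a fair +/-1 walk started at x."""
    ruined = 0
    for _ in range(trials):
        pos = x
        while 0 < pos < a:
            pos += random.choice((-1, 1))
        ruined += (pos == 0)
    return ruined / trials

x, a = 3, 10
print(ruin_probability(x, a), 1 - x / a)   # both close to 0.7
```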
6.4 Bayes’ formula
Theorem 6.4 (Bayes’ formula). Suppose {Bi}i is a partition of the sample space and
$A$ is an event for which $P(A) > 0$. Then for any event $B_j$ in the partition for which $P(B_j) > 0$,
$$P(B_j \mid A) = \frac{P(A \mid B_j)P(B_j)}{\sum_i P(A \mid B_i)P(B_i)},$$
where the summation in the denominator extends only over $B_i$ for which $P(B_i) > 0$.
Example 6.5 [Screening test]. A screening test is 98% effective in detecting a certain
disease when a person has the disease. However, the test yields a false positive rate of
1% of the healthy persons tested. If 0.1% of the population have the disease, what is
the probability that a person who tests positive has the disease?
$$P(+ \mid D) = 0.98, \qquad P(+ \mid D^c) = 0.01, \qquad P(D) = 0.001.$$
$$P(D \mid +) = \frac{P(+ \mid D)P(D)}{P(+ \mid D)P(D) + P(+ \mid D^c)P(D^c)} = \frac{0.98 \times 0.001}{0.98 \times 0.001 + 0.01 \times 0.999} \approx 0.09.$$
Thus of persons who test positive only about 9% have the disease.
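The same calculation in a few lines (Python, illustrative only):

```python
p_pos_given_disease = 0.98     # sensitivity of the test
p_pos_given_healthy = 0.01     # false positive rate
p_disease = 0.001              # prevalence

p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)
print(p_pos_given_disease * p_disease / p_pos)   # about 0.089
```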
Example 6.6 [Paradox of the two children].
(i) I have two children one of whom is a boy.
(ii) I have two children one of whom is a boy born on a Thursday.
Find in each case the probability that both are boys.
In case (i),
$$P(BB \mid BB \cup BG) = \frac{P(BB)}{P(BB \cup BG)} = \frac{\frac14}{\frac14 + 2\cdot\frac12\cdot\frac12} = \frac13.$$
In case (ii), a child can be a girl (G), a boy born on a Thursday ($B^{*}$) or a boy not born on a Thursday (B).
$$P(B^{*}B^{*} \cup BB^{*} \mid B^{*}B^{*} \cup B^{*}B \cup B^{*}G)
= \frac{P(B^{*}B^{*} \cup BB^{*})}{P(B^{*}B^{*} \cup B^{*}B \cup B^{*}G)}
= \frac{\frac{1}{14}\cdot\frac{1}{14} + 2\cdot\frac{1}{14}\cdot\frac{6}{14}}{\frac{1}{14}\cdot\frac{1}{14} + 2\cdot\frac{1}{14}\cdot\frac{6}{14} + 2\cdot\frac{1}{14}\cdot\frac12}
= \frac{13}{27}.$$
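The 13/27 answer can be confirmed by enumerating the 196 equally likely (sex, weekday) combinations for two children (Python, illustrative only; Thursday is coded arbitrarily as day 3):

```python
from itertools import product

children = list(product("BG", range(7)))        # (sex, weekday), 14 equally likely types
families = list(product(children, repeat=2))    # 196 equally likely two-child families

thursday_boy = ("B", 3)
conditioned = [f for f in families if thursday_boy in f]
both_boys = [f for f in conditioned if all(c[0] == "B" for c in f)]
print(len(both_boys), len(conditioned))         # 13 27
```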
6.5 Simpson’s paradox
Example 6.7 [Simpson’s paradox]. One example of conditional probability that ap-
pears counter-intuitive when first encountered is the following situation. In practice,
it arises frequently. Consider one individual chosen at random from 50 men and 50
women applicants to a particular College. Figures on the 100 applicants are given in
the following table indicating whether they were educated at a state school or at an
independent school and whether they were admitted or rejected.
All applicants Admitted Rejected % Admitted
State 25 25 50%
Independent 28 22 56%
Note that overall the probability that an applicant is admitted is 0.53, but conditional
on the candidate being from an independent school the probability is 0.56 while condi-
tional on being from a state school the probability is lower at 0.50. Suppose that when
we break down the figures for men and women we have the following figures.
Men applicants Admitted Rejected % Admitted
State 15 22 41%
Independent 5 8 38%
Women applicants Admitted Rejected % Admitted
State 10 3 77%
Independent 23 14 62%
It may now be seen that for both men and women the conditional probability of being admitted is higher for state school applicants, at 0.41 and 0.77, respectively.
Simpson’s paradox is not really a paradox, since we can explain it. Here is a graphical representation (figure omitted): a scatterplot of the correlation between two continuous variables X and Y, grouped by a nominal variable Z, with different colours representing different levels of Z.
It can also be understood from the fact that
A
B

a
b
and
C
D

c
d
does not imply
A + C
B + D

a + c
b + d
.
E.g. {a, b, c, d, A, B, C, D} = {10, 10, 80, 10, 10, 5, 11, 1}.
Remark. It is appropriate for Cambridge students to know that this phenomenon
was actually first recorded by Udny Yule (a fellow of St John’s College) in 1903. It is
sometimes called the Yule-Simpson effect.
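The reversal can be reproduced directly from the counts in the tables above; the Python sketch below is my addition, using exactly those figures.

# (admitted, rejected) counts from the tables above
men   = {"State": (15, 22), "Independent": (5, 8)}
women = {"State": (10, 3),  "Independent": (23, 14)}

def rate(admitted, rejected):
    return admitted / (admitted + rejected)

for school in ("State", "Independent"):
    m, w = men[school], women[school]
    combined = (m[0] + w[0], m[1] + w[1])
    print(school, round(rate(*m), 2), round(rate(*w), 2), round(rate(*combined), 2))
# Men and women separately favour State school applicants,
# yet the combined rates favour Independent school applicants.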
7 Discrete random variables
Probability is a continuous set function. Definition of a discrete random variable. Dis-
tributions. Expectation. Expectation of binomial and Poisson. Function of a random
variable. Properties of expectation.
7.1 Continuity of P
A sequence of events A1, A2, . . . is increasing (or decreasing) if
A1 ⊂ A2 ⊂ · · · (or A1 ⊃ A2 ⊃ · · · ).
We can define a limiting event

lim_{n→∞} An = ∪_{n=1}^∞ An  (or = ∩_{n=1}^∞ An, respectively).

Theorem 7.1. If A1, A2, . . . is an increasing or decreasing sequence of events then

lim_{n→∞} P(An) = P( lim_{n→∞} An ).
Proof. Suppose A1, A2, . . . is an increasing sequence. Define, for n ≥ 1,

B1 = A1,  Bn = An \ ( ∪_{i=1}^{n−1} Ai ) = An ∩ A^c_{n−1}.

The events (Bn, n ≥ 1) are disjoint and

∪_{i=1}^∞ Ai = ∪_{i=1}^∞ Bi,   ∪_{i=1}^n Ai = ∪_{i=1}^n Bi.

So

P( ∪_{i=1}^∞ Ai ) = P( ∪_{i=1}^∞ Bi )
= Σ_{i=1}^∞ P(Bi)    (axiom III)
= lim_{n→∞} Σ_{i=1}^n P(Bi)
= lim_{n→∞} P( ∪_{i=1}^n Ai )    (axiom III)
= lim_{n→∞} P(An).

Thus

P( lim_{n→∞} An ) = lim_{n→∞} P(An).

If A1, A2, . . . is a decreasing sequence then A^c_1, A^c_2, . . . is an increasing sequence. Hence

P( lim_{n→∞} A^c_n ) = lim_{n→∞} P(A^c_n).

Now use lim_{n→∞} A^c_n = ( lim_{n→∞} An )^c. Thus probability is a continuous set function.
7.2 Discrete random variables
A random variable (r.v.) X, taking values in a set ΩX, is a function X : Ω → ΩX.
Typically X(ω) is a real number, but it might be a member of a set, like ΩX = {H, T}.
A r.v. is said to be a discrete random variable if ΩX is finite or countable.
For any T ⊆ ΩX we let P(X ∈ T) = P({ω : X(ω) ∈ T}).
In particular, for each x ∈ ΩX, P(X = x) = Σ_{ω: X(ω)=x} pω.
The distribution or probability mass function (p.m.f.) of the r.v. X is
(P (X = x) , x ∈ ΩX).
It is a probability distribution over ΩX. For example, if X is the number shown by the
roll of a fair die, its distribution is (P(X = i) = 1/6, i = 1, . . . , 6). We call this the
discrete uniform distribution over {1, . . . , 6}.
Rolling a die twice, so Ω = {(i, j), 1 ≤ i, j ≤ 6}, we might then define random variables
X and Y by X(i, j) = i + j and Y (i, j) = max{i, j}. Here ΩX = {i, 2 ≤ i ≤ 12}.
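As an illustration (not in the original notes), the short Python computation below tabulates the distributions of X(i, j) = i + j and Y(i, j) = max{i, j} over the 36 equally likely outcomes of two die rolls.

from collections import Counter
from fractions import Fraction

outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]
pX = Counter(i + j for i, j in outcomes)       # distribution of the sum
pY = Counter(max(i, j) for i, j in outcomes)   # distribution of the maximum

print({x: Fraction(c, 36) for x, c in sorted(pX.items())})
print({y: Fraction(c, 36) for y, c in sorted(pY.items())})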
Remark. It can be useful to put X as a subscript on p, as a reminder of the variable
whose distribution this is; we write pX(x) = P(X = x). Also, we use the notation
X ∼ B(n, p), for example, to indicate that X has the B(n, p) distribution.
Remark. The terminology ‘random variable’ is somewhat inaccurate, since a random
variable is neither random nor a variable. The word ‘random’ is appropriate because the
domain of X is Ω, and we have a probability measure on subsets of Ω. Thereby we can
compute P(X ∈ T) = P({ω : X(ω) ∈ T}) for any T such that {ω : X(ω) ∈ T} ∈ F.
7.3 Expectation
The expectation (or mean) of a real-valued random variable X exists, and is equal
to the number
E[X] = Σ_{ω∈Ω} pω X(ω),

provided that this sum is absolutely convergent.

In practice it is calculated by summing over x ∈ ΩX, as follows.

E[X] = Σ_{ω∈Ω} pω X(ω) = Σ_{x∈ΩX} Σ_{ω: X(ω)=x} pω X(ω) = Σ_{x∈ΩX} x Σ_{ω: X(ω)=x} pω = Σ_{x∈ΩX} x P(X = x).

Absolute convergence allows the sum to be taken in any order. But if

Σ_{x∈ΩX, x≥0} x P(X = x) = ∞   and   Σ_{x∈ΩX, x<0} x P(X = x) = −∞

then E[X] is undefined. When defined, E[X] is always a constant.

If X is a positive random variable and if Σ_{ω∈Ω} pω X(ω) = ∞ we write E[X] = ∞.
Example 7.2. We calculate the expectation of some standard distributions.
Poisson. If pX(r) = P(X = r) = (λ^r / r!) e^{−λ}, r = 0, 1, . . . , then E[X] = λ.

E[X] = Σ_{r=0}^∞ r (λ^r / r!) e^{−λ} = λ e^{−λ} Σ_{r=1}^∞ λ^{r−1} / (r − 1)! = λ e^{−λ} e^λ = λ.

Binomial. If pX(r) = P(X = r) = (n choose r) p^r (1 − p)^{n−r}, r = 0, . . . , n, then E[X] = np.

E[X] = Σ_{r=0}^n r (n choose r) p^r (1 − p)^{n−r}
= Σ_{r=0}^n r [ n! / (r!(n − r)!) ] p^r (1 − p)^{n−r}
= np Σ_{r=1}^n [ (n − 1)! / ((r − 1)!(n − r)!) ] p^{r−1} (1 − p)^{n−r}
= np Σ_{r=0}^{n−1} [ (n − 1)! / (r!(n − 1 − r)!) ] p^r (1 − p)^{n−1−r}
= np Σ_{r=0}^{n−1} (n−1 choose r) p^r (1 − p)^{n−1−r}
= np.
28
7.4 Function of a random variable
Composition of f : R → R and X defines a new random variable f(X) given by
f(X)(ω) = f(X(ω)).
Example 7.3. If a, b and c are constants, then a + bX and (X − c)² are random
variables defined by

(a + bX)(ω) = a + bX(ω)   and   (X − c)²(ω) = (X(ω) − c)².
7.5 Properties of expectation
Theorem 7.4.
1. If X ≥ 0 then E [X] ≥ 0.
2. If X ≥ 0 and E [X] = 0 then P (X = 0) = 1.
3. If a and b are constants then E [a + bX] = a + bE [X].
4. For any random variables X, Y then E [X + Y ] = E [X] + E [Y ].
Properties 3 and 4 show that E is a linear operator.
5. E[X] is the constant c which minimizes E[(X − c)²].

Proof. 1. X ≥ 0 means X(ω) ≥ 0 for all ω ∈ Ω. So E[X] = Σ_{ω∈Ω} pω X(ω) ≥ 0.

2. If there exists ω ∈ Ω with pω > 0 and X(ω) > 0 then E[X] > 0; so if X ≥ 0 and E[X] = 0 then P(X = 0) = 1.

3. E[a + bX] = Σ_{ω∈Ω} (a + bX(ω)) pω = a Σ_{ω∈Ω} pω + b Σ_{ω∈Ω} pω X(ω) = a + b E[X].

4. Σ_ω p(ω)[X(ω) + Y(ω)] = Σ_ω p(ω)X(ω) + Σ_ω p(ω)Y(ω).

5.

E[(X − c)²] = E[(X − E[X] + E[X] − c)²]
= E[(X − E[X])² + 2(X − E[X])(E[X] − c) + (E[X] − c)²]
= E[(X − E[X])²] + 2 E[X − E[X]] (E[X] − c) + (E[X] − c)²
= E[(X − E[X])²] + (E[X] − c)².

This is clearly minimized when c = E[X].
29
8 Further functions of random variables
Expectation of sum is sum of expectations. Variance. Variance of binomial, Poisson and
geometric random variables. Indicator random variable. Reproof of inclusion-exclusion
formula using indicator functions. *Zipf’s law*.
8.1 Expectation of sum is sum of expectations
Henceforth, random variables are assumed to be real-valued whenever the context
makes clear that this is required.
It is worth repeating Theorem 7.4, 4. This fact is very useful.
Theorem 8.1. For any random variables X1, X2, . . . , Xn, for which all the following
expectations exist,
E[ Σ_{i=1}^n Xi ] = Σ_{i=1}^n E[Xi].

Proof.

Σ_ω p(ω)[ X1(ω) + · · · + Xn(ω) ] = Σ_ω p(ω)X1(ω) + · · · + Σ_ω p(ω)Xn(ω).
8.2 Variance
The variance of a random variable X is defined as
Var X = E[(X − E[X])²]

(which we show below equals E[X²] − E[X]²). The standard deviation is √(Var X).

Theorem 8.2 (Properties of variance).

(i) Var X ≥ 0. If Var X = 0, then P(X = E[X]) = 1.

Proof. From Theorem 7.4, properties 1 and 2.

(ii) If a, b are constants, Var(a + bX) = b² Var X.

Proof. Var(a + bX) = E[(a + bX − a − b E[X])²] = b² E[(X − E[X])²] = b² Var X.

(iii) Var X = E[X²] − E[X]².
30
Proof.
E[(X − E[X])²] = E[X² − 2X E[X] + (E[X])²]
= E[X²] − 2 E[X] E[X] + E[X]²
= E[X²] − E[X]².

Binomial. If X ∼ B(n, p) then Var(X) = np(1 − p).

E[X(X − 1)] = Σ_{r=0}^n r(r − 1) [ n! / (r!(n − r)!) ] p^r (1 − p)^{n−r}
= n(n − 1)p² Σ_{r=2}^n (n−2 choose r−2) p^{r−2} (1 − p)^{(n−2)−(r−2)}
= n(n − 1)p².

Hence Var(X) = n(n − 1)p² + np − (np)² = np(1 − p).
Poisson. If X ∼ P(λ) then Var(X) = λ (from the binomial, by letting p → 0,
np → λ.) See also proof in Lecture 12.
Geometric. If X has the geometric distribution P(X = r) = p q^r, r = 0, 1, . . . ,
with p + q = 1, then E[X] = q/p and Var X = q/p².

E[X] = Σ_{r=0}^∞ r p q^r = pq Σ_{r=0}^∞ r q^{r−1} = pq Σ_{r=0}^∞ d/dq (q^r) = pq d/dq ( 1/(1 − q) ) = pq (1 − q)^{−2} = q/p.

The r.v. Y = X + 1 with the ‘shifted-geometric distribution’ has E[Y] = 1/p.

E[X²] = Σ_{r=0}^∞ r² p q^r = pq ( Σ_{r=1}^∞ r(r + 1) q^{r−1} − Σ_{r=1}^∞ r q^{r−1} )
= pq ( 2/(1 − q)³ − 1/(1 − q)² ) = 2q/p² − q/p.

Var X = E[X²] − E[X]² = 2q/p² − q/p − q²/p² = q/p².

Also, Var Y = q/p², since adding a constant does not change the variance.
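The means and variances derived in this lecture are easy to check numerically. The Python sketch below is my addition (not in the notes); it sums the probability series directly, truncating the Poisson and geometric tails, and the parameter values are arbitrary.

from math import comb, exp, factorial

def moments(pmf):
    """Return (mean, variance) of a discrete distribution given as {value: probability}."""
    mean = sum(x * p for x, p in pmf.items())
    var = sum(x * x * p for x, p in pmf.items()) - mean ** 2
    return mean, var

n, p, lam, q = 10, 0.3, 2.5, 0.6
binomial  = {r: comb(n, r) * p**r * (1 - p)**(n - r) for r in range(n + 1)}
poisson   = {r: lam**r / factorial(r) * exp(-lam) for r in range(60)}     # tail truncated
geometric = {r: (1 - q) * q**r for r in range(500)}                       # P(X = r) = p q^r, p = 1 - q

print(moments(binomial))    # (np, np(1-p)) = (3.0, 2.1)
print(moments(poisson))     # (lambda, lambda) = (2.5, 2.5)
print(moments(geometric))   # (q/p, q/p^2) = (1.5, 3.75)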
8.3 Indicator random variables
The indicator function I[A] of an event A ⊂ Ω is the function

I[A](ω) = 1 if ω ∈ A;  0 if ω ∉ A.    (8.1)

I[A] is a random variable. It may also be written IA. It has the following properties.

1. E[I[A]] = Σ_{ω∈Ω} pω I[A](ω) = P(A).

2. I[A^c] = 1 − I[A].

3. I[A ∩ B] = I[A] I[B].

4. I[A ∪ B] = I[A] + I[B] − I[A] I[B].

Proof. I[A ∪ B](ω) = 1 if and only if ω ∈ A or ω ∈ B, and this agrees with
I[A](ω) + I[B](ω) − I[A](ω)I[B](ω) (check the cases ω in both, in exactly one, or in neither).
Example 8.3. Suppose n ≥ 2 couples are seated at random around a table with men
and women alternating. Let N be the number of husbands seated next to their wives.
Calculate E [N] and the Var(N).
Let Ai = {couple i are together}.
N = Σ_{i=1}^n I[Ai],

and

E[N] = E[ Σ_{i=1}^n I[Ai] ] = Σ_{i=1}^n E[ I[Ai] ] = Σ_{i=1}^n 2/n = n (2/n) = 2.

E[N²] = E[ ( Σ_{i=1}^n I[Ai] )² ] = E[ Σ_{i=1}^n I[Ai]² + 2 Σ_{i<j} I[Ai] I[Aj] ]
= n E[ I[Ai]² ] + n(n − 1) E[ I[A1] I[A2] ].

Now E[ I[Ai]² ] = E[ I[Ai] ] = 2/n, and

E[ I[A1] I[A2] ] = E[ I[A1 ∩ A2] ] = P(A1 ∩ A2) = P(A1) P(A2 | A1)
= (2/n) [ (1/(n − 1))(1/(n − 1)) + ((n − 2)/(n − 1))(2/(n − 1)) ] = (2/n) (2n − 3)/(n − 1)².

Hence

Var N = E[N²] − E[N]² = n (2/n) + n(n − 1) (2/n) (2n − 3)/(n − 1)² − 2² = 2(n − 2)/(n − 1).
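The values E[N] = 2 and Var N = 2(n − 2)/(n − 1) can be checked by simulation. The Python sketch below is mine, not the notes’, and assumes the husbands’ seats are fixed while the wives are assigned uniformly at random to the alternating seats, the wife in female seat k sitting between husbands k and k + 1 (mod n).

import random
from statistics import mean, variance

def simulate(n, trials=100_000):
    counts = []
    for _ in range(trials):
        seat = list(range(n))        # seat[k] = index of the wife in the female seat between men k and k+1
        random.shuffle(seat)
        together = sum(seat[k] in (k, (k + 1) % n) for k in range(n))
        counts.append(together)
    return mean(counts), variance(counts)

print(simulate(5))   # expect mean 2 and variance 2(5-2)/(5-1) = 1.5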
8.4 Reproof of inclusion-exclusion formula
Proof. Let Ij be an indicator variable for the event Aj. Let

Sr = Σ_{i1 < i2 < ··· < ir} I_{i1} I_{i2} · · · I_{ir},   sr = E[Sr] = Σ_{i1 < ··· < ir} P(A_{i1} ∩ · · · ∩ A_{ir}).

Then

1 − Π_{j=1}^n (1 − Ij) = S1 − S2 + · · · + (−1)^{n−1} Sn,

so

P( ∪_{j=1}^n Aj ) = E[ 1 − Π_{j=1}^n (1 − Ij) ] = s1 − s2 + · · · + (−1)^{n−1} sn.
8.5 Zipf’s law
(Not examinable.) The most common word in English is the, which occurs about one-
tenth of the time in a typical text; the next most common word is of, which occurs
about one-twentieth of the time; and so forth. It appears that words occur with frequencies
inversely proportional to their ranks. (A table of word frequencies from Darwin’s Origin of
Species, shown as an image in the original notes, is omitted here.)
This rule, called Zipf’s Law, has also been found to apply in such widely varying places
as the wealth of individuals, the size of cities, and the amount of traffic on webservers.
Suppose we have a social network of n people and the incremental value that a person
obtains from other people being part of a network varies as Zipf’s Law predicts. So the
total value that one person obtains is proportional to 1 + 1/2 + · · · + 1/(n − 1) ≈ log n.
Since there are n people, the total value of the social network is n log n.
This is empirically a better estimate of network value than Metcalfe’s Law, which posits
that the value of the network grows as n2
because each of n people can connect with
n − 1 others. It has been suggested that the misapplication of Metcalfe’s Law was a
contributor to the inflated pricing of Facebook shares.
33
9 Independent random variables
Independence of random variables and properties. Variance of a sum. Efron’s dice.
*Cycle lengths in a random permutation*. *Names in boxes problem*.
9.1 Independent random variables
Discrete random variables X1, . . . , Xn are independent if and only if for any x1, . . . , xn
P(X1 = x1, X2 = x2, . . . , Xn = xn) = Π_{i=1}^n P(Xi = xi).

Theorem 9.1 (Preservation of independence). If X1, . . . , Xn are independent random
variables and f1, f2, . . . , fn are functions R → R then f1(X1), . . . , fn(Xn) are independent
random variables.

Proof.

P(f1(X1) = y1, . . . , fn(Xn) = yn) = Σ_{x1: f1(x1)=y1} · · · Σ_{xn: fn(xn)=yn} P(X1 = x1, . . . , Xn = xn)
= Π_{i=1}^n Σ_{xi: fi(xi)=yi} P(Xi = xi) = Π_{i=1}^n P(fi(Xi) = yi).

Theorem 9.2 (Expectation of a product). If X1, . . . , Xn are independent random
variables all of whose expectations exist then:

E[ Π_{i=1}^n Xi ] = Π_{i=1}^n E[Xi].

Proof. Write Ri for R_{Xi} (or Ω_{Xi}), the range of Xi.

E[ Π_{i=1}^n Xi ] = Σ_{x1∈R1} · · · Σ_{xn∈Rn} x1 · · · xn P(X1 = x1, X2 = x2, . . . , Xn = xn)
= Π_{i=1}^n ( Σ_{xi∈Ri} xi P(Xi = xi) ) = Π_{i=1}^n E[Xi].
Notes.
(i) In Theorem 8.1 we had E[ Σ_{i=1}^n Xi ] = Σ_{i=1}^n E[Xi] without requiring independence.

(ii) In general, Theorems 8.1 and 9.2 are not true if n is replaced by ∞.

Theorem 9.3. If X1, . . . , Xn are independent random variables, f1, . . . , fn are functions
R → R, and {E[fi(Xi)]}i all exist, then:

E[ Π_{i=1}^n fi(Xi) ] = Π_{i=1}^n E[fi(Xi)].

Proof. This follows from the previous two theorems.
9.2 Variance of a sum
Theorem 9.4. If X1, . . . , Xn are independent random variables then:
Var( Σ_{i=1}^n Xi ) = Σ_{i=1}^n Var Xi.

Proof. In fact, we only need pairwise independence.

Var( Σ_{i=1}^n Xi ) = E[ ( Σ_{i=1}^n Xi )² ] − ( E[ Σ_{i=1}^n Xi ] )²
= E[ Σ_i Xi² + Σ_{i≠j} Xi Xj ] − ( Σ_i E[Xi] )²
= Σ_i E[Xi²] + Σ_{i≠j} E[Xi Xj] − Σ_i E[Xi]² − Σ_{i≠j} E[Xi] E[Xj]
= Σ_i ( E[Xi²] − E[Xi]² )
= Σ_{i=1}^n Var Xi.

Corollary 9.5. If X1, . . . , Xn are independent identically distributed random variables
then

Var( (1/n) Σ_{i=1}^n Xi ) = (1/n) Var Xi.

Proof.

Var( (1/n) Σ_{i=1}^n Xi ) = (1/n²) Var( Σ_i Xi ) = (1/n²) Σ_{i=1}^n Var Xi = (1/n) Var Xi.
35
Example 9.6. If X1, . . . , Xn are independent, identically distributed (i.i.d.) Bernoulli
random variables, ∼ B(1, p), then Y = X1 + · · · + Xn is a binomial random variable,
∼ B(n, p).
Since Var(Xi) = E[Xi²] − (E[Xi])² = p − p² = p(1 − p), we have Var(Y) = np(1 − p).

Example 9.7 [Experimental Design]. Two rods of unknown lengths a, b. A rule can
measure the length but with error having 0 mean (unbiased) and variance σ². Errors
are independent from measurement to measurement. To estimate a, b we could take
separate measurements A, B of each rod:

E[A] = a, Var A = σ²;   E[B] = b, Var B = σ².

Can we do better using two measurements? Yes! Measure a + b as X and a − b as Y:

E[X] = a + b, Var X = σ²;   E[Y] = a − b, Var Y = σ².

E[(X + Y)/2] = a, Var((X + Y)/2) = σ²/2;   E[(X − Y)/2] = b, Var((X − Y)/2) = σ²/2.

So this is better.
9.3 Efron’s dice
Example 9.8 [Efron’s dice]. Consider four nonstandard dice A, B, C and D (their face
values are shown in a picture in the original notes). If each of the dice is rolled, with
respective outcomes A, B, C and D, then

P(A > B) = P(B > C) = P(C > D) = P(D > A) = 2/3.
It is good to appreciate that such non-transitivity can happen.
Of course we can define other ordering relations between random variables that are
transitive. The ordering defined by X ≥E Y iff EX ≥ EY , is called expectation
ordering. The ordering defined by X ≥ st Y iff P(X ≥ t) ≥ P(Y ≥ t) for all t is called
stochastic ordering. We say more about this in §17.3.
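The face values are only given as a picture in the original notes; the commonly quoted Efron dice are A = (4,4,4,4,0,0), B = (3,3,3,3,3,3), C = (6,6,2,2,2,2), D = (5,5,5,1,1,1), and these are assumed in the Python sketch below (my addition), which verifies the cyclic 2/3 probabilities exactly.

from itertools import product
from fractions import Fraction

dice = {"A": (4, 4, 4, 4, 0, 0), "B": (3, 3, 3, 3, 3, 3),
        "C": (6, 6, 2, 2, 2, 2), "D": (5, 5, 5, 1, 1, 1)}

def p_beats(x, y):
    """Exact probability that die x shows a higher value than die y."""
    wins = sum(a > b for a, b in product(dice[x], dice[y]))
    return Fraction(wins, 36)

for pair in ("AB", "BC", "CD", "DA"):
    print(pair, p_beats(*pair))   # each prints 2/3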
9.4 Cycle lengths in a random permutation
Any permutation of 1, 2, . . . , n can be decomposed into cycles. For example, if (1,2,3,4)
is permuted to (3,2,1,4) this is decomposed as (3,1) (2) and (4). It is the composition
of one 2-cycle and two 1-cycles.
• What is the probability that a given element lies in a cycle of length m (an
m-cycle)?
Answer:

[(n − 1)/n] [(n − 2)/(n − 1)] · · · [(n − m + 1)/(n − m + 2)] [1/(n − m + 1)] = 1/n.

• What is the expected number of m-cycles?

Let Ii be an indicator for the event that i is in an m-cycle.

Answer: (1/m) E[ Σ_{i=1}^n Ii ] = (1/m) n (1/n) = 1/m.

• Suppose m > n/2. Let pm be the probability that an m-cycle exists. Since there
can be at most one cycle of size m > n/2,

pm · 1 + (1 − pm) · 0 = E(number of m-cycles) = 1/m  ⟹  pm = 1/m.

Hence the probability of some large cycle of size m > n/2 is

Σ_{m: m > n/2} pm ≤ 1/⌈n/2⌉ + · · · + 1/n ≈ log 2 = 0.6931.
Names in boxes problem. Names of 100 prisoners are placed in 100 wooden boxes,
one name to a box, and the boxes are lined up on a table in a room. One by one, the
prisoners enter the room; each may look in at most 50 boxes, but must leave the room
exactly as he found it and is permitted no further communication with the others.
The prisoners may plot their strategy in advance, and they are going to need it, because
unless every prisoner finds his own name all will subsequently be executed. Find a
strategy with which their probability of success exceeds 0.30.
Answer: The prisoners should use the following strategy. Prisoner i should start by
looking in box i. If he finds the name of prisoner i1 he should next look in box i1. He
continues in this manner looking through a sequence of boxes i, i1, i2, . . . , i49. His own
name is contained in the box which points to the box where he started, namely i, so he
will find his own name iff (in the random permutation of names in boxes) his name lies
in a cycle of length ≤ 50. Every prisoner will find his name in a cycle of length ≤ 50
provided there is no large cycle. This happens with probability 1 − 0.6931 > 0.30.
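The cycle-following strategy is easy to simulate. The Python sketch below is not part of the original notes; the estimate it prints should come out a little above 0.31.

import random

def prisoners_succeed(n=100, opens=50):
    boxes = list(range(n))
    random.shuffle(boxes)             # boxes[i] = name hidden in box i
    for prisoner in range(n):
        box = prisoner
        for _ in range(opens):
            if boxes[box] == prisoner:
                break                  # this prisoner found his own name
            box = boxes[box]
        else:
            return False               # one failure means all are executed
    return True

trials = 20_000
print(sum(prisoners_succeed() for _ in range(trials)) / trials)   # roughly 0.31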
10 Inequalities
Jensen’s, AM–GM and Cauchy-Schwarz inequalities. Covariance. X, Y independent
=⇒ Cov(X, Y ) = 0, but not conversely. *Information entropy*.
10.1 Jensen’s inequality
A function f : (a, b) → R is convex if for all x1, x2 ∈ (a, b) and λ1 ≥ 0, λ2 ≥ 0 with
λ1 + λ2 = 1,

λ1 f(x1) + λ2 f(x2) ≥ f(λ1 x1 + λ2 x2).

It is strictly convex if strict inequality holds when x1 ≠ x2 and 0 < λ1 < 1.

(The usual picture: at the point λ1 x1 + λ2 x2 the chord joining (x1, f(x1)) and (x2, f(x2)) lies above the function.)

A function f is concave (strictly concave) if −f is convex (strictly convex).

Fact. If f is a twice differentiable function and f''(x) ≥ 0 for all x ∈ (a, b) then f is
convex [exercise in Analysis I]. It is strictly convex if f''(x) > 0 for all x ∈ (a, b).
Theorem 10.1 (Jensen’s inequality). Let f : (a, b) → R be a convex function. Then
Σ_{i=1}^n pi f(xi) ≥ f( Σ_{i=1}^n pi xi )

for all x1, . . . , xn ∈ (a, b) and p1, . . . , pn ∈ (0, 1) such that Σ_i pi = 1. Furthermore if f
is strictly convex then equality holds iff all the xi are equal.

Jensen’s inequality is saying that if X takes finitely many values then

E[f(X)] ≥ f(E[X]).

Proof. Use induction. The case n = 2 is the definition of convexity. Suppose that the
theorem is true for n − 1. Let p = (p1, . . . , pn) be a distribution (i.e. pi ≥ 0 for all i
and Σ_i pi = 1). The inductive step that proves the theorem is true for n is

f(p1 x1 + · · · + pn xn) = f( p1 x1 + (p2 + · · · + pn) (p2 x2 + · · · + pn xn)/(p2 + · · · + pn) )
≤ p1 f(x1) + (p2 + · · · + pn) f( (p2 x2 + · · · + pn xn)/(p2 + · · · + pn) )
≤ p1 f(x1) + (p2 + · · · + pn) Σ_{i=2}^n [ pi/(p2 + · · · + pn) ] f(xi)
= Σ_{i=1}^n pi f(xi).
10.2 AM–GM inequality
Corollary 10.2 (AM–GM inequality). Given positive real numbers x1, . . . , xn,
( Π_{i=1}^n xi )^{1/n} ≤ (1/n) Σ_{i=1}^n xi.    (10.1)

Proof. The function f(x) = − log x is convex. Consider a random variable X such that
P(X = xi) = 1/n, i = 1, . . . , n. By using Jensen’s inequality, (10.1) follows because

E f(X) ≥ f(EX)  ⟹  (1/n) Σ_i (− log xi) ≥ − log( (1/n) Σ_i xi ).
10.3 Cauchy-Schwarz inequality
Theorem 10.3. For any random variables X and Y ,
E[XY]² ≤ E[X²] E[Y²].

Proof. Suppose E[Y²] > 0 (else Y = 0). Let W = X − Y E[XY]/E[Y²]. Then

E[W²] = E[X²] − 2 E[XY]²/E[Y²] + E[XY]²/E[Y²] = E[X²] − E[XY]²/E[Y²] ≥ 0,

from which the Cauchy-Schwarz inequality follows. Equality occurs only if W = 0.

(A ‘proof without words’ picture from Paintings, Plane Tilings, & Proofs, R. B. Nelsen, appears here in the original notes.)
10.4 Covariance and correlation
For two random variables X and Y, we define the covariance between X and Y as
Cov(X, Y ) = E[(X − EX)(Y − EY )].
Properties of covariance (easy to prove, so proofs omitted) are:
• If c is a constant,
· Cov(X, c) = 0,
· Cov(X + c, Y ) = Cov(X, Y ).
• Cov(X, Y ) = Cov(Y, X).
• Cov(X, Y ) = EXY − EXEY .
• Cov(X + Z, Y ) = Cov(X, Y ) + Cov(Z, Y ).
• Cov(X, X) = Var(X).
• Var(X + Y ) = Var(X) + Var(Y ) + 2 Cov(X, Y ).
• If X and Y are independent then Cov(X, Y ) = 0.
However, as the following example shows, the converse is not true.
40
Example 10.4. Suppose that (X, Y ) is equally likely to take three possible values
(2, 0), (−1, 1), (−1, −1). Then EX = EY = 0 and EXY = 0, so Cov(X, Y) = 0. But
X = 2 ⟺ Y = 0, so X and Y are not independent.

The correlation coefficient (or just the correlation) between random variables X and
Y with Var(X) > 0 and Var(Y) > 0 is

Corr(X, Y) = Cov(X, Y) / √( Var(X) Var(Y) ).
Corollary 10.5. | Corr(X, Y )| ≤ 1.
Proof. Apply Cauchy-Schwarz to X − EX and Y − EY .
10.5 Information entropy
Suppose an event A occurs with probability P(A) = p. How surprising is it? Let’s try
to invent a ‘surprise function’, say S(p). What properties should this have?
Since a certain event is unsurprising we would like S(1) = 0 . We should also like
S(p) to be decreasing and continuous in p. If A and B are independent events then we
should like S(P(A ∩ B)) = S(P(A)) + S(P(B)).
It turns out that the only function with these properties is one of the form

S(p) = −c log_a p,

with c > 0. Take c = 1, a = 2. If X is a random variable that takes values 1, . . . , n
with probabilities p1, . . . , pn then on average the surprise obtained on learning X is

H(X) = E[S(pX)] = − Σ_i pi log2 pi.

This is the information entropy of X. It is an important quantity in information
theory. The ‘log’ can be taken to any base, but using base 2, nH(X) is roughly the
expected number of binary bits required to report the result of n experiments in which
X1, . . . , Xn are i.i.d. observations from distribution (pi, 1 ≤ i ≤ n) and we encode our
reporting of the results of experiments in the most efficient way.

Let’s use Jensen’s inequality to prove the entropy is maximized by p1 = · · · = pn = 1/n.
Consider f(x) = − log x, which is a convex function. We may assume pi > 0 for all i.
Let X be a r.v. such that X = 1/pi with probability pi. Then

− Σ_{i=1}^n pi log pi = −E[f(X)] ≤ −f(EX) = −f(n) = log n = − Σ_{i=1}^n (1/n) log(1/n).
41
11 Weak law of large numbers
Markov and Chebyshev inequalities. Weak law of large numbers. *Weierstrass approx-
imation theorem*. *Benford’s law*.
11.1 Markov inequality
Theorem 11.1. If X is a random variable with E|X| < ∞ and a > 0, then

P(|X| ≥ a) ≤ E|X| / a.

Proof. I[{|X| ≥ a}] ≤ |X|/a (as the left-hand side is 0 or 1, and if 1 then the right-hand
side is at least 1). So

P(|X| ≥ a) = E[ I[{|X| ≥ a}] ] ≤ E[ |X|/a ] = E|X| / a.
11.2 Chebyshev inequality
Theorem 11.2. If X is a random variable with E[X²] < ∞ and ε > 0, then

P(|X| ≥ ε) ≤ E[X²] / ε².

Proof. Similarly to the proof of the Markov inequality,

I[{|X| ≥ ε}] ≤ X² / ε².

Take expected value.

1. The result is “distribution free” because no assumption need be made about the
distribution of X (other than E[X²] < ∞).

2. It is the “best possible” inequality, in the sense that for some X the inequality
becomes an equality. Take X = −ε, 0, and ε, with probabilities c/(2ε²), 1 − c/ε²
and c/(2ε²), respectively. Then

E[X²] = c,   P(|X| ≥ ε) = c/ε² = E[X²]/ε².

3. If µ = EX then applying the inequality to X − µ gives

P(|X − µ| ≥ ε) ≤ Var X / ε².
42
11.3 Weak law of large numbers
Theorem 11.3 (WLLN). Let X1, X2, . . . be a sequence of independent identically dis-
tributed (i.i.d.) random variables with mean µ and variance σ² < ∞. Let

Sn = Σ_{i=1}^n Xi.

Then, for all ε > 0,

P( |Sn/n − µ| ≥ ε ) → 0 as n → ∞.

We write this as

Sn/n →p µ,

which reads as ‘Sn/n tends in probability to µ’.

Proof. By Chebyshev’s inequality,

P( |Sn/n − µ| ≥ ε ) ≤ E[(Sn/n − µ)²] / ε²
= E[(Sn − nµ)²] / (n²ε²)    (properties of expectation)
= Var Sn / (n²ε²)    (since E Sn = nµ)
= nσ² / (n²ε²)    (since Var Sn = nσ²)
= σ² / (nε²) → 0.
Remark. We cannot relax the requirement that X1, X2, . . . be independent. For
example, we could not take X1 = X2 = · · · , where X1 is equally likely to be 0 or 1.
Example 11.4. Repeatedly toss a coin that comes up heads with probability p. Let
Ai be the event that the ith toss is a head. Let Xi = I[Ai]. Then
Sn/n = (number of heads) / (number of trials).

Now µ = E[I[Ai]] = P(Ai) = p, so the WLLN states that

P( |Sn/n − p| ≥ ε ) → 0 as n → ∞,

which recovers the intuitive (or frequentist) interpretation of probability.
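A simulation illustrates the convergence in this coin-tossing example; the Python sketch below is my addition, with arbitrary choices p = 0.3 and ε = 0.05.

import random

def tail_probability(n, p=0.3, eps=0.05, trials=5_000):
    """Estimate P(|S_n/n - p| >= eps) for n tosses of a p-coin."""
    bad = 0
    for _ in range(trials):
        heads = sum(random.random() < p for _ in range(n))
        bad += abs(heads / n - p) >= eps
    return bad / trials

for n in (10, 100, 1000):
    print(n, tail_probability(n))   # the estimates decrease towards 0 as n grows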
Strong law of large numbers Why do we use the word ‘weak’? Because there
is also a ‘strong’ form of a law of large numbers, which is

P( Sn/n → µ as n → ∞ ) = 1.

This is not the same as the weak form. What does this mean? The idea is that ω ∈ Ω
determines

( Sn/n, n = 1, 2, . . . )

as a sequence of real numbers. Hence it either tends to µ or it does not.

P( { ω : Sn(ω)/n → µ as n → ∞ } ) = 1.

We write this as

Sn/n →a.s. µ,

which is read as ‘Sn/n tends almost surely to µ’.
11.4 Probabilistic proof of Weierstrass approximation theorem
Theorem 11.5 (not examinable). If f is a continuous real-valued function on the
interval [0, 1] and ε > 0, then there exists a polynomial function p such that

|p(x) − f(x)| < ε for all x ∈ [0, 1].

Proof. From Analysis I: A continuous function on [0, 1] is bounded. So assume, WLOG,
|f(x)| ≤ 1. From Analysis II: A continuous function on [0, 1] is uniformly continuous.
This means that there exist δ1, δ2, . . . such that if x, y ∈ [0, 1] and |x − y| < δm then
|f(x) − f(y)| < 1/m.

We define the so-called Bernstein polynomials:

bk,n(x) = (n choose k) x^k (1 − x)^{n−k}, 0 ≤ k ≤ n.

Then take

pn(x) = Σ_{k=0}^n f(k/n) bk,n(x).
Fix an x ∈ [0, 1] and let X be a binomial random variable with distribution B(n, x).
Notice that pn(x) = E[f(X/n)]. Let A be the event {|f(X/n) − f(x)| ≥ 1/m}. Then
|pn(x) − f(x)| = | E[ f(X/n) − f(x) ] |
≤ (1/m) P(A^c) + E[ |f(X/n) − f(x)| | A ] P(A)
≤ 1/m + 2 P(A).

By using Chebyshev’s inequality and the fact that A ⊆ {|X/n − x| ≥ δm},

P(A) ≤ P(|X/n − x| ≥ δm) ≤ x(1 − x)/(n δm²) ≤ 1/(4n δm²).

Now choose m and n large enough so that 1/m + 1/(2n δm²) < ε and we have
|pn(x) − f(x)| < ε.
We have shown this for all x ∈ [0, 1].
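The Bernstein polynomials are easy to compute, and doing so shows the approximation at work. The Python sketch below is mine; f(x) = |x − 1/2| is an arbitrary test function and the grid size is arbitrary.

from math import comb

def bernstein(f, n, x):
    """p_n(x) = sum_k f(k/n) C(n,k) x^k (1-x)^(n-k)."""
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k) for k in range(n + 1))

f = lambda x: abs(x - 0.5)
grid = [i / 200 for i in range(201)]
for n in (10, 100, 1000):
    err = max(abs(bernstein(f, n, x) - f(x)) for x in grid)
    print(n, round(err, 4))   # the maximum error shrinks as n increases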
11.5 Benford’s law
A set of numbers satisfies Benford’s law if the probability that a number begins with
the digit k is log10((k + 1)/k).

This is true, for example, of the Fibonacci numbers: {Fn} = {1, 1, 2, 3, 5, 8, . . . }.

Let Ak(n) be the number of the first n Fibonacci numbers that begin with a k. See the
table below for n = 10000. The fit is extremely good.

k   Ak(10000)   log10((k + 1)/k)
1   3011        0.30103
2   1762        0.17609
3   1250        0.12494
4   968         0.09691
5   792         0.07918
6   668         0.06695
7   580         0.05799
8   513         0.05115
9   456         0.04576
‘Explanation’. Let α = (1 + √5)/2. It is well known that when n is large, Fn ≈ αⁿ/√5,
so log10 Fn ≈ n log10 α − log10 √5. A number m begins with the digit k if the
fractional part of log10 m lies in the interval [log10 k, log10(k + 1)). Let ]x[ = x − ⌊x⌋
denote the fractional part of x. A famous theorem of Weyl states the following: If β
is irrational, then the sequence of fractional parts {]nβ[}_{n=1}^∞ is uniformly distributed.
This result is certainly very plausible, but a proper proof is beyond our scope. We
apply this with β = log10 α (which is irrational), noting that the fractional part of
log10 Fn is then approximately ]nβ − log10 √5[; shifting by a constant (mod 1) does not
affect the uniform distribution.
Benford’s law also arises when one is concerned with numbers whose measurement
scale is arbitrary. For example, whether we are measuring the areas of world lakes
in km2
or miles2
the distribution of the first digit should surely be the same. The
distribution of the first digit of X is determined by the distribution of the fractional
part of log10 X. Given a constant c, the distribution of the first digit of cX is determined
by the distribution of the fractional part of log10 cX = log10 X + log10 c. The uniform
distribution is the only distribution on [0, 1] that does not change when a constant is
added to it (mod 1). So if we are to have scale invariance then the fractional part of
log10 X must be uniformly distributed, and so must lie in [0, 0.3010] with probability
0.3010.
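The table for the Fibonacci numbers can be regenerated in a few lines. The Python sketch below is not part of the notes; it uses exact integer arithmetic, so there is no rounding issue for large n.

from math import log10
from collections import Counter

def benford_fit(n=10000):
    counts = Counter()
    a, b = 1, 1                        # F1 = F2 = 1
    for _ in range(n):
        counts[int(str(a)[0])] += 1    # leading digit of the current Fibonacci number
        a, b = b, a + b
    for k in range(1, 10):
        print(k, counts[k], round(log10((k + 1) / k), 5))

benford_fit()   # reproduces the table above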
12 Probability generating functions
Distribution uniquely determined by p.g.f. Abel’s lemma. The p.g.f. of a sum of random
variables. Tilings. *Dyck words*.
12.1 Probability generating function
Consider a random variable X, taking values 0, 1, 2, . . . . Let pr = P (X = r), r =
0, 1, 2, . . . . The probability generating function (p.g.f.) of X, or of the distribution
(pr, r = 0, 1, 2, . . . ), is
p(z) = E[z^X] = Σ_{r=0}^∞ P(X = r) z^r = Σ_{r=0}^∞ pr z^r.

Thus p(z) is a polynomial or a power series. As a power series it is convergent for
|z| ≤ 1, by comparison with a geometric series, and

|p(z)| ≤ Σ_r pr |z|^r ≤ Σ_r pr = 1.

We can write pX(z) when we wish to give a reminder that this is the p.g.f. of X.

Example 12.1 [A die]. pr = 1/6, r = 1, . . . , 6,

p(z) = E[z^X] = (1/6)(z + z² + · · · + z⁶) = (z/6) (1 − z⁶)/(1 − z).
Theorem 12.2. The distribution of X is uniquely determined by the p.g.f. p(z).
Proof. We find p0 from p0 = p(0). We know that we can differentiate p(z) term by
term for |z| ≤ 1. Thus
p'(z) = p1 + 2 p2 z + 3 p3 z² + · · ·

so p'(0) = p1. Repeated differentiation gives

d^i/dz^i p(z) = p^{(i)}(z) = Σ_{r=i}^∞ [ r!/(r − i)! ] pr z^{r−i},

and so p^{(i)}(0) = i! pi. Thus we can recover p0, p1, . . . from p(z).

Theorem 12.3 (Abel’s Lemma).

E[X] = lim_{z→1} p'(z).
46
Proof. First prove ‘≥’. For 0 ≤ z ≤ 1, p'(z) is a nondecreasing function of z, and

p'(z) = Σ_{r=1}^∞ r pr z^{r−1} ≤ Σ_{r=1}^∞ r pr = E[X],

so p'(z) is bounded above. Hence lim_{z→1} p'(z) ≤ E[X].

Now prove ‘≤’. Choose ε > 0. Let N be large enough that Σ_{r=1}^N r pr ≥ E[X] − ε. Then

E[X] − ε ≤ Σ_{r=1}^N r pr = lim_{z→1} Σ_{r=1}^N r pr z^{r−1} ≤ lim_{z→1} Σ_{r=1}^∞ r pr z^{r−1} = lim_{z→1} p'(z).

Since this is true for all ε > 0, we have E[X] ≤ lim_{z→1} p'(z).

Usually, p'(z) is continuous at z = 1, and then E[X] = p'(1).

Similarly we have the following.

Theorem 12.4.

E[X(X − 1)] = lim_{z→1} p''(z).

Proof. The proof is the same as for Abel’s Lemma but with

p''(z) = Σ_{r=2}^∞ r(r − 1) pr z^{r−2}.
Example 12.5. Suppose X has the Poisson distribution with parameter λ.
P(X = r) = (λ^r / r!) e^{−λ}, r = 0, 1, . . . .

Then its p.g.f. is

E[z^X] = Σ_{r=0}^∞ z^r (λ^r / r!) e^{−λ} = e^{λz} e^{−λ} = e^{−λ(1−z)}.

To calculate the mean and variance of X:

p'(z) = λ e^{−λ(1−z)},   p''(z) = λ² e^{−λ(1−z)}.

So

E[X] = lim_{z→1} p'(z) = p'(1) = λ   (since p'(z) is continuous at z = 1),
E[X(X − 1)] = p''(1) = λ²,
Var X = E[X²] − E[X]² = E[X(X − 1)] + E[X] − E[X]² = λ² + λ − λ² = λ.
47
Theorem 12.6. Suppose that X1, X2, . . . , Xn are independent random variables with
p.g.fs p1(z), p2(z), . . . , pn(z). Then the p.g.f. of X1 + X2 + · · · + Xn is
p1(z)p2(z) · · · pn(z).
Proof.

E[z^{X1+X2+···+Xn}] = E[z^{X1} z^{X2} · · · z^{Xn}] = E[z^{X1}] E[z^{X2}] · · · E[z^{Xn}] = p1(z) p2(z) · · · pn(z).

Example 12.7. Suppose X has a binomial distribution, B(n, p). Then

E[z^X] = Σ_{r=0}^n P(X = r) z^r = Σ_{r=0}^n (n choose r) p^r (1 − p)^{n−r} z^r = (1 − p + pz)^n.

This proves that X has the same distribution as Y1 + Y2 + · · · + Yn, where Y1, Y2, . . . , Yn
are i.i.d. Bernoulli random variables, each with

P(Yi = 0) = q = 1 − p,   P(Yi = 1) = p,   E[z^{Yi}] = (1 − p + pz).
Note. Whenever the p.g.f. factorizes it is useful to look to see if the random variable
can be written as a sum of other (independent) random variables.
Example 12.8. If X and Y are independently Poisson distributed with parameters λ
and µ then:
E[z^{X+Y}] = E[z^X] E[z^Y] = e^{−λ(1−z)} e^{−µ(1−z)} = e^{−(λ+µ)(1−z)},

which is the p.g.f. of a Poisson random variable with parameter λ + µ. Since p.g.fs are
1–1 with distributions, X + Y is Poisson distributed with parameter λ + µ.
12.2 Combinatorial applications
Generating functions are useful in many other realms.
Tilings. How many ways can we tile a (2 × n) bathroom with (2 × 1) tiles?
Say fn, where

fn = fn−1 + fn−2,   f0 = f1 = 1.

Let

F(z) = Σ_{n=0}^∞ fn z^n.

Then

fn z^n = fn−1 z^n + fn−2 z^n  ⟹  Σ_{n=2}^∞ fn z^n = Σ_{n=2}^∞ fn−1 z^n + Σ_{n=2}^∞ fn−2 z^n,

and so, since f0 = f1 = 1,

F(z) − f0 − z f1 = z(F(z) − f0) + z² F(z)
F(z)(1 − z − z²) = f0(1 − z) + z f1 = 1 − z + z = 1.

Thus F(z) = (1 − z − z²)^{−1}. Let

α1 = (1 + √5)/2,   α2 = (1 − √5)/2,

so that

F(z) = 1/[(1 − α1 z)(1 − α2 z)] = [1/(α1 − α2)] ( α1/(1 − α1 z) − α2/(1 − α2 z) )
= [1/(α1 − α2)] ( α1 Σ_{n=0}^∞ α1^n z^n − α2 Σ_{n=0}^∞ α2^n z^n ).

The coefficient of z^n, that is fn, is the Fibonacci number

fn = (α1^{n+1} − α2^{n+1}) / (α1 − α2).
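The closed form can be checked against the recurrence; a short Python sketch (my addition) follows.

def f_recurrence(n):
    a, b = 1, 1            # f0, f1
    for _ in range(n):
        a, b = b, a + b
    return a

def f_closed_form(n):
    sqrt5 = 5 ** 0.5
    a1, a2 = (1 + sqrt5) / 2, (1 - sqrt5) / 2
    return round((a1 ** (n + 1) - a2 ** (n + 1)) / (a1 - a2))

print([f_recurrence(n) for n in range(10)])
print([f_closed_form(n) for n in range(10)])   # the two lists agree: 1, 1, 2, 3, 5, 8, ...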
Dyck words. There are 5 Dyck words of length 6:
()()(), (())(), ()(()), ((())), (()()).
In general, a Dyck word of length 2n is a balanced string of n ‘(’ and n ‘)’.
Let Cn be the number of Dyck words of length 2n. What is this?
In general, every Dyck word w of positive length can be written uniquely as w = (w1)w2,
where w1, w2 are Dyck words. So

Cn+1 = Σ_{i=0}^n Ci Cn−i,   taking C0 = 1.

Let c(x) = Σ_{n=0}^∞ Cn x^n. Then c(x) = 1 + x c(x)². So

c(x) = ( 1 − √(1 − 4x) ) / (2x) = Σ_{n=0}^∞ (2n choose n) x^n / (n + 1).

Cn = [1/(n + 1)] (2n choose n) is the nth Catalan number. It is the number of Dyck words of length
2n, and also has many applications in combinatorial problems. It is the number of
paths from (0, 0) to (2n, 0) that are always nonnegative, i.e. such that there are always
at least as many ups as downs (heads as tails). We will make use of this result in a
later discussion of random matrices in §24.3.
The first Catalan numbers for n = 0, 1, 2, 3, . . . are 1, 1, 2, 5, 14, 42, 132, 429, . . . .
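The recurrence, the closed form and a brute-force count of balanced strings all agree, as the Python sketch below (not in the notes) confirms for small n.

from math import comb
from itertools import product

def catalan_recurrence(m):
    C = [1]
    for n in range(m):
        C.append(sum(C[i] * C[n - i] for i in range(n + 1)))
    return C

def count_dyck(n):
    def balanced(w):
        depth = 0
        for ch in w:
            depth += 1 if ch == "(" else -1
            if depth < 0:
                return False
        return depth == 0
    return sum(balanced(w) for w in product("()", repeat=2 * n))

print(catalan_recurrence(7))                          # 1, 1, 2, 5, 14, 42, 132, 429
print([comb(2 * n, n) // (n + 1) for n in range(8)])  # closed form
print([count_dyck(n) for n in range(6)])              # brute force, small n only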
13 Conditional expectation
Conditional distributions. Joint distribution. Conditional expectation and its proper-
ties. Marginals. The p.g.f. for the sum of a random number of terms. *Aggregate loss
and value at risk*. *Conditional entropy*.
13.1 Conditional distribution and expectation
Let X and Y be random variables (in general, not independent) with joint distribu-
tion
P (X = x, Y = y) .
Then the distribution of X is
P(X = x) = Σ_{y∈ΩY} P(X = x, Y = y).

This is called the marginal distribution for X.

Assuming P(Y = y) > 0, the conditional distribution for X given Y = y is

P(X = x | Y = y) = P(X = x, Y = y) / P(Y = y).

The conditional expectation of X given Y = y is

E[X | Y = y] = Σ_{x∈ΩX} x P(X = x | Y = y).
We can also think of E [X | Y ] as the random variable defined by
E [X | Y ] (ω) = E [X | Y = Y (ω)] .
Thus E [X | Y ] : Ω → ΩX, (or : Ω → R if X is real-valued).
Example 13.1. Let X1, X2, . . . , Xn be i.i.d. random variables, with Xi ∼ B(1, p), and
Y = X1 + X2 + · · · + Xn.
Then
P(X1 = 1 | Y = r) = P(X1 = 1, Y = r) / P(Y = r)
= P(X1 = 1, X2 + · · · + Xn = r − 1) / P(Y = r)
= P(X1 = 1) P(X2 + · · · + Xn = r − 1) / P(Y = r)
= p (n−1 choose r−1) p^{r−1} (1 − p)^{n−r} / [ (n choose r) p^r (1 − p)^{n−r} ]
= (n−1 choose r−1) / (n choose r)
= r/n.

So

E[X1 | Y = r] = 0 × P(X1 = 0 | Y = r) + 1 × P(X1 = 1 | Y = r) = r/n,

E[X1 | Y = Y(ω)] = (1/n) Y(ω),

and therefore

E[X1 | Y] = (1/n) Y,  which is a random variable, i.e. a function of Y.
13.2 Properties of conditional expectation
Theorem 13.2. If X and Y are independent then
E [X | Y ] = E [X] .
Proof. If X and Y are independent then for any y ∈ ΩY
E[X | Y = y] = Σ_{x∈ΩX} x P(X = x | Y = y) = Σ_{x∈ΩX} x P(X = x) = E[X].
Theorem 13.3 (tower property of conditional expectation). For any two random
variables, X and Y ,
E[ E[X | Y] ] = E[X].

Proof.

E[ E[X | Y] ] = Σ_y P(Y = y) E[X | Y = y]
= Σ_y P(Y = y) Σ_x x P(X = x | Y = y)
= Σ_y Σ_x x P(X = x, Y = y)
= E[X].

This is also called the law of total expectation. As a special case: if A1, . . . , An is
a partition of the sample space, then E[X] = Σ_{i: P(Ai)>0} E[X | Ai] P(Ai).
13.3 Sums with a random number of terms
Example 13.4. Let X1, X2, . . . be i.i.d. with p.g.f. p(z). Let N be a random variable
independent of X1, X2, . . . with p.g.f. h(z). We now find the p.g.f. of
SN = X1 + X2 + · · · + XN .
E[z^{X1+···+XN}] = E[ E[z^{X1+···+XN} | N] ]
= Σ_{n=0}^∞ P(N = n) E[z^{X1+···+XN} | N = n]
= Σ_{n=0}^∞ P(N = n) (p(z))^n
= h(p(z)).

Then, for example,

E[X1 + · · · + XN] = [ d/dz h(p(z)) ] evaluated at z = 1 = h'(p(1)) p'(1) = h'(1) p'(1) = E[N] E[X1].
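The random-sum formula E[SN] = E[N] E[X1] can be checked by simulation. The Python sketch below is my addition; it takes N Poisson with parameter λ and the Xi Bernoulli with parameter p, in which case h(p(z)) = e^{−λ(1−(1−p+pz))} = e^{−λp(1−z)}, so SN is itself Poisson with mean λp.

import math
import random
from statistics import mean

lam, p = 3.0, 0.4

def poisson(lamb):
    """Sample a Poisson random variable (Knuth's product-of-uniforms method)."""
    L = math.exp(-lamb)
    k, prod = 0, 1.0
    while True:
        prod *= random.random()
        if prod <= L:
            return k
        k += 1

def sample_SN():
    N = poisson(lam)
    SN = sum(random.random() < p for _ in range(N))   # sum of N Bernoulli(p) variables
    return SN, N

samples = [sample_SN() for _ in range(100_000)]
print(mean(s for s, _ in samples), lam * p)   # E[S_N] should be close to lambda * p = 1.2
print(mean(n for _, n in samples) * p)        # E[N] E[X1], the same value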
  • 12. Proof. We count the paths from the origin to a point T above the axis that do not have any equalization (except at the start). Suppose the first step is to a = (1, 1). Now we must count all paths from a to T, minus those that go from a to T but at some point make an equalization, such as the path shown in black below: a a′ T T′ (2n, 0) (0, 0) But notice that every such path that has an equalization is in 1–1 correspondence with a path from a0 = (1, −1) to T. This is the path obtained by reflecting around the axis the part of the path that takes place before the first equalization. The number of paths from a0 to T = (2n, k) equals the number from a to T0 = (2n, k+2). So the number of paths from a to some T 0 that have no equalization is X k=2,4,...,2n #[a → (2n, k)] − #[a → (2n, k + 2)] = #[a → (2n, 2)] = #[a → (2n, 0)]. We want twice this number (since the first step might have been to a0 ), which gives #[(0, 0) → (2n, 0)] = 2n n . So as claimed αn = un = 1 22n 2n n . Arcsine law. The probability that the last equalization occurs at 2k is therefore ukαn−k (since we must equalize at 2k and then not equalize at any of the 2n − 2k subsequent steps). But we have just proved that ukαn−k = ukun−k. Notice that therefore the last equalization occurs at 2n − 2k with the same probability. We will see in Lecture 3 that uk is approximately 1/ √ πk, so the last equalization is at time 2k with probability proportional to 1/ p k(n − k). The probability that the last equalization occurs before the 2kth toss is approximately Z 2k 2n 0 1 π 1 p x(1 − x) dx = (2/π) sin−1 p k/n. For instance, (2/π) sin−1 √ 0.15 = 0.2532. So the probability that the last equalization occurs during either the first or last 15% of the 2n coin tosses is about 0.5064 ( 1/2). This is a nontrivial result that would be hard to have guessed! 5
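The arcsine law is easy to check empirically. Below is a short Monte Carlo sketch in Python (not part of the original notes; the function names and the choice n = 200 are mine) that estimates the probability that the last equalization of a 2n-step walk falls in the first or last 15% of the tosses, and compares it with the arcsine approximation.

import math
import random

def last_equalization(n, rng):
    """Toss 2n fair coins; return k such that the last equalization is at toss 2k."""
    position, last = 0, 0
    for t in range(1, 2 * n + 1):
        position += 1 if rng.random() < 0.5 else -1
        if position == 0:
            last = t
    return last // 2

def estimate_tail_probability(n=200, trials=20_000, seed=1):
    """Estimate P(last equalization lies in the first or last 15% of the walk)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        k = last_equalization(n, rng)
        hits += (k <= 0.15 * n) or (k >= 0.85 * n)
    return hits / trials

print(estimate_tail_probability())                     # roughly 0.50
print(2 * (2 / math.pi) * math.asin(math.sqrt(0.15)))  # 0.5064..., the arcsine-law value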
  • 13. 2 Combinatorial analysis Combinatorial analysis. Fundamental rules. Sampling with and without replacement, with and without regard to ordering. Permutations and combinations. Birthday prob- lem. Binomial coefficient. 2.1 Counting Example 2.1. A menu with 6 starters, 7 mains and 6 desserts has 6 × 7 × 6 = 252 meal choices. Fundamental rule of counting: Suppose r multiple choices are to be made in se- quence: there are m1 possibilities for the first choice; then m2 possibilities for the second choice; then m3 possibilities for the third choice, and so on until after making the first r − 1 choices there are mr possibilities for the rth choice. Then the total number of different possibilities for the set of choices is m1 × m2 × · · · × mr. Example 2.2. How many ways can the integers 1, 2, . . . , n be ordered? The first integer can be chosen in n ways, then the second in n − 1 ways, etc., giving n! = n(n − 1) · · · 1 ways (‘factorial n’). 2.2 Sampling with or without replacement Many standard calculations arising in classical probability involve counting numbers of equally likely outcomes. This can be tricky! Often such counts can be viewed as counting the number of lists of length n that can be constructed from a set of x items X = {1, . . . , x}. Let N = {1, . . . , n} be the set of list positions. Consider the function f : N → X. This gives the ordered list (f(1), f(2), . . . , f(n)). We might construct this list by drawing a sample of size n from the elements of X. We start by drawing an item for list position 1, then an item for list position 2, etc. List Items Position 1 1 2 2 2 2 2 3 3 n n − 1 x x 4 4 6
  • 14. 1. Sampling with replacement. After choosing an item we put it back so it can be chosen again. E.g. list (2, 4, 2, . . . , x, 2) is possible, as shown above. 2. Sampling without replacement. After choosing an item we set it aside. We end up with an ordered list of n distinct items (requires x ≥ n). 3. Sampling with replacement, but requiring each item is chosen at least once (re- quires n ≥ x). These three cases correspond to ‘any f’, ‘injective f’ and ‘surjective f’, respectively Example 2.3. Suppose N = {a, b, c}, X = {p, q, r, s}. How many different injective functions are there mapping N to X? Solution: Choosing the values of f(a), f(b), f(c) in sequence without replacement, we find the number of different injective f : N → X is 4 × 3 × 2 = 24. Example 2.4. I have n keys in my pocket. I select one at random and try it in a lock. If it fails I replace it and try again (sampling with replacement). P(success at rth trial) = (n − 1)r−1 × 1 nr . If keys are not replaced (sampling without replacement) P(success at rth trial) = (n − 1)! n! = 1 n , or alternatively = n − 1 n × n − 2 n − 1 × · · · × n − r + 1 n − r + 2 × 1 n − r + 1 = 1 n . Example 2.5 [Birthday problem]. How many people are needed in a room for it to be a favourable bet (probability of success greater than 1/2) that two people in the room will have the same birthday? Since there are 365 possible birthdays, it is tempting to guess that we would need about 1/2 this number, or 183. In fact, the number required for a favourable bet is only 23. To see this, we find the probability that, in a room with r people, there is no duplication of birthdays; the bet is favourable if this probability is less than one half. Let f(r) be probability that amongst r people there is a match. Then P(no match) = 1 − f(r) = 364 365 · 363 365 · 362 365 · · · · 366 − r 365 . So f(22) = 0.475695 and f(23) = 0.507297. Also f(47) = 0.955. Notice that with 23 people there are 23 2 = 253 pairs and each pair has a probability 1/365 of sharing a birthday. 7
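These values are straightforward to reproduce. A small Python sketch (illustrative only; the function name is mine) computes P(no match) as the product above:

from math import prod

def p_shared_birthday(r):
    """P(at least two of r people share a birthday), with 365 equally likely days."""
    return 1 - prod((365 - i) / 365 for i in range(r))

for r in (22, 23, 47):
    print(r, round(p_shared_birthday(r), 6))   # 0.475695, 0.507297, and about 0.955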
  • 15. Remarks. A slightly different question is this: interrogating an audience one by one, how long will it take on average until we find a first birthday match? Answer: 23.62 (with standard deviation of 12.91). The probability of finding a triple with the same birthday exceeds 0.5 for n ≥ 88. (How do you think I computed that answer?) 2.3 Sampling with or without regard to ordering When counting numbers of possible f : N → X, we might decide that the labels that are given to elements of N and X do or do not matter. So having constructed the set of possible lists (f(1), . . . , f(n)) we might (i) leave lists alone (order matters); (ii) sort them ascending: so (2,5,4) and (4,2,5) both become (2,4,5). (labels of the positions in the list do not matter.) (iii) renumber each item in the list by the number of the draw on which it was first seen: so (2,5,2) and (5,4,5) both become (1,2,1). (labels of the items do not matter.) (iv) do both (ii) then (iii), so (2,5,2) and (8,5,5) both become (1,1,2). (no labels matter.) For example, in case (ii) we are saying that (g(1), . . . , g(n)) is the same as (f(1), . . . , f(n)) if there is permutation of π of 1, . . . , n, such that g(i) = f(π(i)). 2.4 Four cases of enumerative combinatorics Combinations of 1,2,3 (top of page 7) and (i)–(iv) above produce a ‘twelvefold way of enumerative combinatorics’, but involve the partition function and Bell numbers. Let’s consider just the four possibilities obtained from combinations of 1,2 and (i),(ii). 1(i) Sampling with replacement and with ordering. Each location in the list can be filled in x ways, so this can be done in xn ways. 2(i) Sampling without replacement and with ordering. Applying the funda- mental rule, this can be done in x(n) = x(x − 1) · · · (x − n + 1) ways. Another notation for this falling sequential product is xn (read as ‘x to the n falling’). In the special case n = x this is x! (the number of permutations of 1, 2, . . . , x). 2(ii) Sampling without replacement and without ordering. Now we care only which items are selected. (The positions in the list are indistinguishable.) This can be done in x(n)/n! = x n ways, i.e. the answer above divided by n!. 8
  • 16. This is of course the binomial coefficient, equal to the number of distinguishable sets of n items that can be chosen from a set of x items. Recall that x n is the coefficient of tn in (1 + t)x . (1 + t)(1 + t) · · · (1 + t) | {z } x times = x X n=0 x n tn . 1(ii) Sampling with replacement and without ordering. Now we care only how many times each item is selected. (The list positions are indistinguishable; we care only how many items of each type are selected.) The number of distinct f is the number of nonnegative integer solutions to n1 + n2 + · · · + nx = n. Consider n = 7 and x = 5. Think of marking off 5 bins with 4 dividers: |, and then placing 7 ∗s. One outcome is ∗ ∗ ∗ |{z} n1 | ∗ |{z} n2 | |{z} n3 | ∗ ∗ ∗ |{z} n4 | |{z} n5 which corresponds to n1 = 3, n2 = 1, n3 = 0, n4 = 3, n5 = 0. In general, there are x + n − 1 symbols and we are choosing n of them to be ∗. So the number of possibilities is x+n−1 n . Above we have attempted a systematic description of different types of counting prob- lem. However it is often best to just think from scratch, using the fundamental rule. Example 2.6. How may ways can k different flags be flown on m flag poles in a row if ≥ 2 flags may be on the same pole, and order from the top to bottom is important? There are m choices for the first flag, then m+1 for the second. Each flag added creates one more distinct place that the next flag might be added. So m(m + 1) · · · (m + k − 1) = (m + k − 1)! (m − 1)! . Remark. Suppose we have a diamond, an emerald, and a ruby. How many ways can we store these gems in identical small velvet bags? This is case of 1(iii). Think gems ≡ list positions; bags ≡ items. Take each gem, in sequence, and choose a bag to receive it. There are 5 ways: (1, 1, 1), (1, 1, 2), (1, 2, 1), (1, 2, 2), (1, 2, 3). The 1,2,3 are the first, second and third bag to receive a gem. Here we have B(3) = 5 (the Bell numbers). 9
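For concreteness, the four counts of §2.4 can be checked by brute-force enumeration. The following Python sketch (an illustration of my own, with n = 3 and x = 5 chosen arbitrarily) compares itertools enumerations with the formulas:

from itertools import combinations, combinations_with_replacement, permutations, product
from math import comb, perm

n, x = 3, 5   # lists of length n drawn from x items

print(len(list(product(range(x), repeat=n))), x ** n)                 # 1(i):  125, 125
print(len(list(permutations(range(x), n))), perm(x, n))               # 2(i):  60, 60
print(len(list(combinations(range(x), n))), comb(x, n))               # 2(ii): 10, 10
print(len(list(combinations_with_replacement(range(x), n))),
      comb(x + n - 1, n))                                             # 1(ii): 35, 35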
  • 17. 3 Stirling’s formula Multinomial coefficient. Stirling’s formula *and proof*. Examples of application. *Im- proved Stirling’s formula*. 3.1 Multinomial coefficient Suppose we fill successive locations in a list of length n by sampling with replacement from {1, . . . , x}. How may ways can this be done so that the numbers of times that each of 1, . . . , x appears in the list is n1, . . . , nx, respectively where P i ni = n? To compute this: we choose the n1 places in which ‘1’ appears in n n1 ways, then choose the n2 places in which ‘2’ appears in n−n1 n2 ways, etc. The answer is the multinomial coefficient n n1, . . . , nx := n n1 n − n1 n2 n − n1 − n2 n3 · · · n − n1 − · · · − nr−1 nx = n! n1!n2! · · · nx! , with the convention 0! = 1. Fact: (y1 + · · · + yx)n = X n n1, . . . , nx yn1 1 · · · ynx x , where the sum is over all n1, . . . , nx such that n1 +· · ·+nx = n. [Remark. The number of terms in this sum is n+x−1 x−1 , as found in §2.4, 1(ii).] Example 3.1. How may ways can a pack of 52 cards be dealt into bridge hands of 13 cards for each of 4 (distinguishable) players? Answer: 52 13, 13, 13, 13 = 52 13 39 13 26 13 = 52! (13!)4 . This is 53644737765488792839237440000= 5.36447 × 1028 . This is (4n)! n!4 evaluated at n = 13. How might we estimate it for greater n? Answer: (4n)! n!4 ≈ 28n √ 2(πn)3/2 (= 5.49496 × 1028 when n = 13). Interesting facts: (i) If we include situations that might occur part way through a bridge game then there are 2.05 × 1033 possible ‘positions’. 10
  • 18. (ii) The ‘Shannon number’ is the number of possible board positions in chess. It is roughly 1043 (according to Claude Shannon, 1950). The age of the universe is thought to be about 4 × 1017 seconds. 3.2 Stirling’s formula Theorem 3.2 (Stirling’s formula). As n → ∞, log n!en nn+1/2 = log( √ 2π) + O(1/n). The most common statement of Stirling’s formula is given as a corollary. Corollary 3.3. As n → ∞, n! ∼ √ 2πnn+ 1 2 e−n . In this context, ∼ indicates that the ratio of the two sides tends to 1. This is good even for small n. It is always a slight underestimate. n n! Approximation Ratio 1 1 .922 1.084 2 2 1.919 1.042 3 6 5.836 1.028 4 24 23.506 1.021 5 120 118.019 1.016 6 720 710.078 1.013 7 5040 4980.396 1.011 8 40320 39902.395 1.010 9 362880 359536.873 1.009 10 3628800 3598696.619 1.008 Notice that from the Taylor expansion of en = 1 + n + · · · + nn /n! + · · · we have 1 ≤ nn /n! ≤ en . We first prove the weak form of Stirling’s formula, namely that log(n!) ∼ n log n. Proof. (examinable) log n! = Pn 1 log k. Now Z n 1 log x dx ≤ n X 1 log k ≤ Z n+1 1 log x dx, and R z 1 log x dx = z log z − z + 1, and so n log n − n + 1 ≤ log n! ≤ (n + 1) log(n + 1) − n. Divide by n log n and let n → ∞ to sandwich log n! n log n between terms that tend to 1. Therefore log n! ∼ n log n. 11
  • 19. Now we prove the strong form. Proof. (not examinable) Some steps in this proof are like ‘pulling-a-rabbit-out-of-a-hat’. Let dn = log n!en nn+1/2 = log n! − n + 1 2 log(n) + n. Then with t = 1 2n+1 , dn − dn+1 = n + 1 2 log n + 1 n − 1 = 1 2t log 1 + t 1 − t − 1. Now for 0 t 1, if we subtract the second of the following expressions from the first: log(1 + t) − t = −1 2 t2 + 1 3 t3 − 1 4 t4 + · · · log(1 − t) + t = −1 2 t2 − 1 3 t3 − 1 4 t4 + · · · and divide by 2t, we get dn − dn+1 = 1 3 t2 + 1 5 t4 + 1 7 t6 + · · · ≤ 1 3 t2 + 1 3 t4 + 1 3 t6 + · · · = 1 3 t2 1 − t2 = 1 3 1 (2n + 1)2 − 1 = 1 12 1 n − 1 n + 1 . This shows that dn is decreasing and d1 − dn 1 12 (1 − 1 n ). So we may conclude dn d1 − 1 12 = 11 12 . By convergence of monotone bounded sequences, dn tends to a limit, say dn → A. For m n, dn − dm − 2 15 ( 1 2n+1 )4 + 1 12 ( 1 n − 1 m ), so we also have A dn A + 1 12n . It only remains to find A. Defining In, and then using integration by parts we have In := Z π/2 0 sinn θ dθ = − cos θ sinn−1 θ
  • 22. evaluated between θ = 0 and θ = π/2 (where it vanishes), plus ∫_0^{π/2} (n − 1) cos²θ sin^{n−2}θ dθ = (n − 1)(I_{n−2} − I_n). So I_n = ((n − 1)/n) I_{n−2}, with I_0 = π/2 and I_1 = 1. Therefore I_{2n} = (1/2)(3/4) · · · ((2n − 1)/(2n)) · (π/2) = ((2n)!/(2^n n!)^2)(π/2) and I_{2n+1} = (2/3)(4/5) · · · (2n/(2n + 1)) = (2^n n!)^2/(2n + 1)!. 12
  • 23. For θ ∈ (0, π/2), sinn θ is decreasing in n, so In is also decreasing in n. Thus 1 ≤ I2n I2n+1 ≤ I2n−1 I2n+1 = 1 + 1 2n → 1. By using n! ∼ nn+1/2 e−n+A to evaluate the term in square brackets below, I2n I2n+1 = π(2n + 1) ((2n)!)2 24n+1(n!)4 ∼ π(2n + 1) 1 ne2A → 2π e2A , which is to equal 1. Therefore A = log( √ 2π) as required. Notice we have actually shown that n! = √ 2πnn+ 1 2 e−n e(n) = S(n)e(n) where 0 (n) 1 12n . For example, 10!/S(10) = 1.008365 and e1/120 = 1.008368. Example 3.4. Suppose we toss a fair coin 2n times. The probability of equal number of heads and tails is 2n n 22n = (2n)! [2n(n!)]2 . ≈ √ 2π(2n)2n+ 1 2 e−2n h 2n √ 2πnn+ 1 2 e−n i2 = 1 √ πn . For n = 13 this is 0.156478. The exact answer is 0.154981. Compare this to the probability of extracting 26 cards from a shuffled deck and obtain- ing 13 red and 13 black. That is 26 13 26 13 52 26 = 0.2181. Do you understand why this probability is greater? 3.3 Improved Stirling’s formula In fact, (see Robbins, A Remark on Stirling’s Formula, 1955), we have √ 2πnn+ 1 2 e−n+ 1 12n+1 n! √ 2πnn+ 1 2 e−n+ 1 12n . We have already proved the right hand part, dn A + 1 12n . The left hand part of this follows from dn − dn+1 1 3 t2 + 1 32 t4 + 1 33 t6 + · · · = t2 3 − t2 ≥ 1 12 1 n + 1 12 − 1 n + 1 + 1 12 , where one can check the final inequality using Mathematica. It implies dn −A 1 12n+1 . For n = 10, 1.008300 n!/S(n) 1.008368. 13
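As a numerical illustration (a short Python sketch of my own, not part of the notes), one can check Robbins' bounds at n = 10 and compare the exact probability of equal heads and tails in 26 tosses with the 1/√(πn) estimate from Example 3.4:

from math import comb, exp, factorial, pi, sqrt

def S(n):
    """Stirling approximation sqrt(2*pi) * n^(n + 1/2) * e^(-n)."""
    return sqrt(2 * pi) * n ** (n + 0.5) * exp(-n)

n = 10
lower = S(n) * exp(1 / (12 * n + 1))
upper = S(n) * exp(1 / (12 * n))
print(lower < factorial(n) < upper)   # True: Robbins' bounds hold

m = 13
print(comb(2 * m, m) / 4 ** m)        # 0.154981..., exact P(equal heads and tails in 26 tosses)
print(1 / sqrt(pi * m))               # 0.156478..., the Stirling estimate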
  • 24. 4 Axiomatic approach Probability axioms. Properties of P. Boole’s inequality. Probabilistic method in combinatorics. Inclusion-exclusion formula. Coincidence of derangements. 4.1 Axioms of probability A probability space is a triple (Ω, F, P), in which Ω is the sample space, F is a collection of subsets of Ω, and P is a probability measure P : F → [0, 1]. To obtain a consistent theory we must place requirements on F: F1: ∅ ∈ F and Ω ∈ F. F2: A ∈ F =⇒ Ac ∈ F. F3: A1, A2, . . . ∈ F =⇒ S∞ i=1 Ai ∈ F. Each A ∈ F is a possible event. If Ω is finite then we can take F to be the set of all subsets of Ω. But sometimes we need to be more careful in choosing F, such as when Ω is the set of all real numbers. We also place requirements on P: it is to be a real-valued function defined on F which satisfies three axioms (known as the Kolmogorov axioms): I. 0 ≤ P(A) ≤ 1, for all A ∈ F. II. P(Ω) = 1. III. For any countable set of events, A1, A2, . . . , which are disjoint (i.e. Ai ∩ Aj = ∅, i 6= j), we have P ( S i Ai) = P i P(Ai). P(A) is called the probability of the event A. Note. The event ‘two heads’ is typically written as {two heads} or [two heads]. One sees written P{two heads}, P({two heads}), P(two heads), and P for P. Example 4.1. Consider an arbitrary countable set Ω = {ω1, ω2, · · · } and an arbitrary collection (pi, p2, . . .) of nonnegative numbers with sum p1 + p2 + · · · = 1. Put P(A) = X i:ωi∈A pi. Then P satisfies the axioms. The numbers (p1, p2, . . .) are called a probability distribution.. 14
  • 25. Remark. As mentioned above, if Ω is not finite then it may not be possible to let F be all subsets of Ω. For example, it can be shown that it is impossible to define a P for all possible subsets of the interval [0, 1] that will satisfy the axioms. Instead we define P for special subsets, namely the intervals [a, b], with the natural choice of P([a, b]) = b − a. We then use F1, F2, F3 to construct F as the collection of sets that can be formed from countable unions and intersections of such intervals, and deduce their probabilities from the axioms. Theorem 4.2 (Properties of P). Axioms I–III imply the following further properties: (i) P(∅) = 0. (probability of the empty set) (ii) P(Ac ) = 1 − P(A). (iii) If A ⊆ B then P(A) ≤ P(B). (monotonicity) (iv) P(A ∪ B) = P(A) + P(B) − P(A ∩ B). (v) If A1 ⊆ A2 ⊆ A3 ⊆ · · · then P ( S∞ i=1 Ai) = lim n→∞ P(An). Property (v) says that P(·) is a continuous function. Proof. From II and III: P(Ω) = P(A ∪ Ac ) = P(A) + P(Ac ) = 1. This gives (ii). Setting A = Ω gives (i). For (iii) let B = A ∪ (B ∩ Ac ) so P(B) = P(A) + P(B ∩ Ac ) ≥ P(A). For (iv) use P(A ∪ B) = P(A) + P(B ∩ Ac ) and P(B) = P(A ∩ B) + P(B ∩ Ac ). Proof of (v) is deferred to §7.1. Remark. As a consequence of Theorem 4.2 (iv) we say that P is a subadditive set function, as it is one for which P(A ∪ B) ≤ P(A) + P(B), for all A, B. It is a also a submodular function, since P(A ∪ B) + P(A ∩ B) ≤ P(A) + P(B), for all A, B. It is also a supermodular function, since the reverse inequality is true. 4.2 Boole’s inequality Theorem 4.3 (Boole’s inequality). For any A1, A2, . . . , P ∞ [ i=1 Ai ! ≤ ∞ X i=1 P(Ai) special case is P n [ i=1 Ai ! ≤ n X i=1 P(Ai) # . 15
  • 26. Proof. Let B1 = A1 and Bi = Ai Si−1 k=1 Ak. Then B1, B2, . . . are disjoint and S k Ak = S k Bk. As Bi ⊆ Ai, P( S i Ai) = P( S i Bi) = P i P(Bi) ≤ P i P(Ai). Example 4.4. Consider a sequence of tosses of biased coins. Let Ak be the event that the kth toss is a head. Suppose P(Ak) = pk. The probability that an infinite number of heads occurs is P ∞ i=1 ∞ [ k=i Ak ! ≤ P ∞ [ k=i Ak ! ≤ pi + pi+1 + · · · (by Boole’s inequality). Hence if P∞ i=1 pi ∞ the right hand side can be made arbitrarily close to 0. This proves that the probability of seeing an infinite number of heads is 0. The reverse is also true: if P∞ i=1 pi = ∞ then P(number of heads is infinite) = 1. Example 4.5. The following result is due to Erdős (1947) and is an example of the so-called probabilistic method in combinatorics. Consider the complete graph on n vertices. Suppose for an integer k, n k 21−(k 2) 1. Then it is possible to color the edges red and blue so that no subgraph of k vertices has edges of just one colour. E.g. n = 200, k = 12. Proof. Colour the edges at random, each as red or blue. In any subgraph of k vertices the probability that every edge is red is 2−(k 2). There are n k subgraphs of k vertices. Let Ai be the event that the ith such subgraph has monochrome edges. P ( S i Ai) ≤ P i P(Ai) = n k · 2 · 2−(k 2) 1. So there must be at least one way of colouring the edges so that no subgraph of k vertices has only monochrome edges. Note. If n, k satisfy the above inequality then n + 1 is a lower bound on the answer to the ‘Party problem’, i.e. what is the minimum number of guests needed to guarantee there will be either k who all know one another, or k who are all strangers to one another? The answer is the Ramsey number, R(k, k). E.g. R(3, 3) = 6, R(4, 4) = 18. 16
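The arithmetic behind the n = 200, k = 12 example is a one-line check. In the short Python sketch below (variable names are mine) the bound (n choose k) 2^{1−C(k,2)} evaluates to a number below 1, so a good colouring must exist:

from math import comb

n, k = 200, 12
bound = comb(n, k) * 2 ** (1 - comb(k, 2))
print(bound, bound < 1)   # the bound is below 1, so some two-colouring has no monochrome k-subgraph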
  • 27. 4.3 Inclusion-exclusion formula
Theorem 4.6 (Inclusion-exclusion). For any events A1, . . . , An, P(A1 ∪ · · · ∪ An) = Σ_i P(Ai) − Σ_{i1<i2} P(Ai1 ∩ Ai2) + Σ_{i1<i2<i3} P(Ai1 ∩ Ai2 ∩ Ai3) − · · · + (−1)^{n−1} P(A1 ∩ · · · ∩ An). (4.1)
Proof. The proof is by induction. It is clearly true for n = 2. Now use P(A1 ∪ · · · ∪ An) = P(A1) + P(A2 ∪ · · · ∪ An) − P(∪_{i=2}^{n}(A1 ∩ Ai)) and then apply the inductive hypothesis for n − 1.
Example 4.7 [Probability of derangement]. Two packs of cards are shuffled and placed on the table. One by one, two cards are simultaneously turned over from the top of the packs. What is the probability that at some point the two revealed cards are identical? This is a question about random permutations. A permutation of 1, . . . , n is called a derangement if no integer appears in its natural position. Suppose one of the n! permutations is picked at random. Let Ak be the event that k is in its natural position. By the inclusion-exclusion formula P(∪k Ak) = Σ_k P(Ak) − Σ_{k1<k2} P(Ak1 ∩ Ak2) + · · · + (−1)^{n−1} P(A1 ∩ · · · ∩ An) = (n choose 1)(1/n) − (n choose 2)(1/n)(1/(n − 1)) + (n choose 3)(1/n)(1/(n − 1))(1/(n − 2)) − · · · + (−1)^{n−1}(1/n!) = 1 − 1/2! + 1/3! − · · · + (−1)^{n−1}/n! ≈ 1 − e^{−1}. So the probability of at least one match is about 0.632. The probability that a randomly chosen permutation is a derangement is P(∩k Ak^c) ≈ e^{−1} = 0.368.
Example 4.8. The formula can also be used to answer a question like “what is the number of surjections from a set A of n elements to a set B of m ≤ n elements?” Answer. Let Ai be the set of those functions that do not have i ∈ B in their image. The number of functions that miss out any given set of k elements of B is (m − k)^n. Hence, by inclusion-exclusion, the number of surjections is Sn,m = m^n − (m choose 1)(m − 1)^n + (m choose 2)(m − 2)^n − · · · = Σ_{k=0}^{m} (−1)^k (m choose k)(m − k)^n. 17
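Both calculations above are easy to verify numerically. Here is a small Python sketch (the function names are mine, not from the notes) that evaluates the inclusion-exclusion sums for derangements and surjections:

from math import comb, exp, factorial

def p_derangement(n):
    """P(a uniformly random permutation of 1..n has no fixed point), via inclusion-exclusion."""
    return sum((-1) ** k / factorial(k) for k in range(n + 1))

def surjections(n, m):
    """Number of surjections from an n-element set onto an m-element set."""
    return sum((-1) ** k * comb(m, k) * (m - k) ** n for k in range(m + 1))

print(p_derangement(52), exp(-1))   # both approximately 0.3679
print(surjections(3, 2))            # 6 = 2^3 - 2 (every map except the two constant ones)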
  • 34. 5 Independence Bonferroni inequalities. Independence. Important discrete distributions (Bernoulli, binomial, Poisson, geometric and hypergeometic). Poisson approximation to binomial. 5.1 Bonferroni’s inequalities Notation. We sometimes write P(A, B, C) to mean the same as P(A ∩ B ∩ C). Bonferroni’s inequalities say that if we truncate the sum on the right hand side of the inclusion-exclusion formula (4.1) so as to end with a positive (negative) term then we have an over- (under-) estimate of P( S i Ai). For example, P(A1 ∪ A2 ∪ A3) ≥ P(A1) + P(A2) + P(A3) − P(A1A2) − P(A2A3) − P(A1A3). Corollary 5.1 (Bonferroni’s inequalities). For any events A1, . . . , An, and for any r, 1 ≤ r ≤ n, P n [ i=1 Ai ! ≤ or ≥ n X i=1 P(Ai) − n X i1i2 P(Ai1 ∩ Ai2 ) + n X i1i2i3 P(Ai1 ∩ Ai2 ∩ Ai3 ) − · · · + (−1)r−1 X i1···ir P(Ai1 ∩ · · · ∩ Air ) as r is odd or even. Proof. Again, we use induction on n. For n = 2 we have P(A1 ∪A2) ≤ P(A1)+P(A2). You should be able to complete the proof using the fact that P A1 ∪ n [ i=2 Ai ! = P(A1) + P n [ i=2 Ai ! − P n [ i=2 (Ai ∩ A1) ! . Example 5.2. Consider Ω = {1, . . . , m}, with all m outcomes equally likely. Suppose xk ∈ {1, . . . , m} and let Ak = {1, 2, . . . , xk}. So P(Ak) = xk/m and P(Aj ∩ Ak) = min{xj, xk}/m. By applying Bonferroni inequalities we can prove results like max{x1, . . . , xn} ≥ X i xi − X ij min{xi, xj}, max{x1, . . . , xn} ≤ X i xi − X ij min{xi, xj} + X ijk min{xi, xj, xk}. 18
  • 35. 5.2 Independence of two events Two events A and B are said to be independent if P(A ∩ B) = P(A)P(B). Otherwise they are said to be dependent. Notice that if A and B are independent then P(A∩Bc ) = P(A)−P(A∩B) = P(A)−P(A)P(B) = P(A)(1−P(B)) = P(A)P(Bc ), so A and Bc are independent. Reapplying this result we see also that Ac and Bc are independent, and that Ac and B are independent. Example 5.3. Two fair dice are thrown. Let A1 (A2) be the event that the first (second) die shows an odd number. Let A3 be the event that the sum of the two numbers is odd. Are A1 and A2 independent? Are A1 and A3 independent? Solution. We first calculate the probabilities of various events. Event Probability A1 18 36 = 1 2 A2 as above, 1 2 A3 6×3 36 = 1 2 Event Probability A1 ∩ A2 3×3 36 = 1 4 A1 ∩ A3 3×3 36 = 1 4 A1 ∩ A2 ∩ A3 0 Thus by a series of multiplications, we can see that A1 and A2 are independent, A1 and A3 are independent (also A2 and A3). Independent experiments. The idea of 2 independent events models that of ‘2 independent experiments’. Consider Ω1 = {α1, . . . } and Ω2 = {β1, . . . } with associated probability distributions {p1, . . . } and {q1, . . . }. Then, by ‘2 independent experiments’, we mean the sample space Ω1 × Ω2 with probability distribution P ((αi, βj)) = piqj. Now, suppose A ⊂ Ω1 and B ⊂ Ω2. The event A can be interpreted as an event in Ω1 × Ω2, namely A × Ω2, and similarly for B. Then P (A ∩ B) = X αi∈A βj ∈B piqj = X αi∈A pi X βj ∈B qj = P (A) P (B) , which is why they are called ‘independent’ experiments. The obvious generalisation to n experiments can be made, but for an infinite sequence of experiments we mean a sample space Ω1 × Ω2 × . . . satisfying the appropriate formula for all n ∈ N. 19
  • 36. 5.3 Independence of multiple events Events A1, A2, . . . are said to be independent (or if we wish to emphasise ‘mutually independent’) if for all i1 i2 · · · ir, P(Ai1 ∩ Ai2 ∩ · · · ∩ Air ) = P(Ai1 )P(Ai2 ) · · · P(Air ). Events can be pairwise independent without being (mutually) independent. In Example 5.3, P(A1) = P(A2) = P(A3) = 1/2 but P(A1 ∩ A2 ∩ A3) = 0. So A1, A2 and A3 are not independent. Here is another such example: Example 5.4. Roll three dice. Let Aij be the event that dice i and j show the same. P(A12 ∩ A13) = 1/36 = P(A12)P(A13). But P(A12 ∩ A13 ∩ A23) = 1/36 6= P(A12)P(A13)P(A23). 5.4 Important distributions As in Example 4.1, consider a sample space Ω = {ω1, ω2, . . . } (which may be finite or countable). For each ωi ∈ Ω let pi = P({ωi}). Then pi ≥ 0, for all i, and X i pi = 1. (5.1) A sequence {pi}i=1,2,... satisfying (5.1) is called a probability distribution.. Example 5.5. Consider tossing a coin once, with possible outcomes Ω = {H, T}. For p ∈ [0, 1], the Bernoulli distribution, denoted B(1, p), is P(H) = p, P(T) = 1 − p. Example 5.6. By tossing the above coin n times we obtain a sequence of Bernoulli trials. The number of heads obtained is an outcome in the set Ω = {0, 1, 2, . . . , n}. The probability of HHT · · · T is ppq · · · q. There are n k ways in which k heads occur, each with probability pk qn−k . So P(k heads) = pk = n k pk (1 − p)n−k , k = 0, 1, . . . , n. This is the binomial distribution, denoted B(n, p). Example 5.7. Suppose n balls are tossed independently into k boxes such that the probability that a given ball goes in box i is pi. The probability that there will be n1, . . . , nk balls in boxes 1, . . . , k, respectively, is n! n1!n2! · · · nk! pn1 1 · · · pnk k , n1 + · · · + nk = n. This is the multinomial distribution. 20
  • 37. Example 5.8. Consider again an infinite sequence of Bernoulli trials, with P(success) = 1 − P(failure) = p. The probability that the first success occurs after exactly k failures is pk = p(1 − p)k , k = 0, 1, . . .. This is the geometric distribution with parameter p. Since P∞ 0 pr = 1, the probability that every trial is a failure is zero. [You may sometimes see ‘geometric distribution’ used to mean the distribution of the trial on which the first success occurs. Then pk = p(1 − p)k−1 , k = 1, 2, . . . .] The geometric distribution has the memoryless property (but we leave discussion of this until we meet the exponential distribution in §16.3). Example 5.9. Consider an urn with n1 red balls and n2 black balls. Suppose n balls are drawn without replacement, n ≤ n1 + n2. The probability of drawing exactly k red balls is given by the hypergeometric distribution pk = n1 k n2 n−k n1+n2 n , max(0, n − n2) ≤ k ≤ min(n, n1). 5.5 Poisson approximation to the binomial Example 5.10. The Poisson distribution is often used to model the number of occurrences of some event in a specified time, such as the number of insurance claims suffered by an insurance company in a year. Denoted P(λ), the Poisson distribution with parameter λ 0 is pk = λk k! e−λ , k = 0, 1, . . . . Theorem 5.11 (Poisson approximation to the binomial). Suppose that n → ∞ and p → 0 such that np → λ. Then n k pk (1 − p)n−k → λk k! e−λ , k = 0, 1, . . . . Proof. Recall that (1 − a n )n → e−a as n → ∞. For convenience we write p rather than p(n). The probability that exactly k events occur is qk = n k pk (1 − p)n−k = 1 k! n(n − 1) · · · (n − k + 1) nk (np)k 1 − np n n−k → 1 k! λk e−λ , k = 0, 1, . . . since p = p(n) is such that as np(n) → λ. Remark. Each of the distributions above is called a discrete distribution because it is a probability distribution over an Ω which is finite or countable. 21
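A quick numerical comparison illustrates how good the approximation is. The Python sketch below (the parameter choices n = 1000, p = 0.002 are mine) prints the binomial and Poisson probabilities side by side:

from math import comb, exp, factorial

def binom_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    return lam ** k / factorial(k) * exp(-lam)

n, p = 1000, 0.002   # so np = 2
for k in range(5):
    print(k, round(binom_pmf(k, n, p), 5), round(poisson_pmf(k, n * p), 5))
# the two columns agree to about three decimal places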
  • 38. 6 Conditional probability Conditional probability, Law of total probability, Bayes’s formula. Screening test. Simpson’s paradox. 6.1 Conditional probability Suppose B is an event with P(B) 0. For any event A ⊆ Ω, the conditional probability of A given B is P(A | B) = P(A ∩ B) P(B) , i.e. the probability that A has occurred if we know that B has occurred. Note also that P(A ∩ B) = P(A | B)P(B) = P(B | A)P(A). If A and B are independent then P(A | B) = P(A ∩ B) P(B) = P(A)P(B) P(B) = P(A). Also P(A | Bc ) = P(A). So knowing whether or not B occurs does not affect probability that A occurs. Example 6.1. Notice that P(A | B) P(A) ⇐⇒ P(B | A) P(B). We might say that A and B are ‘attractive’. The reason some card games are fun is because ‘good hands attract’. In games like poker and bridge, ‘good hands’ tend to be those that have more than usual homogeneity, like ‘4 aces’ or ‘a flush’ (5 cards of the same suit). If I have a good hand, then the remainder of the cards are more homogeneous, and so it is more likely that other players will also have good hands. For example, in poker the probability of a royal flush is 1.539 × 10−6 . The probability the player on my right has a royal flush, given that I have looked at my cards and seen a royal flush is 1.959 × 10−6 , i.e. 1.27 times greater than before I looked at my cards. 6.2 Properties of conditional probability Theorem 6.2. 1. P (A ∩ B) = P (A | B) P (B), 2. P (A ∩ B ∩ C) = P (A | B ∩ C) P (B | C) P (C), 3. P (A | B ∩ C) = P (A∩B|C) P (B|C) , 22
  • 39. 4. the function P (◦ | B) restricted to subsets of B is a probability function on B. Proof. Results 1 to 3 are immediate from the definition of conditional probability. For result 4, note that A ∩ B ⊂ B, so P (A ∩ B) ≤ P (B) and thus P (A | B) ≤ 1. P (B | B) = 1 (obviously), so it just remains to show the Axiom III. For A1, A2, . . . which are disjoint events and subsets of B, we have P [ i Ai
  • 43. B ! = P( S i Ai ∩ B) P(B) = P( S i Ai) P(B) = P i P(Ai) P(B) = P i P(Ai ∩ B) P(B) = X i P(Ai | B). 6.3 Law of total probability A (finite or countable) collection {Bi}i of disjoint events such that S i Bi = Ω is said to be a partition of the sample space Ω. For any event A, P(A) = X i P(A ∩ Bi) = X i P(A | Bi)P(Bi) where the second summation extends only over Bi for which P(Bi) 0. Example 6.3 [Gambler’s ruin]. A fair coin is tossed repeatedly. At each toss the gambler wins £1 for heads and loses £1 for tails. He continues playing until he reaches £a or goes broke. Let px be the probability he goes broke before reaching a. Using the law of total probability: px = 1 2 px−1 + 1 2 px+1, with p0 = 1, pa = 0. Solution is px = 1 − x/a. 6.4 Bayes’ formula Theorem 6.4 (Bayes’ formula). Suppose {Bi}i is a partition of the sample space and A is an event for which P(A) 0. Then for any event Bj in the partition for which P(Bj) 0, P(Bj | A) = P(A | Bj)P(Bj) P i P(A | Bi)P(Bi) where the summation in the denominator extends only over Bi for which P(Bi) 0. 23
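Returning to Example 6.3, the solution px = 1 − x/a is easy to check by simulation. The following Python sketch (the function name and parameter values are my own choices) estimates the ruin probability for a fair game:

import random

def ruin_probability(x, a, trials=100_000, seed=0):
    """Estimate P(broke before reaching a | start with x) for a fair coin game."""
    rng = random.Random(seed)
    broke = 0
    for _ in range(trials):
        pos = x
        while 0 < pos < a:
            pos += 1 if rng.random() < 0.5 else -1
        broke += (pos == 0)
    return broke / trials

x, a = 3, 10
print(ruin_probability(x, a), 1 - x / a)   # both close to 0.7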
  • 44. Example 6.5 [Screening test]. A screening test is 98% effective in detecting a certain disease when a person has the disease. However, the test yields a false positive rate of 1% of the healthy persons tested. If 0.1% of the population have the disease, what is the probability that a person who tests positive has the disease? P(+ | D) = 0.98, P(+ | Dc ) = 0.01, P(D) = 0.001. P(D | +) = P(+ | D)P(D) P(+ | D)P(D) + P(+ | Dc)P(Dc) = 0.98 × 0.001 0.98 × 0.001 + 0.01 × 0.999 ≈ 0.09. Thus of persons who test positive only about 9% have the disease. Example 6.6 [Paradox of the two children]. (i) I have two children one of whom is a boy. (ii) I have two children one of whom is a boy born on a Thursday. Find in each case the probability that both are boys. In case (i) P(BB | BB ∪ BG) = P(BB) P(BB ∪ BG) = 1 4 1 4 + 21 2 1 2 = 1 3 . In case (ii), a child can be a girl (G), a boy born on Thursday (B∗ ) or a boy not born on a Thursday (B). P(B∗ B∗ ∪ BB∗ | B∗ B∗ ∪ B∗ B ∪ B∗ G) = P(B∗ B∗ ∪ BB∗ ) P(B∗B∗ ∪ B∗B ∪ B∗G) = 1 14 1 14 + 2 1 14 6 14 1 14 1 14 + 2 1 14 6 14 + 2 1 14 1 2 = 13 27 . 6.5 Simpson’s paradox Example 6.7 [Simpson’s paradox]. One example of conditional probability that ap- pears counter-intuitive when first encountered is the following situation. In practice, it arises frequently. Consider one individual chosen at random from 50 men and 50 women applicants to a particular College. Figures on the 100 applicants are given in the following table indicating whether they were educated at a state school or at an independent school and whether they were admitted or rejected. All applicants Admitted Rejected % Admitted State 25 25 50% Independent 28 22 56% 24
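The screening-test calculation can be packaged as a one-line application of Bayes' formula. In the Python sketch below (the function and argument names are mine) the posterior comes out at about 9%:

def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test), by Bayes' formula with a two-event partition."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

print(posterior(prior=0.001, sensitivity=0.98, false_positive_rate=0.01))   # about 0.09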
  • 45. Note that overall the probability that an applicant is admitted is 0.53, but conditional on the candidate being from an independent school the probability is 0.56 while condi- tional on being from a state school the probability is lower at 0.50. Suppose that when we break down the figures for men and women we have the following figures. Men applicants Admitted Rejected % Admitted State 15 22 41% Independent 5 8 38% Women applicants Admitted Rejected % Admitted State 10 3 77% Independent 23 14 62% It may now be seen that now for both men and women the conditional probability of being admitted is higher for state school applicants, at 0.41 and 0.77, respectively. Simpson’s paradox is not really a paradox, since we can explain it. Here is a graphical representation. Scatterplot of correlation between two continuous variables X and Y , grouped by a nominal variable Z. Different col- ors represent different levels of Z. It can also be understood from the fact that A B a b and C D c d does not imply A + C B + D a + c b + d . E.g. {a, b, c, d, A, B, C, D} = {10, 10, 80, 10, 10, 5, 11, 1}. Remark. It is appropriate for Cambridge students to know that this phenomenon was actually first recorded by Udny Yule (a fellow of St John’s College) in 1903. It is sometimes called the Yule-Simpson effect. 25
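The reversal can be reproduced directly from the admission tables. The Python sketch below (the data keys and function name are mine) computes the admission rates overall and within each sex:

admitted = {("men", "state"): 15, ("men", "independent"): 5,
            ("women", "state"): 10, ("women", "independent"): 23}
rejected = {("men", "state"): 22, ("men", "independent"): 8,
            ("women", "state"): 3, ("women", "independent"): 14}

def admission_rate(keys):
    a = sum(admitted[k] for k in keys)
    return a / (a + sum(rejected[k] for k in keys))

for school in ("state", "independent"):
    print(school,
          round(admission_rate([("men", school), ("women", school)]), 2),   # all applicants
          round(admission_rate([("men", school)]), 2),                      # men only
          round(admission_rate([("women", school)]), 2))                    # women only
# state:       0.50 overall, 0.41 men, 0.77 women
# independent: 0.56 overall, 0.38 men, 0.62 women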
  • 46. 7 Discrete random variables Probability is a continuous set function. Definition of a discrete random variable. Dis- tributions. Expectation. Expectation of binomial and Poisson. Function of a random variable. Properties of expectation. 7.1 Continuity of P A sequence of events A1, A2, . . . is increasing (or decreasing) if A1 ⊂ A2 ⊂ · · · (or A1 ⊃ A2 ⊃ · · · ). We can define a limiting event lim n→∞ An = ∞ [ 1 An or = ∞ 1 An ! . Theorem 7.1. If A1, A2, . . . is an increasing or decreasing sequence of events then lim n→∞ P(An) = P( lim n→∞ An). Proof. Suppose A1, A2, . . . is an increasing sequence. Define Bn for n ≥ 1 B1 = A1 Bn = An n−1 [ i=1 Ai ! = An ∩ Ac n−1. (Bn, n ≥ 1) are disjoint events and ∞ [ i=1 Ai = ∞ [ i=1 Bi, n [ i=1 Ai = n [ i=1 Bi P ∞ [ i=1 Ai ! = P ∞ [ i=1 Bi ! = ∞ X 1 P (Bi) (axiom III) = lim n→∞ n X 1 P (Bi) = lim n→∞ P n [ i=1 Ai ! (axiom III) = lim n→∞ P (An) 26
  • 47. Thus P lim n→∞ An = lim n→∞ P (An) . If A1, A2, . . . is a decreasing sequence then Ac 1, Ac 2, . . . is an increasing sequence. Hence P lim n→∞ Ac n = lim n→∞ P (Ac n) . Use limn→∞ Ac n = (limn→∞ An)c . Thus probability is a continuous set function. 7.2 Discrete random variables A random variable (r.v.) X, taking values in a set ΩX, is a function X : Ω → ΩX. Typically X(ω) is a real number, but it might be a member of a set, like ΩX = {H, T}. A r.v. is said to be a discrete random variable if ΩX is finite or countable. For any T ⊆ ΩX we let P(X ∈ T) = P({ω : X(ω) ∈ T}). In particular, for each x ∈ ΩX, P (X = x) = P ω:X(ω)=x pω. The distribution or probability mass function (p.m.f.) of the r.v. X is (P (X = x) , x ∈ ΩX). It is a probability distribution over ΩX. For example, if X is the number shown by the roll of a fair die, its distribution is (P(X = i) = 1/6, i = 1, . . . , 6). We call this the discrete uniform distribution over {1, . . . , 6}. Rolling a die twice, so Ω = {(i, j), 1 ≤ i, j ≤ 6}, we might then define random variables X and Y by X(i, j) = i + j and Y (i, j) = max{i, j}. Here ΩX = {i, 2 ≤ i ≤ 12}. Remark. It can be useful to put X as a subscript on p, as a reminder of the variable whose distribution this is; we write pX(x) = P(X = x). Also, we use the notation X ∼ B(n, p), for example, to indicate that X has the B(n, p) distribution. Remark. The terminology ‘random variable’ is somewhat inaccurate, since a random variable is neither random nor a variable. The word ‘random’ is appropriate because the domain of X is Ω, and we have a probability measure on subsets of Ω. Thereby we can compute P(X ∈ T) = P({ω : X(ω) ∈ T}) for any T such that {ω : X(ω) ∈ T} ∈ F. 7.3 Expectation The expectation (or mean) of a real-valued random variable X exists, and is equal to the number E [X] = X ω∈Ω pwX(ω), 27
  • 48. provided that this sum is absolutely convergent. In practice it is calculated by summing over x ∈ ΩX, as follows. E [X] = X ω∈Ω pwX(ω) = X x∈ΩX X ω:X(ω)=x pωX(ω) = X x∈ΩX x X ω:X(ω)=x pω = X x∈ΩX xP (X = x) . Absolute convergence allows the sum to be taken in any order. But if X x∈ΩX x≥0 xP (X = x) = ∞ and X x∈ΩX x0 xP (X = x) = −∞ then E [X] is undefined. When defined, E [X] is always a constant. If X is a positive random variable and if P ω∈Ω pωX(ω) = ∞ we write E [X] = ∞. Example 7.2. We calculate the expectation of some standard distributions. Poisson. If pX(r) = P (X = r) = (λr /r!)e−λ , r = 0, 1, . . . , then E [X] = λ. E [X] = ∞ X r=0 r λr r! e−λ = λe−λ ∞ X r=1 λr−1 (r − 1)! = λe−λ eλ = λ. Binomial. If pX(r) = P(X = r) = n r pr (1 − p)n−r , r = 0, . . . , n, then E [X] = np. E [X] = n X r=0 rpr (1 − p)n−r n r = n X r=0 r n! r!(n − r)! pr (1 − p)n−r = np n X r=1 (n − 1)! (r − 1)!(n − r)! pr−1 (1 − p)n−r = np n−1 X r=0 (n − 1)! r!(n − 1 − r)! pr (1 − p)n−1−r = np n−1 X r=0 n − 1 r pr (1 − p)n−1−r = np. 28
  • 49. 7.4 Function of a random variable Composition of f : R → R and X defines a new random variable f(X) given by f(X)(ω) = f(X(ω)). Example 7.3. If a, b and c are constants, then a + bX and (X − c)2 are random variables defined by (a + bX)(ω) = a + bX(ω) and (X − c)2 (ω) = (X(ω) − c)2 . 7.5 Properties of expectation Theorem 7.4. 1. If X ≥ 0 then E [X] ≥ 0. 2. If X ≥ 0 and E [X] = 0 then P (X = 0) = 1. 3. If a and b are constants then E [a + bX] = a + bE [X]. 4. For any random variables X, Y then E [X + Y ] = E [X] + E [Y ]. Properties 3 and 4 show that E is a linear operator. 5. E [X] is the constant which minimizes E h (X − c) 2 i . Proof. 1. X ≥ 0 means X(ω) ≥ 0 for all ω ∈ Ω. So E [X] = X ω∈Ω pωX(ω) ≥ 0. 2. If there exists ω ∈ Ω with pω 0 and X(ω) 0 then E [X] 0, therefore P (X = 0) = 1. 3. E [a + bX] = X ω∈Ω (a + bX(ω)) pω = a X ω∈Ω pω + b X ω∈Ω pωX(ω) = a + bE [X] . 4. X ω p(ω)[X(ω) + Y (ω)] = X ω p(ω)X(ω) + X ω p(ω)Y (ω)]. 5. E (X − c)2 = E h (X − E [X] + E [X] − c)2 i = E h (X − E [X])2 + 2(X − E [X])(E [X] − c) + (E [X] − c)2 i = E (X − E [X])2 + 2E X − E [X] (E [X] − c) + (E [X] − c)2 = E (X − E [X])2 + (E [X] − c)2 . This is clearly minimized when c = E [X]. 29
  • 50. 8 Further functions of random variables Expectation of sum is sum of expectations. Variance. Variance of binomial, Poisson and geometric random variables. Indicator random variable. Reproof of inclusion-exclusion formula using indicator functions. *Zipf’s law*. 8.1 Expectation of sum is sum of expectations Henceforth, random variables are assumed to be real-valued whenever the context makes clear that this is required. It is worth repeating Theorem 7.4, 4. This fact is very useful. Theorem 8.1. For any random variables X1, X2, . . . , Xn, for which all the following expectations exist, E n X i=1 Xi # = n X i=1 E [Xi] . Proof. X ω p(ω) h X1(ω) + · · · + Xn(ω) i = X ω p(ω)X1(ω) + · · · + X ω p(ω)Xn(ω). 8.2 Variance The variance of a random variable X is defined as Var X = E h (X − E[X])2 i , (which we below show = E X2 − E [X] 2 ). The standard deviation is √ Var X. Theorem 8.2 (Properties of variance). (i) Var X ≥ 0. If Var X = 0, then P (X = E [X]) = 1. Proof. From Theorem 7.4, properties 1 and 2. (ii) If a, b are constants, Var (a + bX) = b2 Var X. Proof. Var(a + bX) = E (a + bX − a − bE [X])2 = b2 E (X − E [X])2 = b2 Var X. (iii) Var X = E X2 − E [X] 2 . 30
  • 51. Proof. E h (X − E[X])2 i = E X2 − 2XE [X] + (E [X])2 = E X2 − 2E [X] E [X] + E [X] 2 = E X2 − E [X] 2 Binomial. If X ∼ B(n, p) then Var(X) = np(1 − p). E[X(X − 1)] = n X r=0 r(r − 1) n! r!(n − r)! pr (1 − p)n−r = n(n − 1)p2 n X r=2 n − 2 r − 2 pr−2 (1 − p)(n−2)−(r−2) = n(n − 1)p2 . Hence Var(X) = n(n − 1)p2 + np − (np)2 = np(1 − p). Poisson. If X ∼ P(λ) then Var(X) = λ (from the binomial, by letting p → 0, np → λ.) See also proof in Lecture 12. Geometric. If X has the geometric distribution P (X = r) = pqr with r = 0, 1, · · · and p + q = 1, then E [X] = q/p and Var X = q/p2 . E [X] = ∞ X r=0 rpqr = pq ∞ X r=0 rqr−1 = pq ∞ X r=0 d dq (qr ) = pq d dq 1 1 − q = pq(1 − q)−2 = q p . The r.v. Y = X + 1 with the ‘shifted-geometric distribution’ has E[Y ] = 1/p. E X2 = ∞ X r=0 r2 pqr = pq ∞ X r=1 r(r + 1)qr−1 − ∞ X r=1 rqr−1 ! = pq 2 (1 − q)3 − 1 (1 − q)2 = 2q p2 − q p Var X = E X2 − E [X] 2 = 2q p2 − q p − q2 p2 = q p2 . Also, Var Y = q/p2 , since adding a constant does not change the variance. 31
  • 52. 8.3 Indicator random variables The indicator function I[A] of an event A ⊂ Ω is the function I[A](w) = ( 1, if ω ∈ A; 0, if ω / ∈ A. (8.1) I[A] is a random variable. It may also be written IA. It has the following properties. 1. E [I[A]] = P ω∈Ω pωI[A](w) = P (A). 2. I[Ac ] = 1 − I[A]. 3. I[A ∩ B] = I[A]I[B]. 4. I[A ∪ B] = I[A] + I[B] − I[A]I[B]. Proof. I[A ∪ B](ω) = 1 if ω ∈ A or ω ∈ B I[A ∪ B](ω) = I[A](ω) + I[B](ω) − I[A]I[B](ω) Example 8.3. Suppose n ≥ 2 couples are seated at random around a table with men and women alternating. Let N be the number of husbands seated next to their wives. Calculate E [N] and the Var(N). Let Ai = {couple i are together}. N = n X i=1 I[Ai] E [N] = E n X i=1 I[Ai] # = n X i=1 E I[Ai] = n X i=1 2 n = n 2 n = 2 E N2 = E   n X i=1 I[Ai] !2   = E   n X i=1 I[Ai]2 + 2 X ij I[Ai]I[Aj]   = nE[I[Ai]2 ] + n(n − 1)E (I[A1]I[A2]) E I[Ai]2 = E [I[Ai]] = 2 n E [(I[A1]I[A2])] = E [I[A1 ∩ A2]] = P (A1 ∩ A2) = P (A1) P (A2 | A1) = 2 n 1 n − 1 1 n − 1 + n − 2 n − 1 2 n − 1 Var N = E N2 − E [N] 2 = n 2 n + n(n − 1) 2 n 2n − 3 (n − 1)2 − 22 = 2(n − 2) n − 1 . 32
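Example 8.3 can be checked by simulation. The sketch below (a Python illustration of my own, under the usual model that the men's and women's seating orders are independent uniformly random permutations) estimates E[N]:

import random

def simulate_couples(n, trials=50_000, seed=0):
    """Average number of couples seated adjacently when men and women alternate at random."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        men = list(range(n)); rng.shuffle(men)      # men occupy seats 0, 2, ..., 2n-2
        women = list(range(n)); rng.shuffle(women)  # women occupy seats 1, 3, ..., 2n-1
        # the woman in seat 2i+1 sits next to the men in seats 2i and 2i+2 (mod 2n)
        total += sum(women[i] == men[i] or women[i] == men[(i + 1) % n] for i in range(n))
    return total / trials

print(simulate_couples(10))   # close to E[N] = 2, which holds for every n >= 2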
  • 53. 8.4 Reproof of inclusion-exclusion formula Proof. Let Ij be an indicator variable for the event Aj. Let Sr = X i1i2···ir Ii1 Ii2 · · · Iir sr = ESr = X i1i2···ir P(Ai1 ∩ · · · ∩ Air ). Then 1 − Qn j=1(1 − Ij) = S1 − S2 + · · · + (−1)n−1 Sn P Sn j=1 Aj = E h 1 − Qn j=1(1 − Ij) i = s1 − s2 + · · · + (−1)n−1 sn. 8.5 Zipf’s law (Not examinable.) The most common word in English is the, which occurs about one- tenth of the time in a typical text; the next most common word is of, which occurs about one-twentieth of the time; and so forth. It appears that words occur in frequencies proportional to their ranks. The following table is from Darwin’s Origin of Species. This rule, called Zipf’s Law, has also been found to apply in such widely varying places as the wealth of individuals, the size of cities, and the amount of traffic on webservers. Suppose we have a social network of n people and the incremental value that a person obtains from other people being part of a network varies as Zipf’s Law predicts. So the total value that one person obtains is proportional to 1 + 1/2 + · · · + 1/(n − 1) ≈ log n. Since there are n people, the total value of the social network is n log n. This is empirically a better estimate of network value than Metcalfe’s Law, which posits that the value of the network grows as n2 because each of n people can connect with n − 1 others. It has been suggested that the misapplication of Metcalfe’s Law was a contributor to the inflated pricing of Facebook shares. 33
  • 54. 9 Independent random variables Independence of random variables and properties. Variance of a sum. Efron’s dice. *Cycle lengths in a random permutation*. *Names in boxes problem*. 9.1 Independent random variables Discrete random variables X1, . . . , Xn are independent if and only if for any x1, . . . , xn P (X1 = x1, X2 = x2, . . . , Xn = xn) = n Y i=1 P (Xi = xi) . Theorem 9.1 (Preservation of independence). If X1, . . . , Xn are independent random variables and f1, f2 . . . , fn are functions R → R then f1(X1), . . . , fn(Xn) are indepen- dent random variables. Proof. P(f1(X1) = y1, . . . , fn(Xn) = yn) = X x1:f1(x1)=y1 · · xn:fn(xn)=yn P (X1 = x1, . . . , Xn = xn) = n Y i=1 X xi:fi(xi)=yi P (Xi = xi) = n Y i=1 P (fi(Xi) = yi) . Theorem 9.2 (Expectation of a product). If X1, . . . , Xn are independent random variables all of whose expectations exist then: E n Y i=1 Xi # = n Y i=1 E [Xi] . Proof. Write Ri for RXi (or ΩXi ), the range of Xi. E n Y i=1 Xi # = X x1∈R1 · · · X xn∈Rn x1 · · · xnP (X1 = x1, X2 = x2, . . . , Xn = xn) = n Y i=1 X xi∈Ri xiP (Xi = xi) ! = n Y i=1 E [Xi] . Notes. (i) In Theorem 8.1 we had E[ Pn i=1 Xi] = Pn i=1 EXi without requiring independence. (ii) In general, Theorems 8.1 and 9.2 are not true if n is replaced by ∞. 34
  • 55. Theorem 9.3. If X1, . . . , Xn are independent random variables, f1, . . . , fn are func- tions R → R, and {E [fi(Xi)]}i all exist, then: E n Y i=1 fi(Xi) # = n Y i=1 E [fi(Xi)] . Proof. This follow from the previous two theorems. 9.2 Variance of a sum Theorem 9.4. If X1, . . . , Xn are independent random variables then: Var n X i=1 Xi ! = n X i=1 Var Xi. Proof. In fact, we only need pairwise independence. Var n X i=1 Xi ! = E   n X i=1 Xi !2   − E n X i=1 Xi !2 = E   X i X2 i + X i6=j XiXj   − X i E[Xi] !2 = X i E X2 i + X i6=j E [XiXj] − X i E [Xi] 2 − X i6=j E [Xi] E [Xj] = X i E X2 i − E [Xi] 2 = n X i=1 Var Xi. Corollary 9.5. If X1, . . . , Xn are independent identically distributed random variables then Var 1 n n X i=1 Xi ! = 1 n Var Xi. Proof. Var 1 n n X i=1 Xi ! = 1 n2 Var X Xi = 1 n2 n X i=1 Var Xi = 1 n Var Xi. 35
  • 56. Example 9.6. If X1, . . . , Xn are independent, identically distributed (i.i.d.) Bernoulli random variables, ∼ B(1, p), then Y = X1 + · · · + Xn is a binomial random variable, ∼ B(n, p). Since Var(Xi) = EX2 i − (EXi)2 = p − p2 = p(1 − p), we have Var(Y ) = np(1 − p). Example 9.7 [Experimental Design]. Two rods of unknown lengths a, b. A rule can measure the length but with error having 0 mean (unbiased) and variance σ2 . Errors are independent from measurement to measurement. To estimate a, b we could take separate measurements A, B of each rod. E [A] = a Var A = σ2 , E [B] = b Var B = σ2 Can we do better using two measurements? Yes! Measure a + b as X and a − b as Y E [X] = a + b, Var X = σ2 E [Y ] = a − b, Var Y = σ2 E X + Y 2 = a, Var X + Y 2 = 1 2 σ2 E X − Y 2 = b, Var X − Y 2 = 1 2 σ2 So this is better. 9.3 Efron’s dice Example 9.8 [Efron’s dice]. Consider nonstandard dice: If each of the dice is rolled with respective outcomes A, B, C and D then P(A B) = P(B C) = P(C D) = P(D A) = 2 3 . It is good to appreciate that such non-transitivity can happen. Of course we can define other ordering relations between random variables that are transitive. The ordering defined by X ≥E Y iff EX ≥ EY , is called expectation ordering. The ordering defined by X ≥ st Y iff P(X ≥ t) ≥ P(Y ≥ t) for all t is called stochastic ordering. We say more about this in §17.3. 36
  • 57. 9.4 Cycle lengths in a random permutation Any permutation of 1, 2, . . . , n can be decomposed into cycles. For example, if (1,2,3,4) is permuted to (3,2,1,4) this is decomposed as (3,1) (2) and (4). It is the composition of one 2-cycle and two 1-cycles. • What is the probability that a given element lies in a cycle of length m (an m-cycle)? Answer: n − 1 n · n − 2 n − 1 · · · n − m + 1 n − m + 2 · 1 n − m + 1 = 1 n . • What is the expected number of m-cycles? Let Ii be an indicator for the event that i is in an m-cycle. Answer: 1 m E Pn i=1 Ii = 1 m n 1 n = 1 m . • Suppose m n/2. Let pm be the probability that an m-cycle exists. Since there can be at most one cycle of size m n/2, pm · 1 + (1 − pm) · 0 = E(number of m-cycles) = 1 m =⇒ pm = 1 m . Hence the probability of some large cycle of size m n/2 is n X m:mn/2 pm ≤ 1 dn/2e + · · · + 1 n ≈ log 2 = 0.6931. Names in boxes problem. Names of 100 prisoners are placed in 100 wooden boxes, one name to a box, and the boxes are lined up on a table in a room. One by one, the prisoners enter the room; each may look in at most 50 boxes, but must leave the room exactly as he found it and is permitted no further communication with the others. The prisoners may plot their strategy in advance, and they are going to need it, because unless every prisoner finds his own name all will subsequently be executed. Find a strategy with which their probability of success exceeds 0.30. Answer: The prisoners should use the following strategy. Prisoner i should start by looking in box i. If he finds the name of prisoner i1 he should next look in box i1. He continues in this manner looking through a sequence of boxes i, i1, i2, . . . , i49. His own name is contained in the box which points to the box where he started, namely i, so he will find his own name iff (in the random permutation of names in boxes) his name lies in a cycle of length ≤ 50. Every prisoners will find his name in a cycle of length ≤ 50 provided there is no large cycle. This happens with probability of 1 − 0.6931 0.30. 37
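The cycle-following strategy is easy to simulate. Here is a Python sketch (the function name and trial count are mine) whose estimated success probability is about 0.31, consistent with the bound 1 − 0.6931 > 0.30 in the text:

import random

def prisoners_succeed(rng, n=100, limit=50):
    """One trial of the cycle-following strategy: True if every prisoner finds his own name."""
    boxes = list(range(n))
    rng.shuffle(boxes)                 # boxes[i] = name placed in box i
    for prisoner in range(n):
        box = prisoner
        for _ in range(limit):
            if boxes[box] == prisoner:
                break
            box = boxes[box]
        else:
            return False               # this prisoner ran out of looks
    return True

rng = random.Random(1)
trials = 10_000
print(sum(prisoners_succeed(rng) for _ in range(trials)) / trials)   # about 0.31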
  • 58. 10 Inequalities Jensen’s, AM–GM and Cauchy-Schwarz inequalities. Covariance. X, Y independent =⇒ Cov(X, Y ) = 0, but not conversely. *Information entropy*. 10.1 Jensen’s inequality A function f : (a, b) → R is convex if for all x1, x2 ∈ (a, b) and λ1 ≥ 0, λ2 ≥ 0 with λ1 + λ2 = 1, λ1f(x1) + λ2f(x2) ≥ f(λ1x1 + λ2x2). It is strictly convex if strict inequality holds when x1 6= x2 and 0 λ1 1. x1 x2 λ1x1 + λ2x2 f(λ1x1 + λ2x2) λ1f(x1) + λ2f(x2) chord lies above the function. A function f is concave (strictly concave) if −f is convex (strictly convex). Fact. If f is a twice differentiable function and f00 (x) ≥ 0 for all x ∈ (a, b) then f is convex [exercise in Analysis I]. It is strictly convex if f00 (x) 0 for all x ∈ (a, b). Theorem 10.1 (Jensen’s inequality). Let f : (a, b) → R be a convex function. Then n X i=1 pif(xi) ≥ f n X i=1 pixi ! for all x1, . . . , xn ∈ (a, b) and p1, . . . , pn ∈ (0, 1) such that P i pi = 1. Furthermore if f is strictly convex then equality holds iff all the xi are equal. Jensen’s inequality is saying that if X takes finitely many values then E[f(X)] ≥ f(E[X]). 38
  • 59. Proof. Use induction. The case n = 2 is the definition of convexity. Suppose that the theorem is true for n − 1. Let p = (p1, . . . , pn) be a distribution (i.e. pi ≥ 0 for all i and P i pi = 1). The inductive step that proves the theorem is true for n is f(p1x1 + · · · + pnxn) = f p1x1 + (p2 + · · · + pn) p2x2 + · · · + pnxn p2 + · · · + pn ≤ p1f(x1) + (p2 + · · · + pn)f p2x2 + · · · + pnxn p2 + · · · + pn ≤ p1f(x1) + (p2 + · · · + pn) n X i=2 pi p2 + · · · + pn f(xi) = n X i=1 pif(xi). 10.2 AM–GM inequality Corollary 10.2 (AM–GM inequality). Given positive real numbers x1, . . . , xn, n Y i=1 xi !1/n ≤ 1 n n X i=1 xi. (10.1) Proof. The function f(x) = − log x is convex. Consider a random variable X such that P(X = xi) = 1/n, i = 1, . . . , n. By using Jensen’s inequality, (10.1) follows because Ef(X) ≥ f(EX) =⇒ 1 n X i − log xi ≥ − log 1 n X i xi ! . 10.3 Cauchy-Schwarz inequality Theorem 10.3. For any random variables X and Y , E[XY ]2 ≤ E[X2 ]E[Y 2 ]. Proof. Suppose EY 2 0 (else Y = 0). Let W = X − Y E[XY ]/E[Y 2 ]. E[W2 ] = E[X2 ] − 2 E[XY ]2 E[Y 2] + E[XY ]2 E[Y 2] ≥ 0, from which the Cauchy-Schwarz inequality follows. Equality occurs only if W = 0. 39
  • 60. From Paintings, Plane Tilings, Proofs, R. B. Nelsen Lewis 10.4 Covariance and correlation For two random variable X and Y , we define the covariance between X and Y as Cov(X, Y ) = E[(X − EX)(Y − EY )]. Properties of covariance (easy to prove, so proofs omitted) are: • If c is a constant, · Cov(X, c) = 0, · Cov(X + c, Y ) = Cov(X, Y ). • Cov(X, Y ) = Cov(Y, X). • Cov(X, Y ) = EXY − EXEY . • Cov(X + Z, Y ) = Cov(X, Y ) + Cov(Z, Y ). • Cov(X, X) = Var(X). • Var(X + Y ) = Var(X) + Var(Y ) + 2 Cov(X, Y ). • If X and Y are independent then Cov(X, Y ) = 0. However, as the following example shows, the converse is not true. 40
  • 61. Example 10.4. Suppose that (X, Y ) is equally likely to take three possible values (2, 0), (−1, 1), (−1, −1) Then EX = EY = 0 and EXY = 0, so Cov(X, Y ) = 0. But X = 2 ⇐⇒ Y = 0, so X and Y are not independent. The correlation coefficient (or just the correlation) between random variables X and Y with Var(X) 0 and Var(Y ) 0 is Corr(X, Y ) = Cov(X, Y ) p Var(X) Var(Y ) . Corollary 10.5. | Corr(X, Y )| ≤ 1. Proof. Apply Cauchy-Schwarz to X − EX and Y − EY . 10.5 Information entropy Suppose an event A occurs with probability P(A) = p. How surprising is it? Let’s try to invent a ‘surprise function’, say S(p). What properties should this have? Since a certain event is unsurprising we would like S(1) = 0 . We should also like S(p) to be decreasing and continuous in p. If A and B are independent events then we should like S(P(A ∩ B)) = S(P(A)) + S(P(B)). It turns out that the only function with these properties is one of the form S(p) = −c loga p, with c 0. Take c = 1, a = 2. If X is a random variable that takes values 1, . . . , n with probabilities p1, . . . , pn then on average the surprise obtained on learning X is H(X) = ES(pX) = − X i pi log2 pi. This is the information entropy of X. It is an important quantity in information theory. The ‘log’ can be taken to any base, but using base 2, nH(X) is roughly the expected number of binary bits required to report the result of n experiments in which X1, . . . , Xn are i.i.d. observations from distribution (pi, 1 ≤ i ≤ n) and we encode our reporting of the results of experiments in the most efficient way. Let’s use Jensen’s inequality to prove the entropy is maximized by p1 = · · · = pn = 1/n. Consider f(x) = − log x, which is a convex function. We may assume pi 0 for all i. Let X be a r.v. such that Xi = 1/pi with probability pi. Then − n X i=1 pi log pi = −Ef(X) ≤ −f(EX) = −f(n) = log n = − n X i=1 1 n log 1 n . 41
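As a small illustration of the surprise/entropy formula (not part of the notes; the example distributions are arbitrary), here is a short computation of H(X); the uniform distribution on n values attains the maximum log2 n.

```python
from math import log2

def entropy(p):
    # H(X) = -sum_i p_i log2 p_i, with the convention 0 * log2(0) = 0.
    return -sum(x * log2(x) for x in p if x > 0)

print(entropy([0.5, 0.5]))      # 1.0 bit
print(entropy([0.9, 0.1]))      # about 0.469 bits
print(entropy([0.25] * 4))      # 2.0 bits = log2(4), the maximum for n = 4
```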
  • 62. 11 Weak law of large numbers Markov and Chebyshev inequalities. Weak law of large numbers. *Weierstrass approx- imation theorem*. *Benford’s law*. 11.1 Markov inequality Theorem 11.1. If X is a random variable with E|X| ∞ and a 0, then P(|X| ≥ a) ≤ E|X| a . Proof. I[{|X| ≥ a}] ≤ |X|/a (as the left-hand side is 0 or 1, and if 1 then the right-hand side is at least 1). So P(|X| ≥ a) = E h I[{|X| ≥ a}] i ≤ E h |X|/a i = E|X| a . 11.2 Chebyshev inequality Theorem 11.2. If X is a random variable with EX2 ∞ and 0, then P(|X| ≥ ) ≤ E[X2 ] 2 . Proof. Similarly to the proof of the Markov inequality, I[{|X| ≥ }] ≤ X2 2 . Take expected value. 1. The result is “distribution free” because no assumption need be made about the distribution of X (other than EX2 ∞). 2. It is the “best possible” inequality, in the sense that for some X the inequality becomes an equality. Take X = −, 0, and , with probabilities c/(22 ), 1 − c/2 and c/(22 ), respectively. Then EX2 = c P(|X| ≥ ) = c 2 = EX2 2 . 3. If µ = EX then applying the inequality to X − µ gives P(|X − µ| ≥ ) ≤ Var X 2 . 42
11.3 Weak law of large numbers

Theorem 11.3 (WLLN). Let X1, X2, . . . be a sequence of independent identically distributed (i.i.d.) random variables with mean µ and variance σ² < ∞. Let Sn = X1 + · · · + Xn. Then for all ε > 0,

  P( |Sn/n − µ| ≥ ε ) → 0  as n → ∞.

We write this as Sn/n →p µ, which reads as 'Sn/n tends in probability to µ'.

Proof. By Chebyshev's inequality,

  P( |Sn/n − µ| ≥ ε ) ≤ E(Sn/n − µ)² / ε²
                      = E(Sn − nµ)² / (n²ε²)   (properties of expectation)
                      = Var(Sn) / (n²ε²)       (since E Sn = nµ)
                      = nσ² / (n²ε²)           (since Var Sn = nσ²)
                      = σ² / (nε²) → 0.

Remark. We cannot relax the requirement that X1, X2, . . . be independent. For example, we could not take X1 = X2 = · · · , where X1 is equally likely to be 0 or 1.

Example 11.4. Repeatedly toss a coin that comes up heads with probability p. Let Ai be the event that the ith toss is a head. Let Xi = I[Ai]. Then

  Sn/n = (number of heads) / (number of trials).

Now µ = E[I[Ai]] = P(Ai) = p, so the WLLN states that

  P( |Sn/n − p| ≥ ε ) → 0  as n → ∞,

which recovers the intuitive (or frequentist) interpretation of probability.
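The coin-tossing example is easy to check by simulation. The following sketch is my own, with arbitrary choices p = 0.3 and ε = 0.05; it also prints Chebyshev's (often crude) bound σ²/(nε²), with σ² = p(1 − p).

```python
import random

p, eps, trials = 0.3, 0.05, 1000

def proportion_of_heads(n):
    return sum(random.random() < p for _ in range(n)) / n

for n in [10, 100, 1000]:
    bad = sum(abs(proportion_of_heads(n) - p) >= eps for _ in range(trials))
    bound = p * (1 - p) / (n * eps**2)   # Chebyshev bound on P(|S_n/n - p| >= eps)
    print(n, bad / trials, min(1.0, bound))
```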
Strong law of large numbers

Why do we use the word 'weak'? Because there is also a 'strong' form of a law of large numbers, which is

  P( Sn/n → µ as n → ∞ ) = 1.

This is not the same as the weak form. What does this mean? The idea is that ω ∈ Ω determines Sn/n, n = 1, 2, . . . , as a sequence of real numbers. Hence it either tends to µ or it does not. The strong law says

  P( ω : Sn(ω)/n → µ as n → ∞ ) = 1.

We write this as Sn/n →a.s. µ, which is read as 'Sn/n tends almost surely to µ'.

11.4 Probabilistic proof of Weierstrass approximation theorem

Theorem 11.5 (not examinable). If f is a continuous real-valued function on the interval [0, 1] and ε > 0, then there exists a polynomial function p such that |p(x) − f(x)| < ε for all x ∈ [0, 1].

Proof. From Analysis I: a continuous function on [0, 1] is bounded. So assume, WLOG, |f(x)| ≤ 1. From Analysis II: a continuous function on [0, 1] is uniformly continuous. This means that there exist δ1, δ2, . . . such that if x, y ∈ [0, 1] and |x − y| < δm then |f(x) − f(y)| < 1/m.

We define the so-called Bernstein polynomials:

  b_{k,n}(x) = C(n, k) x^k (1 − x)^{n−k},  0 ≤ k ≤ n,

where C(n, k) is the binomial coefficient. Then take pn(x) = Σ_{k=0}^n f(k/n) b_{k,n}(x).

Fix an x ∈ [0, 1] and let X be a binomial random variable with distribution B(n, x). Notice that pn(x) = E[f(X/n)]. Let A be the event {|f(X/n) − f(x)| ≥ 1/m}. Then
  |pn(x) − f(x)| = | E[ f(X/n) − f(x) ] |
                 ≤ (1/m) P(Aᶜ) + E[ |f(X/n) − f(x)| | A ] P(A)
                 ≤ 1/m + 2 P(A).

By using Chebyshev's inequality and the fact that A ⊆ {|X/n − x| ≥ δm},

  P(A) ≤ P( |X/n − x| ≥ δm ) ≤ x(1 − x) / (n δm²) ≤ 1 / (4n δm²).

Now choose m and n large enough so that 1/m + 1/(2n δm²) < ε, and we have |pn(x) − f(x)| < ε.
We have shown this for all x ∈ [0, 1].

11.5 Benford's law

A set of numbers satisfies Benford's law if the probability that a number begins with the digit k is log10((k + 1)/k). This is true, for example, of the Fibonacci numbers: {Fn} = {1, 1, 2, 3, 5, 8, . . . }. Let Ak(n) be the number of the first n Fibonacci numbers that begin with a k. See the table for n = 10000. The fit is extremely good.

  k   Ak(10000)   log10((k+1)/k)
  1   3011        0.30103
  2   1762        0.17609
  3   1250        0.12494
  4    968        0.09691
  5    792        0.07918
  6    668        0.06695
  7    580        0.05799
  8    513        0.05115
  9    456        0.04576

'Explanation'. Let α = (1 + √5)/2. It is well-known that when n is large, Fn ≈ αⁿ. So Fn and αⁿ have the same first digit. A number m begins with the digit k if the fractional part of log10 m lies in the interval [log10 k, log10(k + 1)). Let ]x[ = x − ⌊x⌋ denote the fractional part of x. A famous theorem of Weyl states the following: if β is irrational, then the sequence of fractional parts (]nβ[)_{n≥1} is uniformly distributed. This result is certainly very plausible, but a proper proof is beyond our scope. We apply it with β = log10 α, noting that the fractional part of log10 Fn is then ]nβ[.

Benford's law also arises when one is concerned with numbers whose measurement scale is arbitrary. For example, whether we measure the areas of world lakes in km² or miles², the distribution of the first digit should surely be the same. The distribution of the first digit of X is determined by the distribution of the fractional part of log10 X. Given a constant c, the distribution of the first digit of cX is determined by the distribution of the fractional part of log10 cX = log10 X + log10 c. The uniform distribution is the only distribution on [0, 1] that does not change when a constant is added to it (mod 1). So if we are to have scale invariance then the fractional part of log10 X must be uniformly distributed, and so must lie in [0, 0.3010] with probability 0.3010.
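The Fibonacci figures in the table are easy to reproduce; the following check is my own (not part of the notes) and uses exact integer arithmetic, so there is no rounding issue for large Fn.

```python
from math import log10

n = 10000
counts = {k: 0 for k in range(1, 10)}
a, b = 1, 1                       # F_1, F_2
for _ in range(n):
    counts[int(str(a)[0])] += 1   # leading digit of the current Fibonacci number
    a, b = b, a + b

for k in range(1, 10):
    print(k, counts[k], round(log10((k + 1) / k), 5))
```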
  • 111. 12 Probability generating functions Distribution uniquely determined by p.g.f. Abel’s lemma. The p.g.f. of a sum of random variables. Tilings. *Dyck words*. 12.1 Probability generating function Consider a random variable X, taking values 0, 1, 2, . . . . Let pr = P (X = r), r = 0, 1, 2, . . . . The probability generating function (p.g.f.) of X, or of the distribution (pr, r = 0, 1, 2, . . . ), is p(z) = E zX = ∞ X r=0 P (X = r) zr = ∞ X r=0 przr . Thus p(z) is a polynomial or a power series. As a power series it is convergent for |z| ≤ 1, by comparison with a geometric series, and |p(z)| ≤ X r pr |z| r ≤ X r pr = 1. We can write pX(z) when we wish to give a reminder that this is the p.g.f. of X. Example 12.1 [A die]. pr = 1 6 , r = 1, . . . , 6, p(z) = E zX = 1 6 z + z2 + · · · + z6 = 1 6 z 1 − z6 1 − z . Theorem 12.2. The distribution of X is uniquely determined by the p.g.f. p(z). Proof. We find p0 from p0 = p(0). We know that we can differentiate p(z) term by term for |z| ≤ 1. Thus p0 (z) = p1 + 2p2z + 3p3z2 + · · · p0 (0) = p1. Repeated differentiation gives di dzi p(z) = p(i) (z) = ∞ X r=i r! (r − i)! przr−i and so p(i) (0) = i!pi. Thus we can recover p0, p1, . . . from p(z). Theorem 12.3 (Abel’s Lemma). E [X] = lim z→1 p0 (z). 46
  • 112. Proof. First prove ‘≥’. For 0 ≤ z ≤ 1, p0 (z) is a nondecreasing function of z, and p0 (z) = ∞ X r=1 rprzr−1 ≤ ∞ X r=1 rpr = E [X] , so p0 (z) is bounded above. Hence limz→1 p0 (z) ≤ E [X]. Now prove ‘≤’. Choose ≥ 0. Let N be large enough that PN r=1 rpr ≥ E [X]−. Then E [X] − ≤ N X r=1 rpr = lim z→1 N X r=1 rprzr−1 ≤ lim z→1 ∞ X r=1 rprzr−1 = lim z→1 p0 (z). Since this is true for all ≥ 0, we have E [X] ≤ limz→1 p0 (z). Usually, p0 (z) is continuous at z = 1, and then E [X] = p0 (1). Similarly we have the following. Theorem 12.4. E[X(X − 1)] = lim z→1 p00 (z). Proof. Proof is the same as Abel’s Lemma but with p00 (z) = ∞ X r=2 r(r − 1)pzr−2 . Example 12.5. Suppose X has the Poisson distribution with parameter λ. P (X = r) = λr r! e−λ , r = 0, 1, . . . . Then its p.g.f. is E zX = ∞ X r=0 zr λr r! e−λ = e−λz e−λ = e−λ(1−z) . To calculate the mean and variance of X: p0 (z) = λe−λ(1−z) , p00 (z) = λ2 e−λ(1−z) . So E [X] = lim z→1 p0 (z) = p0 (1) = λ (since p0 (z) continuous at z = 1) E [X(X − 1)] = p00 (1) = λ2 Var X = E X2 − E [X] 2 = E [X(X − 1)] + E [X] − E [X] 2 = λ2 + λ − λ2 = λ. 47
  • 113. Theorem 12.6. Suppose that X1, X2, . . . , Xn are independent random variables with p.g.fs p1(z), p2(z), . . . , pn(z). Then the p.g.f. of X1 + X2 + · · · + Xn is p1(z)p2(z) · · · pn(z). Proof. E h zX1+X2+···+Xn i = E h zX1 zX2 · · · zXn i = E zX1 E zX2 · · · E zXn = p1(z)p2(z) · · · pn(z). Example 12.7. Suppose X has a binomial distribution, B(n, p). Then E zX = n X r=0 P(X = r)zr = n X r=0 n r pr (1 − p)n−r zr = (1 − p + pz)n . This proves that X has the same distribution as Y1 +Y2 +· · ·+Yn, where Y1, Y2, . . . , Yn are i.i.d. Bernoulli random variables, each with P (Yi = 0) = q = 1 − p, P (Yi = 1) = p, E zYi = (1 − p + pz). Note. Whenever the p.g.f. factorizes it is useful to look to see if the random variable can be written as a sum of other (independent) random variables. Example 12.8. If X and Y are independently Poisson distributed with parameters λ and µ then: E zX+Y = E zX E zY = e−λ(1−z) e−µ(1−z) = e−(λ+µ)(1−z) , which is the p.g.f. of a Poisson random variable with parameter λ + µ. Since p.g.fs are 1–1 with distributions, X + Y is Poisson distributed with parameter λ + µ. 12.2 Combinatorial applications Generating functions are useful in many other realms. Tilings. How many ways can we tile a (2 × n) bathroom with (2 × 1) tiles? Say fn, where fn = fn−1 + fn−2 f0 = f1 = 1. 48
  • 114. Let F(z) = ∞ X n=0 fnzn . fnzn = fn−1zn + fn−2zn =⇒ P∞ n=2 fnzn = P∞ n=2 fn−1zn + P∞ n=2 fn−2zn and so, since f0 = f1 = 1, F(z) − f0 − zf1 = z(F(z) − f0) + z2 F(z) F(z)(1 − z − z2 ) = f0(1 − z) + zf1 = 1 − z + z = 1. Thus F(z) = (1 − z − z2 )−1 . Let α1 = 1 2 (1 + √ 5) α2 = 1 2 (1 − √ 5), F(z) = 1 (1 − α1z)(1 − α2z) = 1 α1 − α2 α1 (1 − α1z) − α2 (1 − α2z) = 1 α1 − α2 (α1 P∞ n=0 αn 1 zn − α2 P∞ n=0 αn 2 zn ) . The coefficient of zn , that is fn, is the Fibonacci number fn = 1 α1 − α2 (αn+1 1 − αn+1 2 ). Dyck words. There are 5 Dyck words of length 6: ()()(), (())(), ()(()), ((())), (()()). In general, a Dyck word of length 2n is a balanced string of n ‘(’ and n ‘)’. Let Cn be the number of Dyck words of length 2n. What is this? In general, w = (w1)w2, where w, w1, w2 are Dyck words. So Cn+1 = Pn i=0 CiCn−i, taking C0 = 1. Let c(x) = P∞ n=0 Cnxn . Then c(x) = 1 + xc(x)2 . So c(x) = 1 − √ 1 − 4x 2x = ∞ X n=0 2n n xn n + 1 . Cn = 1 n+1 2n n is the nth Catalan number. It is the number of Dyck words of length 2n, and also has many applications in combinatorial problems. It is the number of paths from (0, 0) to (2n, 0) that are always nonnegative, i.e. such that there are always at least as many ups as downs (heads as tails). We will make use of this result in a later discussion of random matrices in §24.3. The first Catalan numbers for n = 0, 1, 2, 3, . . . are 1, 1, 2, 5, 14, 42, 132, 429, . . . . 49
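As a quick sanity check on the Catalan numbers (my own sketch, not part of the notes), one can compare the recurrence C_{n+1} = Σ_i C_i C_{n−i} with the closed form C_n = (1/(n+1))·C(2n, n); this uses math.comb, available in Python 3.8+.

```python
from math import comb

C = [1]                                   # C_0 = 1
for n in range(7):
    C.append(sum(C[i] * C[n - i] for i in range(n + 1)))    # recurrence for C_{n+1}
print(C)                                  # [1, 1, 2, 5, 14, 42, 132, 429]
print([comb(2 * n, n) // (n + 1) for n in range(8)])         # same numbers, closed form
```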
  • 115. 13 Conditional expectation Conditional distributions. Joint distribution. Conditional expectation and its proper- ties. Marginals. The p.g.f. for the sum of a random number of terms. *Aggregate loss and value at risk*. *Conditional entropy*. 13.1 Conditional distribution and expectation Let X and Y be random variables (in general, not independent) with joint distribu- tion P (X = x, Y = y) . Then the distribution of X is P (X = x) = X y∈ΩY P (X = x, Y = y) . This is called the marginal distribution for X. Assuming P(Y = y) 0, the conditional distribution for X given by Y = y is P (X = x | Y = y) = P (X = x, Y = y) P (Y = y) . The conditional expectation of X given Y = y is, E [X | Y = y] = X x∈ΩX xP (X = x | Y = y) . We can also think of E [X | Y ] as the random variable defined by E [X | Y ] (ω) = E [X | Y = Y (ω)] . Thus E [X | Y ] : Ω → ΩX, (or : Ω → R if X is real-valued). Example 13.1. Let X1, X2, . . . , Xn be i.i.d. random variables, with Xi ∼ B(1, p), and Y = X1 + X2 + · · · + Xn. Then P (X1 = 1 | Y = r) = P (X1 = 1, Y = r) P (Y = r) = P (X1 = 1, X2 + · · · + Xn = r − 1) P (Y = r) = P (X1 = 1) P (X2 + · · · + Xn = r − 1) P (Y = r) = p · n−1 r−1 pr−1 (1 − p)n−r n r pr(1 − p)n−r = n−1 r−1 n r = r n . 50
  • 116. So E [X1 | Y = r] = 0 × P (X1 = 0 | Y = r) + 1 × P (X1 = 1 | Y = r) = r n E [X1 | Y = Y (ω)] = 1 n Y (ω) and therefore E [X1 | Y ] = 1 n Y, which is a random variable, i.e. a function of Y . 13.2 Properties of conditional expectation Theorem 13.2. If X and Y are independent then E [X | Y ] = E [X] . Proof. If X and Y are independent then for any y ∈ ΩY E [X | Y = y] = X x∈ΩX xP (X = x | Y = y) = X x∈ΩX xP (X = x) = E [X] . Theorem 13.3 (tower property of conditional expectation). For any two random variables, X and Y , E E [X | Y ] = E [X] . Proof. E E [X | Y ] = X y P (Y = y) E [X | Y = y] = X y P (Y = y) X x xP (X = x | Y = y) = X y X x xP (X = x, Y = y) = E [X] . This is also called the law of total expectation. As a special case: if A1, . . . , An is a partition of the sample space, then E[X] = P i:P (Ai)0 E[X | Ai]P(Ai). 13.3 Sums with a random number of terms Example 13.4. Let X1, X2, . . . be i.i.d. with p.g.f. p(z). Let N be a random variable independent of X1, X2, . . . with p.g.f. h(z). We now find the p.g.f. of SN = X1 + X2 + · · · + XN . 51
  • 117. E zX1+···+XN = E h E zX1+···+XN | N i = ∞ X n=0 P (N = n) E zX1+···+XN | N = n = ∞ X n=0 P (N = n) (p(z))n = h(p(z)). Then for example E [X1 + · · · + XN ] = d dz h(p(z))
evaluated at z = 1; that is,

  E[X1 + · · · + XN] = h'(1) p'(1) = E[N] E[X1].

Similarly, we can calculate d²/dz² h(p(z)) and hence Var(X1 + · · · + XN) in terms of Var(N) and Var(X1). This gives

  Var(SN) = E[N] Var(X1) + E[X1]² Var(N).

13.4 Aggregate loss distribution and VaR

A quantity of interest to an actuary is the aggregate loss distribution for a portfolio of insured risks and its value at risk (VaR). Suppose that the number of claims that will be made against a portfolio during a year is K, which is Poisson distributed with mean λ, and the severity of loss due to each claim is independent, with p.g.f. p(z). The aggregate loss is SK = X1 + · · · + XK. The VaRα at α = 0.995 is the value of x such that P(SK ≥ x) = 0.005: aggregate loss of x or more occurs only 1 year in 200.

Now pK(z) = exp(λ(z − 1)) and so E[z^{SK}] = exp(λ(p(z) − 1)). From this we can recover P(SK = x), x = 0, 1, . . . , and hence P(SK ≥ x), from which we can calculate the VaR. In practice it is more convenient to use the numerical method of the fast Fourier transform and the characteristic function, i.e. the p.g.f. with z = e^{iθ}. Suppose Xi takes values {0, 1, . . . , N − 1} with probabilities p0, . . . , p_{N−1}. Let ω = e^{−i 2π/N} and

  p*_k = p(ω^k) = Σ_{j=0}^{N−1} pj e^{−i (2πk/N) j},  k = 0, . . . , N − 1.

For example, suppose Xi is uniform on {0, 1, 2, 3}. Then we obtain the aggregate distribution of X1 + X2 (with range {0, 1, . . . , 6}) by the transform method sketched in code after §13.5 below. We start by padding out the distribution of X1 so that there will be room to accommodate X1 + X2 taking values up to 6. In squaring the transformed vector we are calculating ((p*_0)², . . . , (p*_6)²), i.e. p(z)² (the p.g.f. of X1 + X2) at each z ∈ {ω⁰, ω¹, . . . , ω⁶}, where ω = e^{−i 2π/7}. From these 7 values we can
  • 122. recover the 7 probabilities: P(X1 + X2 = r), r = 0, . . . , 6. For larger problems, the fast Fourier transform method is much quicker than taking powers of the p.g.f. directly. See Appendix B for more details. The VaR measure is widely used in finance, but is controversial and has drawbacks. 13.5 Conditional entropy Suppose that X and Y are not independent. Intuitively, we think that knowledge of Y will reduce the potential surprise (entropy) inherent in X. To show this, suppose P(X = ai, Y = bj) = pij. Denote the marginals as P(X = ai) = P j pij = αi. P(Y = bj) = P i pij = βj. The conditional probability of X given Y = bj is P(X = ai | Y = bj) = pij/βj. Conditional on knowing Y = bj, H(X | Y = bj) = − X i pij βj log pij βj . Now average this over values of Y to get H(X | Y ) = − X j βj X i pij βj log pij βj = − X i,j pij log pij βj . Now we show that, on average, knowledge of Y reduces entropy. This is because H(X) − H(X | Y ) = − X i,j pij log αi + X ij pij log pij βj = − X i,j pij log αiβj pij ≥ − log   X i,j pij αiβj pij   = 0, where in the final line we use Jensen’s inequality. This is only true on average. It is possible that H(X | Y = bj) H(X) for some j. For example, when playing Cluedo (or conducting a murder investigation), information may be obtained that increases one’s uncertainty about who committed the crime. 53
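Returning to §13.4: the notes refer to Mathematica code that is not reproduced here, so the following NumPy sketch (my own; the variable names are arbitrary) carries out the same computation for Xi uniform on {0, 1, 2, 3}, recovering the distribution of X1 + X2. NumPy's FFT uses the same sign convention ω = e^{−i2π/N} as above.

```python
import numpy as np

# p.m.f. of X1, uniform on {0,1,2,3}, padded to length 7 so that the inverse
# transform can accommodate X1 + X2, which takes values in {0,...,6}.
p = np.array([0.25, 0.25, 0.25, 0.25, 0.0, 0.0, 0.0])

p_star = np.fft.fft(p)             # p(omega^k) for k = 0,...,6
q = np.fft.ifft(p_star ** 2).real  # invert p(z)^2, the p.g.f. of X1 + X2
print(np.round(q, 4))              # [0.0625 0.125 0.1875 0.25 0.1875 0.125 0.0625]

# For the compound Poisson sum S_K one would instead invert exp(lam * (p_star - 1)).
```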
  • 123. 14 Branching processes Branching processes. Generating functions. Probability of extinction. 14.1 Branching processes Branching processes are used to model population growth due to reproduction. Con- sider a sequence of random variables X0, X1 . . . , where Xn is the number of individuals in the nth generation of a population. X0 = 1 X1 = 3 X2 = 6 X3 = 7 1 Assume the following. 1. X0 = 1. 2. Each individual lives for unit time, and then on death produces k offspring with probability fk, P k fk = 1. 3. All offspring behave independently. Xn+1 = Y n 1 + Y n 2 + · · · + Y n Xn , where Y n i are i.i.d. and Y n i denotes the number of offspring of the ith member of generation n. 14.2 Generating function of a branching process Let F(z) be the probability generating function of Y n i . F(z) = E h zY n i i = E zX1 = ∞ X k=0 fkzk . Define Fn(z) = E zXn . Then F1(z) = F(z) the probability generating function of the offspring distribution. Theorem 14.1. Fn+1(z) = Fn(F(z)) = F(F(· · · (F(z)) . . . )) = F(Fn(z)). 54
  • 124. Proof. Fn+1(z) = E zXn+1 = E E zXn+1 | Xn = ∞ X k=0 P (Xn = k) E zXn+1 | Xn = k = ∞ X k=0 P (Xn = k) E h zY n 1 +Y n 2 +···+Y n k i = ∞ X k=0 P (Xn = k) E h zY n 1 i · · · E h zY n k i = ∞ X k=0 P (Xn = k) (F(z)) k = Fn(F(z)) = F(F(· · · (F(z)) . . . )) = F(Fn(z)). Theorem 14.2 (mean and variance of population size). Let EX1 = µ = ∞ X k=0 kfk ∞ Var(X1) = σ2 = ∞ X k=0 (k − µ)2 fk ∞. Then E [Xn] = µn and Var Xn =    σ2 µn−1 (µn − 1) µ − 1 , µ 6= 1 nσ2 , µ = 1. (14.1) Proof. Prove by calculating F0 n(z), F00 n (z). Alternatively E [Xn] = E [E [Xn | Xn−1]] (using tower property) = E [ µXn−1] = µE [Xn−1] = µn (by induction) E (Xn − µXn−1)2 = E E (Xn − µXn−1)2 | Xn−1 = E [Var (Xn | Xn−1)] = E σ2 Xn−1 = σ2 µn−1 . 55
  • 125. Thus E X2 n − 2µE [XnXn−1] + µ2 E X2 n−1 = σ2 µn−1 . Now calculate E [XnXn−1] = E [E [XnXn−1 | Xn−1]] = E [Xn−1E [Xn | Xn−1]] = E [Xn−1µXn−1] = µE X2 n−1 . So E X2 n = σ2 µn−1 + µ2 E X2 n−1 , and Var Xn = E X2 n − E [Xn] 2 = µ2 E X2 n−1 + σ2 µn−1 − µ2 E [Xn−1] 2 = µ2 Var Xn−1 + σ2 µn−1 = µ4 Var Xn−2 + σ2 (µn−1 + µn ) = µ2(n−1) Var X1 + σ2 (µn−1 + µn + · · · + µ2n−3 ) = σ2 µn−1 (1 + µ + · · · + µn−1 ). 14.3 Probability of extinction To deal with extinction we must be careful with limits as n → ∞. Let An = [Xn = 0] (event that extinction occurs by generation n), A = ∞ [ n=1 An (event that extinction ever occurs). We have A1 ⊂ A2 ⊂ · · · . Let q be the extinction probability. q = P (extinction ever occurs) = P(A) = lim n→∞ P (An) = lim n→∞ P (Xn = 0) Then, since P(Xn = 0) = Fn(0), F(q) = F lim n→∞ Fn(0) = lim n→∞ F (Fn(0)) (since F is continuous, a result from Analysis) = lim n→∞ Fn+1(0), and thus F(q) = q. Alternatively, using the law of total probability, q = X k P (X1 = k) P (extinction | X1 = k) = X P (X1 = k) qk = F(q). 56
  • 126. Theorem 14.3. The probability of extinction, q, is the smallest positive root of the equation F(q) = q. Suppose µ is the mean of the offspring distribution. Then If µ ≤ 1 then q = 1, while if µ 1 then q 1. Proof. F(1) = 1, µ = ∞ X k=0 kfk = lim z→1 F0 (z) F00 (z) = ∞ X k=2 k(k − 1)zk−2 . Assume f0 0, f0 + f1 1. Then F(0) = f0 0 and F0 (0) = f1 1. So we have the following pictures in which F(z) is convex. F(z) µ 1 µ ≤ 1 f0 f0 z z q 0 0 0 0 1 1 1 1 Thus if µ ≤ 1, there does not exists a q ∈ (0, 1) with F(q) = q. If µ 1 then let α be the smallest positive root of F(z) = z then α ≤ 1. Further, F(0) ≤ F(α) = α (since F is increasing) =⇒ F(F(0)) ≤ α =⇒ Fn(0) ≤ α, for all n ≥ 1. So q = lim n→∞ Fn(0) ≤ α =⇒ q = α (since q is a root of F(z) = z). 57
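The extinction probability can be found numerically by iterating q_{n+1} = F(q_n) from q_0 = 0, since F_n(0) increases to q. Here is a minimal sketch (mine, not the notes'); the offspring distribution (f0, f1, f2) = (0.25, 0.25, 0.5) is an arbitrary example with µ = 1.25 > 1.

```python
# Offspring p.g.f. F(z) = 0.25 + 0.25 z + 0.5 z^2  (mean mu = 1.25 > 1).
def F(z):
    return 0.25 + 0.25 * z + 0.5 * z**2

q = 0.0
for _ in range(200):      # q_n = F_n(0) increases to the extinction probability
    q = F(q)
print(q)                  # 0.5, the smallest positive root of F(q) = q
```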
  • 127. 15 Random walk and gambler’s ruin Random walks. Gambler’s ruin. Duration of the game. Use of generating functions in random walk. 15.1 Random walks Let X1, X2, . . . be i.i.d. r.vs such that Xn = ( +1, with probability p, −1, with probability q = 1 − p. Let Sn = S0 + X1 + X2 + · · · + Xn where usually S0 = 0. Then (Sn, n = 0, 1, 2, . . . ) is a 1-dimensional random walk. S0 S1 S2 0 1 2 If p = q = 1 2 then we have a symmetric random walk. 15.2 Gambler’s ruin Example 15.1. A gambler starts with an initial fortune of £z, z a and plays a game in which at successive goes he wins or loses £1 with probabilities p and q, respectively. What is the probability he is bankrupt before reaching a? This is a random walk starting at z which stops when it hits 0 or a. Let pz = P(the random walk hits a before it hits 0 | start from z), qz = P(the random walk hits 0 before it hits a | start from z). a z 0 58
  • 128. After the first step the gambler’s fortune is either z + 1 or z − 1, with probabilities p and q respectively. From the law of total probability pz = qpz−1 + ppz+1, 0 z a. Also p0 = 0 and pa = 1. We now solve pt2 − t + q = 0. (pt − q)(t − 1) = 0 =⇒ t = 1 or q/p. The general solution for p 6= q is pz = A + B (q/p) z and so with the boundary conditions we get pz = 1 − (q/p) z 1 − (q/p) a . If p = q, the general solution is A + Bz and so pz = z/a. To calculate qz, observe that this is the same problem with p, q, z replaced by q, p, a−z respectively. Thus P(hits 0 before a) = qz =      (q/p) z − (q/p) a 1 − (q/p) a , if p 6= q a − z a , if p = q. Thus qz + pz = 1 and so on, as we would expect, the game ends with probability one. What happens as a → ∞? P(path hits 0 ever) = P ∞ [ a=z+1 {path hits 0 before it hits a} ! = lim a→∞ P (path hits 0 before it hits a) = lim a→∞ qz = ( (q/p)z , p q 1, p ≤ q. (15.1) Let G be the ultimate gain or loss. G = ( a − z, with probability pz −z, with probability qz. E [G] = ( apz − z, if p 6= q 0, if p = q. Notice that a fair game remains fair: if the coin is fair (p = q) then games based on it have expected reward 0. 59
15.3 Duration of the game

Let Dz be the expected time until the random walk hits 0 or a, starting from z. Dz is finite, because Dz/a is bounded above by 1/(pᵃ + qᵃ); this is the mean of a geometric random variable (the number of windows of size a needed until we obtain a window consisting of all +1s or all −1s).

Consider the first step. By the law of total probability,

  Dz = E[duration] = E[ E[duration | X1] ]
     = p E[duration | X1 = 1] + q E[duration | X1 = −1]
     = p(1 + Dz+1) + q(1 + Dz−1)
     = 1 + pDz+1 + qDz−1.

This holds for 0 < z < a, with D0 = Da = 0. Let's try for a particular solution Dz = Cz:

  Cz = 1 + pC(z + 1) + qC(z − 1)  ⟹  C = 1/(q − p), for p ≠ q.

Consider the homogeneous relation pt² − t + q = 0, with roots 1 and q/p. If p ≠ q the general solution is

  Dz = A + B(q/p)^z + z/(q − p).

Substitute z = 0 and z = a to get A and B, hence

  Dz = z/(q − p) − (a/(q − p)) · (1 − (q/p)^z)/(1 − (q/p)^a),  p ≠ q.

If p = q then a particular solution is −z², and the general solution is Dz = −z² + A + Bz. Substituting the boundary conditions gives

  Dz = z(a − z),  p = q.

Example 15.2. Initial capital is z and we wish to reach a before 0.

  p     q     z    a     P(ruin)   E[gain]    E[duration]
  0.5   0.5   90   100   0.100     0          900.00
  0.45  0.55  9    10    0.210     −1.101     11.01
  0.45  0.55  90   100   0.866     −76.556    765.56
  0.45  0.55  900  1000  ≈ 1       −900       9000
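The figures in Example 15.2 can be reproduced directly from the formulas for qz and Dz. The sketch below is my own (function names arbitrary); E[gain] is computed as (1 − qz)(a − z) − qz·z.

```python
def ruin_probability(p, z, a):
    q = 1 - p
    if p == q:
        return (a - z) / a
    r = q / p
    return (r**z - r**a) / (1 - r**a)

def expected_duration(p, z, a):
    q = 1 - p
    if p == q:
        return z * (a - z)
    r = q / p
    return z / (q - p) - (a / (q - p)) * (1 - r**z) / (1 - r**a)

for p, z, a in [(0.5, 90, 100), (0.45, 9, 10), (0.45, 90, 100), (0.45, 900, 1000)]:
    qz = ruin_probability(p, z, a)
    gain = (1 - qz) * (a - z) - qz * z
    print(p, z, a, round(qz, 3), round(gain, 3), round(expected_duration(p, z, a), 2))
```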
  • 130. 15.4 Use of generating functions in random walk Let’s stop the random walk when it hits 0 or a, giving absorption at 0 or a. Let Uz,n = P (r.w. absorbed at 0 at n | starts at z) . So U0,0 = 1, Uz,0 = 0, 0 z ≤ a, U0,n = Ua,n = 0, n 0. Consider the generating function Uz(s) = ∞ X n=0 Uz,nsn . Take the recurrence: Uz,n+1 = pUz+1,n + qUz−1,n, 0 ≤ z ≤ a, n ≥ 0, multiply by sn+1 and sum over n = 0, 1, 2 . . . , to obtain Uz(s) = psUz+1(s) + qsUz−1(s), where U0(s) = 1 and Ua(s) = 0. We look for a solution of the form Uz(s) = λ(s)z , which must satisfy λ(s) = psλ(s)2 + qs. There are two roots: λ1(s), λ2(s) = 1 ± p 1 − 4pqs2 2ps . Every solution is of the form Uz(s) = A(s)λ1(s)z + B(s)λ2(s)z . Substitute U0(s) = 1 and Ua(s) = 0 to find A(s) + B(s) = 1 and A(s)λ1(s)a + B(s)λ2(s)a = 0. Uz(s) = λ1(s)a λ2(s)z − λ1(s)z λ2(s)a λ1(s)a − λ2(s)a . But λ1(s)λ2(s) = q/p so Uz(s) = (q/p)z λ1(s)a−z − λ2(s)a−z λ1(s)a − λ2(s)a . Clearly Uz(1) = qz. The same method will find the generating function for absorption probabilities at a, say Vz(s). The generating function for the duration of the game is the sum of these two generating functions. So Dz = U0 z(1) + V 0 z (1). 61
  • 131. 16 Continuous random variables Continuous random variables. Density function. Distribution function. Uniform distribution. Exponential distribution, and its memoryless property. *Hazard rate*. Relations among probability distributions. 16.1 Continuous random variables Thus far, we have been considering experiments in which the set of possible outcomes, Ω, is finite or countable. Now we permit a continuum of possible outcomes. For example, we might spin a pointer and let ω ∈ Ω give the angular position at which it stops, with Ω = {ω : 0 ≤ ω ≤ 2π}. We wish to define a probability measure P on some subsets of Ω. A sensible choice of P for a subset [0, θ] is P (ω ∈ [0, θ]) = θ 2π , 0 ≤ θ ≤ 2π. Definition 16.1. A continuous random variable X is a real-valued function X : Ω → R for which P (a ≤ X(ω) ≤ b) = Z b a f(x) dx, where f is a function satisfying 1. f(x) ≥ 0, 2. R +∞ −∞ f(x) dx = 1. The function f is called the probability density function (p.d.f.). For example, if X(ω) = ω is the position at which the pointer stops then X is a continuous random variable with p.d.f. f(x) =    1 2π , 0 ≤ x ≤ 2π 0, otherwise. Here X is a uniformly distributed random variable; we write X ∼ U[0, 2π]. Intuition about probability density functions and their uses can be obtained from the approximate relation: P (X ∈ [x, x + δx]) = Z x+δx x f(z) dz ≈ f(x) δx. However, remember that f(x) is not a probability. Indeed, P(X = x) = 0 for x ∈ R. So by Axiom III we must conclude P(X ∈ A) = 0 if A is any countable subset of R. 62
  • 132. The cumulative distribution function (c.d.f.) (or just distribution function) of a random variable X (discrete, continuous or otherwise), is defined as F(x) = P (X ≤ x) . F(x) is increasing in x and tends to 1. 1 2 3 4 5 6 0.2 0.4 0.6 0.8 1.0 x F(x) .0 c.d.f. of F(x) = 1 − e−x for exponential r.v. E (1) If X is a continuous random variable then F(x) = Z x −∞ f(z) dz, and so F is continuous and differentiable. In fact, the name “continuous random variable” derives from the fact that F is contin- uous, (though this is actually a shortened form of “absolutely continuous”; the qualifier “absolutely” we leave undefined, but it equivalent to saying that a p.d.f. exists). We have F0 (x) = f(x) at any point x where the fundamental theorem of calculus applies. The distribution function is also defined for a discrete random variable, F(x) = X ω:X(ω)≤x pω in which case F is a step function. 1 2 3 4 5 6 0 1 x F(x) c.d.f. for X = number shown by rolling a fair die In either case P (a X ≤ b) = P (X ≤ b) − P (X ≤ a) = F(b) − F(a). 63
  • 133. Remark. There exist random variables that are neither discrete or continuous. For example, consider a r.v. with the c.d.f. F(x) =      x, 0 ≤ x ≤ 1/2, 1/2, 1/2 ≤ x 1, 1, x = 1. The sample space is not countable (so not a discrete r.v.), and there is no p.d.f. (so not a continuous r.v.). There is an atom at x = 1, as P(X = 1) = 1/2. Only discrete r.vs have a p.m.f. and only continuous r.vs have a p.d.f. All random variables have a c.d.f. 16.2 Uniform distribution The uniform distribution on [a, b] has the c.d.f., and corresponding p.d.f. F(x) = x − a b − a , f(x) = 1 b − a , a ≤ x ≤ b. If X has this distribution we write X ∼ U[a, b]. 16.3 Exponential distribution The exponential distribution with parameter λ has the c.d.f. and p.d.f. F(x) = 1 − e−λx , f(x) = λe−λx , 0 ≤ x ∞. If X has this distribution we write X ∼ E (λ). Note that X is nonnegative. An important fact about the exponential distribution it that it has the memoryless property. If X ∼ E (λ) then P (X ≥ x + z | X ≥ z) = P (X ≥ x + z) P (X ≥ z) = e−λ(x+z) e−λz = e−λx = P (X ≥ x) . If X were the life of something, such as ‘how long this phone call to my mother will last’, the memoryless property says that after we have been talking for 5 minutes, the distribution of the remaining duration of the call is just the same as it was at the start. This is close to what happens in real life. If you walk in Cambridge on a busy afternoon the distribution of the time until you next run into a friend is likely to be exponentially distributed. Can you explain why? The discrete distribution with the memoryless property is the geometric distribution. That is, for positive integers k and h, P (X ≥ k + h | X ≥ k) = P (X ≥ k + h) P (X ≥ k) = qk+h qk = qh = P(X ≥ h). 64
  • 134. 16.4 Hazard rate In modelling the lifetimes of components (or humans) a useful notion is the hazard rate, defined as h(x) = f(x) 1 − F(x) . Its meaning is explained by P(X ≤ x + δx | X x) = P(x X ≤ x + δx) P(X x) ≈ f(x) δx 1 − F(x) = h(x) δx. The distribution having constant h(x) = λ is the exponential distribution E (λ). In actuarial science the hazard rate of human life is known as the ‘force of mortality’. 16.5 Relationships among probability distributions We have met a large number of distributions: Bernoulli, binomial, Poisson, geometric, hypergeometric, Zipf, Benford, uniform, exponential. By the end of the course we will add gamma, Cauchy, beta, normal and multivariate normal. It is interesting and useful to know how distributions relate to one another. For example, the Poisson is the limit of the binomial as n → ∞ and np → λ. And if X and Y are i.i.d. exponential r.vs then X/(X + Y ) is uniformly distributed. At the right is a par- tial view of Univariate Distribution Relation- ships, by L. M. Leemis and J. T. McQueston, The American Statis- tician, 2008, 62:45–53. Click above to see the whole picture. I have highlighted the distributions we meet in this course. See also the Wikipedia page: Relationships among probability distributions. 65
  • 135. 17 Functions of a continuous random variable Distribution of a function of a random variable. Expectation and variance of a contin- uous random variable. Stochastic ordering. Inspection paradox. 17.1 Distribution of a function of a random variable Theorem 17.1. If X is a continuous random variable with p.d.f. f(x) and h(x) is a continuous strictly increasing function with h−1 (x) differentiable then Y = h(X) is a continuous random variable with p.d.f. fY (x) = f h−1 (x) d dx h−1 (x). Proof. The distribution function of Y = h(X) is P (h(X) ≤ x) = P X ≤ h−1 (x) = F h−1 (x) , since h is strictly increasing and F is the distribution function of X. Then d dx P (h(X) ≤ x) is the p.d.f., which is, as claimed, fY . Note. It is easier to repeat this proof when you need it than to remember the result. Example 17.2. Suppose X is uniformly distributed on [0, 1]. Consider Y = − log X. P (Y ≤ y) = P (− log X ≤ y) = P X ≥ e−y = Z 1 e−y 1 dx = 1 − e−y . This is F(y) for Y having the exponential distribution with parameter 1. More generally, we have the following. Theorem 17.3. Let U ∼ U[0, 1]. For any strictly increasing and continuous distri- bution function F, the random variable X defined by X = F−1 (U) has distribution function F. Proof. P (X ≤ x) = P F−1 (U) ≤ x = P (U ≤ F(x)) = F(x). 66
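Theorem 17.3 is the basis of inverse-transform sampling. The sketch below is mine, not from the notes; it uses the exponential c.d.f. F(x) = 1 − e^{−λx}, whose inverse is F⁻¹(u) = −log(1 − u)/λ. The remarks that follow explain how to extend this beyond strictly increasing F.

```python
import math
import random

lam = 2.0
# X = F^{-1}(U) with F(x) = 1 - exp(-lam * x), so F^{-1}(u) = -log(1 - u)/lam.
samples = [-math.log(1 - random.random()) / lam for _ in range(100000)]
print(sum(samples) / len(samples))   # close to E[X] = 1/lam = 0.5
```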
  • 136. Remarks. (i) This is true when F is not strictly increasing, provided we define F−1 (u) = inf{x : F(x) ≥ u, 0 u 1}. (ii) This can also be done (but a bit more messily) for discrete random variables: P (X = xi) = pi, i = 0, 1, . . . Let X = xj if j−1 X i=0 pi ≤ U j X i=0 pi, U ∼ U[0, 1]. This is useful when writing a computer simulation in which there are random events. 17.2 Expectation The expectation (or mean) of a continuous random variable X is E [X] = Z ∞ −∞ xf(x) dx provided not both of R ∞ 0 xf(x) dx and R 0 −∞ xf(x) dx are infinite. Theorem 17.4. If X is a continuous random variable then E [X] = Z ∞ 0 P (X ≥ x) dx − Z ∞ 0 P (X ≤ −x) dx. Proof. Z ∞ 0 P (X ≥ x) dx = Z ∞ 0 Z ∞ x f(y) dy dx = Z ∞ 0 Z ∞ 0 I[y ≥ x]f(y) dy dx = Z ∞ 0 Z y 0 dxf(y) dy = Z ∞ 0 yf(y) dy. Similarly, R ∞ 0 P (X ≤ −x) dx = − R 0 −∞ yf(y) dy. The result follows. In the case X ≥ 0 this is EX = R ∞ 0 (1 − F(x)) dx. Example 17.5. Suppose X ∼ E (λ). Then P(X ≥ x) = R ∞ x λe−λt dt = e−λx . Thus EX = R ∞ 0 e−λx dx = 1/λ. This also holds for discrete random variables and is often a very useful way to compute expectation, whether the random variable is discrete or continuous. 67
  • 137. If X takes values in the set {0, 1, . . . } the theorem states that E [X] = ∞ X n=0 P (X n) = ∞ X n=1 P (X ≥ n) ! . A direct proof of this is as follows: ∞ X n=0 P (X n) = ∞ X n=0 ∞ X m=0 I[m n]P (X = m) = ∞ X m=0 ∞ X n=0 I[m n] ! P (X = m) = ∞ X m=0 mP (X = m) = EX. This result is very useful and well worth remembering! 17.3 Stochastic ordering of random variables For two random variables X and Y , we say X is stochastically greater than Y and write X ≥st Y if P(X t) ≥ P(Y t) for all t. Using the result above. X ≥st Y =⇒ ∞ X k=0 P(X k) ≥ ∞ X k=0 P(Y k) =⇒ EX ≥ EY. So stochastic ordering implies expectation ordering. This is also true for continuous random variables. For example, suppose X and Y are exponential random variables with parameters 1/2 and 1. Then P(X t) = e−t/2 e−t = P(Y t). So X ≥st Y . Also EX = 2 1 = EY . 17.4 Variance The variance of a continuous random variable X is defined as for discrete r.vs, Var X = E (X − E [X])2 . The properties of expectation and variance are the same for discrete and continuous random variables; just replace P with R in the proofs. Example 17.6. Var X = E X2 − E [X] 2 = Z ∞ −∞ x2 f(x) dx − Z ∞ −∞ xf(x) dx 2 . Example 17.7. Suppose X ∼ U[a, b]. Then EX = Z b a x 1 b − a dx = 1 2 (a + b), Var X = Z b a x2 dx b − a dx − (EX)2 = 1 12 (b − a)2 . 68
  • 138. 17.5 Inspection paradox Suppose that n families have children attending a school. Family i has Xi children at the school, where X1, . . . , Xn are i.i.d. r.vs, with P(Xi = k) = pk, k = 1, . . . , m. The average family size is µ. A child is picked at random. What is the probability distribution of the size of the family from which she comes? Let J be the index of the family from which she comes. P(Xj = k | J = j) = P(J = j, Xj = k) P(J = j) = E   pk k k+ P i6=j Xi 1/n   which does not depend on j. Thus P(XJ = k) P(X1 = k) = E n 1 + P i6=j Xi/k # ≥ k n k + (n − 1)µ (by Jensen’s inequality). So P(XJ = k) P(X1 = k) is increasing in k and greater than 1 for k µ. Using the fact that a A ≤ a+b A+B when a A ≤ b B (any a, b, A, B 0) Pm k=1 P(XJ = k) Pm k=1 P(X1 = k) = 1 1 =⇒ Pm k=2 P(XJ = k) Pm k=2 P(X1 = k) ≥ 1 . . . =⇒ Pm k=i P(XJ = k) Pm k=i P(X1 = k) ≥ 1 and hence P(XJ ≥ i) ≥ P(X1 ≥ i) for all i. So we have proved that XJ is stochastic greater than X1. From §17.3, this implies that the means are also ordered: EXJ ≥ EX1 = µ. The fact that the family size of the randomly chosen student tends to be greater that that of a normal family is known as the inspection paradox. One also sees this when buses have an average interarrival interval of µ minutes. Unless the interarrival time is exactly µ minutes, the average waiting of a randomly arriving person will be more than µ/2 minutes. This is because she is more likely to arrive in a large inter-bus gap than in a small one. In fact, it can be shown that the average wait will be (µ2 + σ2 )/(2µ), where σ2 is the variance of the interarrival time. Coda. What do you think of this claim? ‘Girls have more brothers than boys do.’ ‘Proof’. Condition on the family having b boys and g girls. Each girl has b brothers; each boy has b − 1 brothers. Now take expected value over all possible b, g. 69
  • 139. 18 Jointly distributed random variables Jointly distributed random variables. Use of pictures when working with uniform dis- tributions. Geometric probability. Bertrand’s paradox. Buffon’s needle. 18.1 Jointly distributed random variables For two random variables X and Y the joint distribution function is F(x, y) = P (X ≤ x, Y ≤ y) , F : R2 → [0, 1]. The marginal distribution of X is FX(x) = P (X ≤ x) = P (X ≤ x, Y ∞) = F(x, ∞) = lim y→∞ F(x, y). Similarly, FY (x) = F(∞, y). We say that X1, X2, . . . , Xn are jointly distributed continuous random variables, and have joint probability density function f, if for any set A ⊆ Rn P (X1, X2, . . . , Xn) ∈ A = ZZ . . . Z (x1,...,xn)∈A f(x1, . . . , xn) dx1 . . . dxn, and f satisfies the obvious conditions: f(x1, . . . , xn) ≥ 0, ZZ . . . Z Rn f(x1, . . . , xn) dx1 . . . dxn = 1. Example 18.1. The joint p.d.f. when n = 2 can be found from the joint distribution. F(x, y) = P (X ≤ x, Y ≤ y) = Z x −∞ Z y −∞ f(u, v) du dv and so f(x, y) = ∂2 F(x, y) ∂x ∂y , provided this is defined at (x, y). Theorem 18.2. If X and Y are jointly continuous random variables then they are individually continuous random variables. 70
  • 140. Proof. Since X and Y are jointly continuous random variables P (X ∈ A) = P (X ∈ A, Y ∈ (−∞, +∞)) = Z x∈A Z ∞ −∞ f(x, y) dx dy = Z x∈A fX(x) dx, where fX(x) = R ∞ −∞ f(x, y) dy is the p.d.f. of X. 18.2 Independence of continuous random variables The notion of independence is defined in a similar manner as it is defined for discrete random variables. Continuous random variables X1, . . . , Xn are independent if P(X1 ∈ A1, X2 ∈ A2, . . . , Xn ∈ An) = P(X1 ∈ A1)P(X2 ∈ A2) · · · P(Xn ∈ An) for all Ai ⊆ ΩXi , i = 1, . . . , n. [Note. Each Ai is assumed measurable, i.e. we can compute P(Xi ∈ Ai) by using the probability axioms and the fact that for any interval, P(Xi ∈ [a, b]) = R b a f(x) dx.] Let FXi and fXi be the c.d.f. and p.d.f. of Xi. Independence is equivalent to the statement that for all x1, . . . , xn the joint distribution function factors into the product of the marginal distribution functions: F(x1, . . . , xn) = FX1 (x1) · · · FXn (xn). It is also equivalent to the statement that the joint p.d.f. factors into the product of the marginal densities: f(x1, . . . , xn) = fX1 (x1) · · · fXn (xn). Theorem 9.2 stated that for independent discrete random variables E [ Qn i=1 Xi] = Qn i=1 E[Xi]. By the above, we see that this also holds for independent continuous random variables. Similarly, it is true that Var( P i Xi) = P i Var(Xi). 18.3 Geometric probability The following is an example of what is called geometric probability. Outcomes can be visualized and their probabilities found with the aid of a picture. Example 18.3. Two points X and Y are chosen at random and independently along a line segment of length L. What is the probability that: |X − Y | ≤ ` ? Suppose that “at random” means uniformly so that f(x, y) = 1 L2 , x, y ∈ [0, L]2 . 71
  • 141. ℓ ℓ L L 0 0 A The desired probability is = ZZ A f(x, y) dx dy = area of A L2 = L2 − (L − `)2 L2 = 2L` − `2 L2 . We consider below two further examples of geometric probability. Others are ‘A stick in broken at random in two places. What is the probability that the three pieces can form a triangle?’. See also the infamous Planet Zog tripos questions in 2003, 2004. 18.4 Bertrand’s paradox Example 18.4 [Bertrand’s Paradox]. Posed by Bertrand in 1889: What is the prob- ability that a “random chord” of a circle has length greater than the length of the side of an inscribed equalateral triangle? The ‘paradox’ is that there are at least 3 interpretations of what it means for a chord to be chosen ‘at random’. (1) (2) (3) (1) The endpoints are independently and uniformly distributed over the circumference. The chord is longer than a side of the triangle if the other chord endpoint lies on the arc between the endpoints of the triangle side opposite the first point. The length of the arc is one third of the circumference of the circle. So answer = 1 3 . (2) The chord is perpendicular to a given radius, intersecting at a point chosen uniformly over the radius. The chord is longer than a side of the triangle if the chosen point is 72
  • 142. nearer the center of the circle than the point where the side of the triangle intersects the radius. Since the side of the triangle bisects the radius: answer = 1 2 . (3) The midpoint of the chord is chosen uniformly within the circle. The chord is longer than a side of the inscribed triangle if the chosen point falls within a concentric circle of radius 1/2 the radius of the larger circle. As the smaller circle has 1/4 the area of the large circle: answer = 1 4 . Suppose we throw long sticks from a large distance onto a circle drawn on the ground. To which of (1)–(3) does this correspond? What is the ‘best’ answer to Bertrand’s question? 18.5 Buffon’s needle Example 18.5. A needle of length ` is tossed at random onto a floor marked with parallel lines a distance L apart, where ` ≤ L. What is the probability of the event A =[the needle intersects one of the parallel lines]? Answer. Let Θ ∈ [0, π] be the angle between the needle and the parallel lines and let X be the distance from the bottom of the needle to the line above it. ℓ L X Θ It is reasonable to suppose that independently X ∼ U[0, L], Θ ∼ U[0, π). Thus f(x, θ) = 1 L 1 π , 0 ≤ x ≤ L and 0 ≤ θ ≤ π. The needle intersects the line if and only if X ≤ ` sin Θ. So p = P(A) = Z π 0 ` sin θ L 1 π dθ = 2` πL . Suppose we drop a needle n times and it hits the line N times. We might estimate p by p̂ = N/n, and π by π̂ = (2`)/(p̂L). In §23.3 we consider how large n must be so that the estimate of π is good, in the sense P(|π̂ − π| 0.001) ≥ 0.95. 73
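Buffon's needle is easily simulated; the following sketch is my own, with ℓ = L = 1. It estimates p = 2ℓ/(πL) and hence π, anticipating the accuracy question taken up in §23.3.

```python
import math
import random

l, L, n = 1.0, 1.0, 200000
hits = 0
for _ in range(n):
    x = random.uniform(0, L)            # distance from the bottom of the needle to the line above
    theta = random.uniform(0, math.pi)  # angle between the needle and the parallel lines
    if x <= l * math.sin(theta):        # the needle crosses a line
        hits += 1

p_hat = hits / n
print(p_hat, 2 * l / (math.pi * L))     # estimate vs exact p = 2l/(pi L)
print(2 * l / (p_hat * L))              # corresponding estimate of pi
```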
  • 143. 19 Normal distribution Normal distribution, Mean, mode and median. Order statistics and their distributions. *Stochastic bin packing*. 19.1 Normal distribution The normal distribution (or Gaussian distribution) with parameters µ and σ2 has p.d.f. f(x) = 1 √ 2πσ e − (x − µ)2 2σ2 , −∞ x ∞. To indicate that X has this distribution we write X ∼ N(µ, σ2 ). The standard normal distribution means N(0, 1) and its c.d.f. is usually denoted by Φ(x) = R x −∞ (1/ √ 2π)e−x2 /2 dx, and Φ̄(x) = 1 − Φ(x). Bell-shaped p.d.fs of N(0, 1), N(1, 1) and N(0, 2): -6 -4 -2 2 4 6 0.1 0.2 0.3 0.4 Example 19.1. We need to check that Z ∞ −∞ f(x) dx = 1. We make the substitution z = (x − µ)/σ. So dx/σ = dz, and then I = 1 √ 2πσ Z ∞ −∞ e− (x−µ)2 2σ2 dx = 1 √ 2π Z ∞ −∞ e− 1 2 z2 dz. You are probably familiar with how to calculate this. Look at I2 = 1 2π Z ∞ −∞ e− 1 2 x2 dx Z ∞ −∞ e− 1 2 y2 dy = 1 2π Z ∞ −∞ Z ∞ −∞ e− 1 2 (x2 +y2 ) dx dy = 1 2π Z 2π 0 Z ∞ 0 re− 1 2 r2 dr dθ = 1 2π Z 2π 0 dθ = 1. Therefore I = 1. 74
  • 144. The expectation is E [X] = 1 √ 2πσ Z ∞ −∞ xe− (x−µ)2 2σ2 dx = 1 √ 2πσ Z ∞ −∞ (x − µ)e− (x−µ)2 2σ2 dx + 1 √ 2πσ Z ∞ −∞ µe− (x−µ)2 2σ2 dx. The first term is convergent and equals zero by symmetry, so that E [X] = 0 + µ = µ. Now let Z = (X − µ)/σ. So Var(X) = σ2 Var(Z). Using Theorem 17.1 we see that the density of Z is (2π)−1/2 exp(−z2 /2), and so Z ∼ N(0, 1). We have Var(Z) = E Z2 − E [Z] 2 . Now E [Z] = 0. So using integration by parts to find E Z2 , Var(Z) = 1 √ 2π Z ∞ −∞ z2 e− 1 2 z2 dz = h − 1 √ 2π ze− 1 2 z2 i∞ −∞ + 1 √ 2π Z ∞ −∞ e− 1 2 z2 dz = 0 + 1 = 1. Hence Var X = σ2 . 19.2 Calculations with the normal distribution Example 19.2 [The advantage of a small increase in the mean]. UK adult male heights are normally distributed with mean 7000 and standard deviation 300 . In the Netherlands these figures are 7100 and 300 . What is P(Y X), where X and Y are the heights of randomly chosen UK and Netherlands males, respectively? Answer. Sums of normal r.vs are normally distributed (proved in §22.2). So with X ∼ N(70, 9) and Y ∼ N(71, 9) we have Y − X ∼ N(1, 18). So P(Y − X 0) = Φ(1/ √ 18) = 0.5931. This is more than 1/2 but not hugely so. Now suppose that in both countries the Olympic male basketball teams are selected from that portion of men whose height is at least 400 above the mean (which corresponds to the 9.1% tallest males of the country). What is the probability that a randomly chosen Netherlands player is taller than a randomly chosen UK player? Answer. Now we want P(X Y | X ≥ 74, Y ≥ 75). Let φX and φY be the p.d.fs of N(70, 9) and N(71, 9) respectively. The answer is R 75 x=74 φX(x) dx R ∞ y=75 φY (y) dy + R ∞ x=75 R ∞ y=x φY (y) dy φX(x) dx R ∞ x=74 φX(x) dx R ∞ y=75 φY (y) dy = 0.7558 (computed numerically). 75
  • 145. The lesson is that if members of a population A are only slightly better in some activity than members of a population B, then members of A may nonetheless appear much more talented than members of B when one focuses upon the sub-populations of exceptional performers (such as 100m sprinters or Nobel prize winners). 19.3 Mode, median and sample mean Given a p.d.f. f(x), we say x̂ is a mode if f(x̂) ≥ f(x) for all x, and x̂ is a median if Z x̂ −∞ f(x) dx = Z ∞ x̂ f(x) dx = 1 2 . For a discrete random variable, x̂ is a median if P (X ≤ x̂) ≥ 1 2 and P (X ≥ x̂) ≥ 1 2 . For N(µ, σ2 ) the mean, mode and median are all equal to µ. If X1, . . . , Xn is a random sample from the distribution then the sample mean is X̄ = 1 n n X i=1 Xi. 19.4 Distribution of order statistics Let Y1, . . . , Yn be the values of X1, . . . , Xn arranged in increasing order, so Y1 ≤ · · · ≤ Yn. These are called the order statistics. Another common notation is X(i) = Yi, so that X(1) ≤ · · · ≤ X(n). The sample median is Yn+1 2 if n is odd or any value in Yn 2 , Yn 2 +1 if n is even. The largest is Yn = max{X1, . . . , Xn}. If X1, . . . , Xn are i.i.d. r.vs with c.d.f. F and p.d.f. f then, P (Yn ≤ y) = P (X1 ≤ y, . . . , Xn ≤ y) = F(y)n . Thus the p.d.f. of Yn is g(y) = d dy F(y)n = nF(y)n−1 f(y). Similarly, the smallest is Y1 = min{X1, . . . , Xn}, and P (Y1 ≤ y) = 1 − P (X1 ≥ y, . . . , Xn ≥ y) = 1 − (1 − F(y)) n . 76
  • 146. Thus the p.d.f. of Y1 is h(y) = n (1 − F(y)) n−1 f(y). What about the joint density of Y1, Yn? The joint c.d.f. is G(y1, yn) = P (Y1 ≤ y1, Yn ≤ yn) = P (Yn ≤ yn) − P (Yn ≤ yn, Y1 y1) = P (Yn ≤ yn) − P (y1 X1 ≤ yn, y1 X2 ≤ yn, . . . , y1 Xn ≤ yn) = F(yn)n − (F(yn) − F(y1)) n . Thus the joint p.d.f. of Y1, Yn is g(y1, yn) = ∂2 ∂y1∂yn G(y1, yn) = ( n(n − 1) (F(yn) − F(y1)) n−2 f(y1)f(yn), −∞ y1 ≤ yn ∞ 0, otherwise. To see this another way, think of 5 boxes corresponding to intervals (−∞, y1), [y1, y1+δ), [y1 +δ, yn), [yn, yn +δ), [yn +δ, ∞). We find the probability that the counts in the boxes are 0, 1, n − 2, 1, 0 by using the multinomial distribution, and the idea that a point is chosen in box [y1, y1 + δ) with probability f(y1)δ, and so on. See also Example 21.1. 19.5 Stochastic bin packing The weights of n items are X1, . . . , Xn, assumed i.i.d. U[0, 1]. Mary and John each have a bin (or suitcase) which can carry total weight 1. Mary likes to pack in her bin only the heaviest item. John likes to pack the items in order 1, 2, . . . , n, packing each item if it can fit in the space remaining. Whose suitcase is more likely to heavier? Answer. Let ZM and ZJ be the unused capacity in Mary’s and John’s bins, respectively. P(ZM t) = P(X1 ≤ 1 − t, . . . , Xn ≤ 1 − t) = (1 − t)n . Calculation for ZJ is trickier. Let Gk(x) be the probability that ZJ t given that when John is about to consider the last k items the remaining capacity of his bin is x, where x t. Clearly, G0(x) = 1. We shall prove inductively that Gk(x) = (1 − t)k . Assuming this is true at k, and letting X = Xn−k+1, an inductive step follows from Gk+1(x) = P(X x)Gk(x) + Z x−t 0 Gk(x − y) dy = (1 − t)k+1 . Thus, P(ZJ t) = Gn(1) = (1 − t)n = P(ZM t). So, surprisingly, ZJ =st ZM . 77
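The conclusion of §19.5, that Z_J and Z_M have the same distribution, is surprising enough to be worth checking by simulation. This sketch is mine (n, t and the trial count are arbitrary); both empirical probabilities should be close to (1 − t)^n.

```python
import random

def unused_capacities(n):
    x = [random.random() for _ in range(n)]
    z_mary = 1 - max(x)            # Mary packs only the heaviest item
    cap = 1.0
    for w in x:                    # John packs each item in turn if it still fits
        if w <= cap:
            cap -= w
    return z_mary, cap

n, t, trials = 5, 0.1, 100000
results = [unused_capacities(n) for _ in range(trials)]
print(sum(zm > t for zm, _ in results) / trials)   # close to (1 - t)^n = 0.59049
print(sum(zj > t for _, zj in results) / trials)   # close to the same value
```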
  • 147. 20 Transformations of random variables Transformation of random variables. Convolution. Cauchy distribution. 20.1 Transformation of random variables Suppose X1, X2, . . . , Xn have joint p.d.f. f. Let Y1 = r1(X1, X2, . . . , Xn) Y2 = r2(X1, X2, . . . , Xn) . . . Yn = rn(X1, X2, . . . , Xn). Let R ⊆ Rn be such that P ((X1, X2, . . . , Xn) ∈ R) = 1. Let S be the image of R under the above transformation. Suppose the transformation from R to S is 1–1 (bijective). Then there exist inverse functions X1 = s1(Y1, Y2, . . . , Yn) X2 = s2(Y1, Y2, . . . , Yn) . . . Xn = sn(Y1, Y2, . . . , Yn). The familiar bijection between rectangular and polar coordinates is shown below. 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.02 0.04 0.06 0.08 0.10 0.12 0.14 0.05 0.10 0.15 0.2 0.4 0.6 0.8 1.0 1.2 1.4 x y θ r (x, y) (x + dx, y + dy) (r, θ) (r + dr, θ + dθ) r = p x2 + y2 θ = tan−1 (y/x) x = r cos θ y = r sin θ 78
  • 148. Assume that ∂si/∂yj exists and is continuous at every point (y1, y2, . . . , yn) in S. Define the Jacobian determinant as J = ∂(s1, . . . , sn) ∂(y1, . . . , yn) = det       ∂s1 ∂y1 . . . ∂s1 ∂yn . . . ... . . . ∂sn ∂y1 . . . ∂sn ∂yn       . If A ⊆ R and B is the image of A then P ((X1, . . . , Xn) ∈ A) = Z · · · Z A f(x1, . . . , xn) dx1 . . . dxn (1) = Z · · · Z B f (s1, . . . , sn) |J| dy1 . . . dyn = P ((Y1, . . . , Yn) ∈ B) . (2) Transformation is 1–1 so (1), (2) are the same. Thus the density for Y1, . . . , Yn is g(y1, y2, . . . , yn) =        f (s1(y1, y2, . . . , yn), . . . , sn(y1, y2, . . . , yn)) |J| , if (y1, y2, . . . , yn) ∈ S 0, otherwise. See also Appendix C. Example 20.1 [density of products and quotients]. Suppose that (X, Y ) has density f(x, y) = ( 4xy, for 0 ≤ x ≤ 1, 0 y ≤ 1 0, otherwise. Let U = X/Y, V = XY, so X = √ UV . Y = p V/U, det ∂x/∂u ∂x/∂v ∂y/∂u ∂y/∂v = det 1 2 p v/u 1 2 p u/v −1 2 p v/u3 1 2 p 1/(uv) = 1 2u . Therefore |J| = 1 2u . It can sometimes be easier to work the other way and then invert: det ∂u/∂x ∂u/∂y ∂v/∂x ∂v/∂y = det 1/y −x/y2 y x = 2x/y = 2u. Therefore |J| = 1 2u . So taking S as the regious shown below, we have for (u, v) ∈ S, g(u, v) = 1 2u (4xy) = 1 2u × 4 √ uv r v u = 2v/u, and g(u, v) = 0 otherwise. 79
  • 149. 1.5 1 1 1 1 x y uu v uv = 1 u = v R S Notice that U and V are not independent since g(u, v) = 2(v/u) I[(u, v) ∈ S] is not the product of the two densities. When the transformations are linear things are simple. Let A be a n × n invertible matrix. Then    Y1 . . . Yn    = A    X1 . . . Xn    , |J| = det (A−1 ) = (det A)−1 . Thus the p.d.f. of (Y1, . . . , Yn) is g(y1, . . . , yn) = 1 det A f(A−1 y). 20.2 Convolution Example 20.2. Suppose X1, X2 have the p.d.f. f(x1, x2) and we wish to calculate the p.d.f. of X1 + X2. Let Y = X1 + X2 and Z = X2. Then X1 = Y − Z and X2 = Z. A = 1 1 0 1 , so |J| = 1/| det(A)| = 1. The joint distribution of Y and Z is g(y, z) = f(x1, x2) = f(y − z, z). The marginal density of Y is g(y) = Z ∞ −∞ f(y − z, z) dz, −∞ y ∞, = Z ∞ −∞ f(z, y − z) dz (by change of variable). 80
  • 150. If X1 and X2 are independent, with p.d.fs f1 and f2 then f(x1, x2) = f1(x1)f2(x2) =⇒ g(y) = Z ∞ −∞ f1(z)f2(y − z) dz. This is called the convolution of f1 and f2. 20.3 Cauchy distribution The Cauchy distribution has p.d.f. f(x) = 1 π(1 + x2) , −∞ x ∞. By making the substitution x = tan θ, we can check that this is a density, since Z ∞ −∞ 1 π(1 + x2) dx = Z π/2 −π/2 1 π dθ = 1. The Cauchy distribution is an example of a distribution having no mean, since Z ∞ −∞ x dx π(1 + x2) = Z ∞ 0 x dx π(1 + x2) + Z 0 −∞ x dx π(1 + x2) = ∞ − ∞. E[X] does not exist because both integrals are infinite. However, the second moment does exist. It is E[X2 ] = ∞. Suppose X and Y are independent and have the Cauchy distribution. To find the distribution of Z = X + Y we use convolution. fZ(z) = Z ∞ −∞ fX(x)fY (z − x) dx = Z ∞ −∞ 1 π2(1 + x2)(1 + (z − x)2) dx = 1/2 π(1 + (z/2)2) (by using partial fractions). We conclude that 1 2 Z also has the Cauchy distribution. Inductively, one can show that if X1, . . . , Xn are i.i.d. with the Cauchy distribution, then 1 n (X1 + · · · + Xn) also has the Cauchy distribution. This demonstrates that the central limit theorem does not hold when Xi has no mean. Facts. (i) If Θ ∼ U[−π/2, π/2] then X = tan Θ has the Cauchy distribution. (ii) If independently X, Y ∼ N(0, 1) then X/Y has the Cauchy distribution. 81
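Since the Cauchy distribution has no mean, the sample mean (X1 + · · · + Xn)/n does not settle down as n grows, in contrast with the WLLN. A quick illustration (mine, not part of the notes), using Fact (i) to generate Cauchy samples as tan Θ:

```python
import math
import random

def cauchy_sample():
    # Fact (i): tan(Theta) is Cauchy when Theta ~ U[-pi/2, pi/2].
    return math.tan(random.uniform(-math.pi / 2, math.pi / 2))

for n in [100, 10000, 1000000]:
    mean = sum(cauchy_sample() for _ in range(n)) / n
    print(n, mean)    # the running means do not converge as n increases
```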
  • 151. 21 Moment generating functions Transformations that are not 1–1. Minimum of exponentials is exponential. Moment generating functions. Moment generating function of exponential distribution. Sum of i.i.d. exponential random variables and the gamma distribution. *Beta distribution*. 21.1 What happens if the mapping is not 1–1? What happens if the mapping is not 1–1? Suppose X has p.d.f. f. What is the p.d.f. of Y = |X|? Clearly Y ≥ 0, and so for 0 ≤ a b, P (|X| ∈ (a, b)) = Z b a (f(x) + f(−x)) dx =⇒ fY (x) = f(x) + f(−x). Example 21.1. Suppose X1, . . . , Xn are i.i.d. r.vs. What is the p.d.f. of the order statistics Y1, . . . , Yn? g(y1, . . . , yn) = ( n!f(y1) · · · f(yn), y1 ≤ y2 ≤ · · · ≤ yn 0, otherwise. The factor of n! appears because this is the number of x1, . . . , xn that could give rise to a specific y1, . . . , yn. 21.2 Minimum of exponentials is exponential Example 21.2. Suppose X1 and X2 are independent exponentially distributed r.vs, with parameters λ and µ. Let Y1 = min(X1, X2). Then P(Y1 ≥ t) = P(X1 ≥ t)P(X2 ≥ t) = e−λt e−µt = e−(λ+µ)t and so Y is exponentially distributed with parameter λ + µ. Example 21.3. Suppose X1, . . . , Xn are i.i.d. r.vs exponentially distributed with pa- rameter λ. Let Y1, . . . , Yn be the order statistics of X1, . . . , Xn, and Z1 = Y1 Z2 = Y2 − Y1 . . . Zn = Yn − Yn−1 82
  • 152. To find the distribution of the Zi we start by writing Z = AY , where A =         1 0 0 . . . 0 0 −1 1 0 . . . 0 0 0 −1 1 . . . 0 0 . . . . . . . . . . . . . . . . . . 0 0 . . . . . . −1 1         . Since det(A) = 1 we have h(z1, . . . , zn) = g(y1, . . . , yn) = n!f(y1) · · · f(yn) = n!λn e−λy1 . . . e−λyn = n!λn e−λ(y1+···+yn) = n!λn e−λ(nz1+(n−1)z2+···+zn) = Qn i=1(λi)e−(λi)zn+1−i . Thus h(z1, . . . , zn) is expressed as the product of n density functions and Zi ∼ exponential((n + 1 − i)λ), with Z1, . . . , Zn being independent. This should also be intuitively obvious. Think of 0, Y1, Y2, . . . , Yn as increasing times, separated by Z1, Z2, . . . , Zn. Clearly, Z1 = Y1 ∼ E (nλ) (since Y1 is the minimum of n i.i.d. exponential r.vs). Then, by the memoryless property of exponential r.vs, things continue after Y1 in the same way, but with only n−1 i.i.d. exponential r.vs remaining. 21.3 Moment generating functions The moment generating function (m.g.f.) of a random variable X is defined by m(θ) = E eθX for those θ such that m(θ) is finite. It is computed as m(θ) = Z ∞ −∞ eθx f(x) dx where f(x) is the p.d.f. of X. The m.g.f. is defined for any type of r.v. but is most commonly useful for continuous r.vs, whereas the p.g.f. is most commonly used for discrete r.vs. We will use the following theorem without proof. 83
Theorem 21.4. The moment generating function determines the distribution of $X$, provided $m(\theta)$ is finite for all $\theta$ in some interval containing the origin.

$E[X^r]$ is called the “$r$th moment of $X$”.

Theorem 21.5. The $r$th moment of $X$ is the coefficient of $\theta^r/r!$ in the power series expansion of $m(\theta)$; equivalently, it is the $r$th derivative evaluated at $\theta=0$, i.e. $m^{(r)}(0)$.

Sketch of proof.
\[ e^{\theta X}=1+\theta X+\tfrac{1}{2!}\theta^2X^2+\cdots\ \implies\ E\,e^{\theta X}=1+\theta E[X]+\tfrac{1}{2!}\theta^2E[X^2]+\cdots. \]

Example 21.6. Let $X$ be exponentially distributed with parameter $\lambda$. Its m.g.f. is
\[ E\,e^{\theta X}=\int_0^\infty e^{\theta x}\lambda e^{-\lambda x}\,dx=\lambda\int_0^\infty e^{-(\lambda-\theta)x}\,dx=\frac{\lambda}{\lambda-\theta},\quad\text{for }\theta<\lambda. \]
The first two moments are
\[ E[X]=m'(0)=\frac{\lambda}{(\lambda-\theta)^2}\bigg|_{\theta=0}=\frac{1}{\lambda},\qquad E[X^2]=m''(0)=\frac{2\lambda}{(\lambda-\theta)^3}\bigg|_{\theta=0}=\frac{2}{\lambda^2}. \]
Thus
\[ \operatorname{Var}X=E[X^2]-E[X]^2=\frac{2}{\lambda^2}-\frac{1}{\lambda^2}=\frac{1}{\lambda^2}. \]

Theorem 21.7. If $X$ and $Y$ are independent random variables with moment generating functions $m_X(\theta)$ and $m_Y(\theta)$ then $X+Y$ has the moment generating function
\[ m_{X+Y}(\theta)=m_X(\theta)\cdot m_Y(\theta). \]
Proof.
\[ E\,e^{\theta(X+Y)}=E\,e^{\theta X}e^{\theta Y}=E\,e^{\theta X}\,E\,e^{\theta Y}=m_X(\theta)m_Y(\theta). \]

21.4 Gamma distribution

Example 21.8 [gamma distribution]. Suppose $X_1,\dots,X_n$ are i.i.d. r.vs each exponentially distributed with parameter $\lambda$. Let $S_n=X_1+\cdots+X_n$. The m.g.f. of $S_n$ is
\[ E\big[e^{\theta(X_1+\cdots+X_n)}\big]=E\,e^{\theta X_1}\cdots E\,e^{\theta X_n}=\big(E\,e^{\theta X_1}\big)^n=\left(\frac{\lambda}{\lambda-\theta}\right)^n. \]

84
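Not part of the notes: a short simulation sketch suggesting that the sum of $n$ i.i.d. exponentials matches the $\Gamma(n,\lambda)$ distribution defined on the next page. It assumes numpy and scipy are available, and the parameter choices are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, lam = 4, 1.5

# Sum of n i.i.d. exponential(lam) r.vs should be Gamma(n, lam),
# i.e. shape n and scale 1/lam in scipy's parametrisation.
s = rng.exponential(scale=1 / lam, size=(200_000, n)).sum(axis=1)

print("sample mean", s.mean(), "vs n/lam   =", n / lam)
print("sample var ", s.var(), "vs n/lam^2 =", n / lam**2)

# Kolmogorov-Smirnov distance to the Gamma c.d.f. (should be tiny).
print(stats.kstest(s, stats.gamma(a=n, scale=1 / lam).cdf))
```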
The gamma distribution, denoted $\Gamma(n,\lambda)$, with parameters $n\in\mathbb{Z}^+$ and $\lambda>0$, has density
\[ f(x)=\frac{\lambda^n x^{n-1}e^{-\lambda x}}{(n-1)!},\quad 0\le x<\infty. \]
We can check that this is a density, by using integration by parts to show that $\int_0^\infty f(x)\,dx=1$.

Suppose that $Y\sim\Gamma(n,\lambda)$. Its m.g.f. is
\[ E\,e^{\theta Y}=\int_0^\infty e^{\theta x}\frac{\lambda^n x^{n-1}e^{-\lambda x}}{(n-1)!}\,dx=\left(\frac{\lambda}{\lambda-\theta}\right)^n\int_0^\infty\frac{(\lambda-\theta)^n x^{n-1}e^{-(\lambda-\theta)x}}{(n-1)!}\,dx=\left(\frac{\lambda}{\lambda-\theta}\right)^n, \]
since the final integral evaluates to 1 (the integrand being the p.d.f. of $\Gamma(n,\lambda-\theta)$). We can conclude that $S_n\sim\Gamma(n,\lambda)$, since the moment generating function characterizes the distribution.

The gamma distribution $\Gamma(\alpha,\lambda)$ is also defined for any $\alpha,\lambda>0$. The denominator of $(n-1)!$ in the p.d.f. is replaced with the gamma function, $\Gamma(\alpha)=\int_0^\infty x^{\alpha-1}e^{-x}\,dx$. The case in which $\alpha$ is a positive integer is also called the Erlang distribution.

21.5 Beta distribution

Suppose that $X_1,\dots,X_n$ are i.i.d. $U[0,1]$. Let $Y_1\le Y_2\le\cdots\le Y_n$ be the order statistics. The p.d.f. of $Y_i$ is
\[ f(y)=\frac{n!}{(i-1)!\,(n-i)!}\,y^{i-1}(1-y)^{n-i},\quad 0\le y\le 1. \]
Do you see why? Notice that the leading factor is the multinomial coefficient $\binom{n}{i-1,\,1,\,n-i}$. This is the beta distribution, denoted $Y_i\sim\text{Beta}(i,n-i+1)$, with mean $i/(n+1)$.

More generally, for $a,b>0$, $\text{Beta}(a,b)$ has p.d.f.
\[ f(x;a,b)=\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\,x^{a-1}(1-x)^{b-1},\quad 0\le x\le 1. \]
The beta distribution is used by actuaries to model the loss of an insurance risk. There is some further discussion in Appendix D.

85
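Here is a small check (mine, not the notes') that the $i$th order statistic of $n$ uniforms is $\text{Beta}(i,n-i+1)$, comparing sample mean and variance with the theoretical values; $n$, $i$ and the seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n, i = 10, 3          # look at the 3rd smallest of 10 uniforms

u = rng.uniform(size=(100_000, n))
y_i = np.sort(u, axis=1)[:, i - 1]

# Beta(i, n-i+1) has mean i/(n+1) and variance i(n-i+1)/((n+1)^2 (n+2)).
mean_th = i / (n + 1)
var_th = i * (n - i + 1) / ((n + 1) ** 2 * (n + 2))
print("mean:", y_i.mean(), "vs", mean_th)
print("var :", y_i.var(), "vs", var_th)
```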
22 Multivariate normal distribution

Moment generating function of a normal random variable. Sums and linear transformations of a normal random variable. Bounds on tail probability of a normal distribution. Multivariate and bivariate normal distributions. Multivariate moment generating functions.

22.1 Moment generating function of normal distribution

Example 22.1. The moment generating function of a normally distributed random variable, $X\sim N(\mu,\sigma^2)$, is found as follows.
\[ E\,e^{\theta X}=\int_{-\infty}^{\infty}e^{\theta x}\frac{1}{\sqrt{2\pi}\,\sigma}e^{-\frac{1}{2\sigma^2}(x-\mu)^2}\,dx. \]
Substitute $z=(x-\mu)/\sigma$ to obtain
\[ E\,e^{\theta X}=\int_{-\infty}^{\infty}e^{\theta(\mu+\sigma z)}\frac{1}{\sqrt{2\pi}}e^{-\frac12 z^2}\,dz=e^{\theta\mu+\frac12\theta^2\sigma^2}\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac12(z-\theta\sigma)^2}\,dz=e^{\mu\theta+\frac12\theta^2\sigma^2}. \]
The final integral equals 1, as the integrand is the density of $N(\theta\sigma,1)$.

22.2 Functions of normal random variables

Theorem 22.2. Suppose $X,Y$ are independent r.vs, with $X\sim N(\mu_1,\sigma_1^2)$ and $Y\sim N(\mu_2,\sigma_2^2)$. Then
1. $X+Y\sim N(\mu_1+\mu_2,\,\sigma_1^2+\sigma_2^2)$,
2. $aX\sim N(a\mu_1,\,a^2\sigma_1^2)$.

Proof. 1.
\[ E\big[e^{\theta(X+Y)}\big]=E\,e^{\theta X}\,E\,e^{\theta Y}=e^{\mu_1\theta+\frac12\sigma_1^2\theta^2}e^{\mu_2\theta+\frac12\sigma_2^2\theta^2}=\exp\big((\mu_1+\mu_2)\theta+\tfrac12(\sigma_1^2+\sigma_2^2)\theta^2\big), \]
which is the moment generating function for $N(\mu_1+\mu_2,\sigma_1^2+\sigma_2^2)$.

86
2.
\[ E\big[e^{\theta(aX)}\big]=E\big[e^{(\theta a)X}\big]=e^{\mu_1(\theta a)+\frac12\sigma_1^2(\theta a)^2}=\exp\big((a\mu_1)\theta+\tfrac12 a^2\sigma_1^2\theta^2\big), \]
which is the moment generating function of $N(a\mu_1,a^2\sigma_1^2)$.

22.3 Bounds on tail probability of a normal distribution

The tail probabilities of a normal distribution are often important quantities to evaluate or bound. We can bound the probability in the tail as follows. Suppose $X\sim N(0,1)$, with density function $\phi(x)=e^{-x^2/2}/\sqrt{2\pi}$. Then for $x>0$,
\[ P(X>x)=1-\Phi(x)<\int_x^\infty\left(1+\frac{1}{t^2}\right)\phi(t)\,dt=\frac{1}{x}\phi(x). \]
For example, $1-\Phi(3)=0.00135$ and the bound above is $0.00148$. By similar means one can show a lower bound, and hence $\log(1-\Phi(x))\sim-\frac12 x^2$.

22.4 Multivariate normal distribution

Let $X_1,\dots,X_n$ be i.i.d. $N(0,1)$ random variables with joint density $g(x_1,\dots,x_n)$:
\[ g(x_1,\dots,x_n)=\prod_{i=1}^n\frac{1}{\sqrt{2\pi}}e^{-\frac12 x_i^2}=\frac{1}{(2\pi)^{n/2}}e^{-\frac12\sum_{i=1}^n x_i^2}=\frac{1}{(2\pi)^{n/2}}e^{-\frac12 x^Tx}. \]
Here $x^T=(x_1,\dots,x_n)$ is a row vector. Write
\[ X=\begin{pmatrix}X_1\\X_2\\\vdots\\X_n\end{pmatrix} \]
and consider the vector r.v. $Z=\mu+AX$, where $A$ is an invertible $n\times n$ matrix, so $X=A^{-1}(Z-\mu)$. Here $Z$ and $\mu$ are column vectors of $n$ components.

87
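The numerical values quoted for the tail bound at $x=3$ are easy to reproduce. A minimal sketch (not from the notes), assuming scipy is available; the bound printed is $\phi(x)/x$.

```python
from math import sqrt, pi, exp
from scipy.stats import norm

for x in (1.0, 2.0, 3.0):
    tail = norm.sf(x)                              # 1 - Phi(x)
    bound = exp(-x**2 / 2) / (sqrt(2 * pi) * x)    # phi(x) / x
    print(f"x = {x}: 1 - Phi(x) = {tail:.5f}, bound phi(x)/x = {bound:.5f}")
```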
The density of $Z$ is
\[ f(z_1,\dots,z_n)=\frac{1}{(2\pi)^{n/2}}\frac{1}{|\det A|}\,e^{-\frac12\big(A^{-1}(z-\mu)\big)^T\big(A^{-1}(z-\mu)\big)}=\frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}}\,e^{-\frac12(z-\mu)^T\Sigma^{-1}(z-\mu)}, \]
where $\Sigma=AA^T$. This is the multivariate normal density (MVN), written as $Z\sim MVN(\mu,\Sigma)$ (or just $N(\mu,\Sigma)$).

The vector $\mu$ is $EZ$. The covariance, $\operatorname{Cov}(Z_i,Z_j)=E[(Z_i-\mu_i)(Z_j-\mu_j)]$, is the $(i,j)$ entry of
\[ E\big[(Z-\mu)(Z-\mu)^T\big]=E\big[(AX)(AX)^T\big]=A\,E[XX^T]A^T=AIA^T=AA^T=\Sigma \]
(the covariance matrix).

If the covariance matrix of the MVN distribution is diagonal, then the components of the random vector $Z$ are independent, since
\[ f(z_1,\dots,z_n)=\prod_{i=1}^n\frac{1}{(2\pi)^{1/2}\sigma_i}\exp\left(-\frac12\left(\frac{z_i-\mu_i}{\sigma_i}\right)^2\right),\quad\text{where }\Sigma=\begin{pmatrix}\sigma_1^2 & 0 & \cdots & 0\\ 0 & \sigma_2^2 & \cdots & 0\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \cdots & \sigma_n^2\end{pmatrix}. \]
This is a special property of the MVN. We already know that if the joint distribution of r.vs is not MVN then covariances of 0 do not, in general, imply independence.

22.5 Bivariate normal

The bivariate normal random variable is the multivariate normal with $n=2$, having covariance matrix
\[ \Sigma=\begin{pmatrix}\sigma_1^2 & \rho\sigma_1\sigma_2\\ \rho\sigma_1\sigma_2 & \sigma_2^2\end{pmatrix}. \]
Then $E[X_i]=\mu_i$ and $\operatorname{Var}(X_i)=\sigma_i^2$, $\operatorname{Cov}(X_1,X_2)=\sigma_1\sigma_2\rho$, and
\[ \operatorname{Corr}(X_1,X_2)=\frac{\operatorname{Cov}(X_1,X_2)}{\sigma_1\sigma_2}=\rho,\qquad \Sigma^{-1}=\frac{1}{1-\rho^2}\begin{pmatrix}\sigma_1^{-2} & -\rho\sigma_1^{-1}\sigma_2^{-1}\\ -\rho\sigma_1^{-1}\sigma_2^{-1} & \sigma_2^{-2}\end{pmatrix}. \]

88
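Not in the original notes: the construction $Z=\mu+AX$ with $AA^T=\Sigma$ is also how one samples from an MVN in practice. Below is a minimal sketch using a Cholesky factor of an arbitrary example $\Sigma$; numpy is assumed available.

```python
import numpy as np

rng = np.random.default_rng(4)

mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

# Z = mu + A X with A A^T = Sigma; a Cholesky factor is one such A.
A = np.linalg.cholesky(Sigma)
X = rng.standard_normal((100_000, 2))
Z = mu + X @ A.T

print("sample mean      ", Z.mean(axis=0))
print("sample covariance\n", np.cov(Z, rowvar=False))
```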
[Figure: plots of the joint p.d.fs of $MVN\!\left(\begin{pmatrix}1\\1\end{pmatrix},\begin{pmatrix}1&\rho\\\rho&1\end{pmatrix}\right)$, for $\rho=0$ and $\rho=0.6$.]

The joint distribution is written as
\[ f(x_1,x_2)=\frac{1}{2\pi(1-\rho^2)^{1/2}\sigma_1\sigma_2}\exp\left\{-\frac{1}{2(1-\rho^2)}\left[\left(\frac{x_1-\mu_1}{\sigma_1}\right)^2-2\rho\left(\frac{x_1-\mu_1}{\sigma_1}\right)\left(\frac{x_2-\mu_2}{\sigma_2}\right)+\left(\frac{x_2-\mu_2}{\sigma_2}\right)^2\right]\right\}, \]
where $\sigma_1,\sigma_2>0$ and $-1\le\rho\le 1$.

22.6 Multivariate moment generating function

For random variables $X_1,\dots,X_n$ and real numbers $\theta_1,\dots,\theta_n$ we define
\[ m(\theta)=m(\theta_1,\dots,\theta_n)=E\big[e^{\theta_1X_1+\cdots+\theta_nX_n}\big] \]
to be the joint moment generating function of the random variables. It is only defined for those $\theta$ for which $m(\theta)$ is finite. Its properties are similar to those of the moment generating function of a single random variable.

The joint moment generating function of the bivariate normal is
\[ m(\theta_1,\theta_2)=\exp\big(\theta_1\mu_1+\theta_2\mu_2+\tfrac12(\theta_1^2\sigma_1^2+2\theta_1\theta_2\rho\sigma_1\sigma_2+\theta_2^2\sigma_2^2)\big). \]

89
23 Central limit theorem

Central limit theorem. Sketch of proof. Normal approximation to the binomial. Opinion polls. Buffon's needle.

23.1 Central limit theorem

Suppose $X_1,\dots,X_n$ are i.i.d. r.vs, mean $\mu$ and variance $\sigma^2$. Let $S_n=X_1+\cdots+X_n$. We know that
\[ \operatorname{Var}(S_n/\sqrt{n})=\operatorname{Var}\left(\frac{S_n-n\mu}{\sqrt{n}}\right)=\sigma^2. \]

Theorem 23.1. Let $X_1,\dots,X_n$ be i.i.d. r.vs with $E[X_i]=\mu$ and $\operatorname{Var}X_i=\sigma^2<\infty$. Define $S_n=X_1+\cdots+X_n$. Then for all $(a,b)$ such that $-\infty<a\le b<\infty$,
\[ \lim_{n\to\infty}P\left(a\le\frac{S_n-n\mu}{\sigma\sqrt{n}}\le b\right)=\int_a^b\frac{1}{\sqrt{2\pi}}e^{-\frac12 z^2}\,dz, \]
where the integrand is the p.d.f. of a $N(0,1)$ random variable. We write
\[ \frac{S_n-n\mu}{\sigma\sqrt{n}}\to_D N(0,1), \]
which is read as ‘tends in distribution to’.

[Figure: probability density functions of $(S_n-n)/\sqrt{n}$, $n=1,2,5,10,20$, when $X_1,X_2,\dots,X_n$ are i.i.d. exponentially distributed with $\mu=\sigma^2=1$.]

The proof uses the so-called continuity theorem, which we quote without proof.

Theorem 23.2 (continuity theorem). If random variables $X_1,X_2,\dots$ have moment generating functions $m_i(\theta)$, $i=1,2,\dots$, and $m_i(\theta)\to m(\theta)$ as $i\to\infty$, pointwise for every $\theta$, then $X_i$ tends in distribution to the random variable having m.g.f. $m(\theta)$.

Sketch proof of Central Limit Theorem. Without loss of generality, take $\mu=0$ and $\sigma^2=1$ (since we can replace $X_i$ by $(X_i-\mu)/\sigma$). The m.g.f. of $X_i$ is
\[ m_{X_i}(\theta)=E\,e^{\theta X_i}=1+\theta E[X_i]+\tfrac{1}{2!}\theta^2E[X_i^2]+\tfrac{1}{3!}\theta^3E[X_i^3]+\cdots=1+\tfrac{1}{2!}\theta^2+\tfrac{1}{3!}\theta^3E[X_i^3]+\cdots. \]

90
The m.g.f. of $S_n/\sqrt{n}$ is
\[ E\big[e^{\theta S_n/\sqrt{n}}\big]=E\big[e^{\frac{\theta}{\sqrt{n}}(X_1+\cdots+X_n)}\big]=E\big[e^{\frac{\theta}{\sqrt{n}}X_1}\big]\cdots E\big[e^{\frac{\theta}{\sqrt{n}}X_n}\big]=\Big(E\big[e^{\frac{\theta}{\sqrt{n}}X_1}\big]\Big)^n=\big(m_{X_1}(\theta/\sqrt{n})\big)^n \]
\[ =\left(1+\frac12\theta^2\frac1n+\frac{1}{3!}\theta^3\frac{E[X_1^3]}{n^{3/2}}+\cdots\right)^n\ \to\ e^{\frac12\theta^2}\quad\text{as }n\to\infty, \]
which is the m.g.f. of the $N(0,1)$ random variable.

Remark. If the m.g.f. is not defined the proof needs the characteristic function: $E\,e^{i\theta X}$. For example, the Cauchy distribution (defined in §20.3) has no moment generating function, since $E[X^r]$ does not exist for odd-valued $r$. The characteristic function exists and is equal to $e^{-|\theta|}$. However, the CLT does not hold for Cauchy r.vs since the mean is undefined.

23.2 Normal approximation to the binomial

If $S_n\sim B(n,p)$, so that $X_i=1$ and $0$ with probabilities $p$ and $1-p$ respectively, then
\[ \frac{S_n-np}{\sqrt{npq}}\approx N(0,1). \]
This is called the normal approximation to the binomial distribution. It applies as $n\to\infty$ with $p$ constant. Earlier we discussed the Poisson approximation to the binomial, which applies when $n\to\infty$ and $np\to\lambda$.

Example 23.3. Two competing airplanes fly a route. Each of $n$ passengers selects one of the 2 planes at random. The number of passengers in plane 1 is $S\sim B(n,1/2)$. Suppose each plane has $s$ seats and let
\[ f(s)=P(S>s)=P\left(\frac{S-\frac12 n}{\frac12\sqrt{n}}>\frac{s-\frac12 n}{\frac12\sqrt{n}}\right). \]

91
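A quick check (not from the notes) of the normal approximation against the exact binomial tail, using the numbers of Example 23.3; scipy is assumed available.

```python
from math import sqrt
from scipy import stats

n, p = 1000, 0.5
q = 1 - p
s = 537

exact = stats.binom.sf(s, n, p)                        # P(S > s), exact
approx = stats.norm.sf((s - n * p) / sqrt(n * p * q))  # normal approximation

print(f"exact P(S > {s}) = {exact:.5f}, normal approx = {approx:.5f}")
```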
Then
\[ \frac{S-np}{\sqrt{npq}}\approx N(0,1)\ \implies\ f(s)\approx 1-\Phi\left(\frac{2s-n}{\sqrt{n}}\right). \]
So if $n=1000$ and $s=537$ then $(2s-n)/\sqrt{n}=2.34$. So $\Phi(2.34)\approx 0.99$, and $f(s)\approx 0.01$. Between them the planes need hold only 1074 seats, just 74 in excess of the number of passengers.

Example 23.4. An unknown fraction of the electorate, $p$, vote Labour. It is desired to find $p$ within an error not exceeding 0.005. How large should the sample be?

Let the fraction of Labour votes in the sample be $p'=S_n/n$. We can never be certain (without complete enumeration) that $|p'-p|\le 0.005$. Instead choose $n$ so that the event $|p'-p|\le 0.005$ has probability $\ge 0.95$.
\[ P(|p'-p|\le 0.005)=P(|S_n-np|\le 0.005n)=P\left(\frac{|S_n-np|}{\sqrt{npq}}\le\frac{0.005\sqrt{n}}{\sqrt{pq}}\right). \]
Choose $n$ such that this probability is $\ge 0.95$. It often helps to make a sketch to see what is required:

[Sketch of the $N(0,1)$ density, shaded between $-1.96$ and $1.96$, illustrating that $\int_{-1.96}^{1.96}\frac{1}{\sqrt{2\pi}}e^{-\frac12 x^2}\,dx=0.95$.]

We must choose $n$ so that
\[ \frac{0.005\sqrt{n}}{\sqrt{pq}}\ge\Phi^{-1}(0.975)=1.96. \]
But we don't know $p$. However, $pq\le\frac14$, with the worst case $p=q=\frac12$. So we need
\[ n\ge\frac{1.96^2}{0.005^2}\cdot\frac14\approx 38{,}416. \]
If we replace 0.005 by 0.03 then $n\approx 1{,}068$ will be sufficient. This is typical of the sample size used in commercial and newspaper opinion polls. Notice that the answer does not depend upon the total population.

92
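The sample-size calculation can be packaged as a tiny helper. This is my own illustrative sketch (the function name and defaults are not from the notes); scipy is assumed available.

```python
from scipy.stats import norm

def sample_size(margin, confidence=0.95, p=0.5):
    """Smallest n with P(|p' - p| <= margin) >= confidence, worst case p = 1/2."""
    z = norm.ppf(0.5 + confidence / 2)   # e.g. 1.96 for 95%
    return (z / margin) ** 2 * p * (1 - p)

print(sample_size(0.005))   # ~38,416
print(sample_size(0.03))    # ~1,068
```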
23.3 Estimating π with Buffon's needle

Example 23.5. A needle of length $\ell$ is tossed at random onto a floor marked with parallel lines a distance $L$ apart, where $\ell\le L$. Recall from Example 18.5 that
\[ p=P(\text{the needle intersects one of the parallel lines})=2\ell/(\pi L). \]
Suppose we drop the needle $n$ times. The number of times that it hits a line will be $N\sim B(n,p)$, which is approximately $N(np,np(1-p))$. We estimate $p$ by $\hat p=N/n$, which is approximately $N(p,p(1-p)/n)$, and π by
\[ \hat\pi=\frac{2\ell}{(N/n)L}=\pi\,\frac{2\ell/(\pi L)}{E(N/n)+(N/n-E(N/n))}=\pi\,\frac{p}{p+(\hat p-p)}=\pi\left(1-\frac{\hat p-p}{p}+\cdots\right). \]
So $\hat\pi-\pi$ is approximately distributed as $N\big(0,\pi^2p(1-p)/(np^2)\big)$. The variance is minimized by taking $p$ as large as possible, i.e. $\ell=L$. Then
\[ \hat\pi-\pi\sim N\left(0,\frac{(\pi-2)\pi^2}{2n}\right),\qquad P(|\hat\pi-\pi|<0.001)\ge 0.95\iff 0.001\sqrt{\frac{2n}{(\pi-2)\pi^2}}\ge\Phi^{-1}(0.975)=1.96. \]
This requires $n\ge 2.16\times 10^7$.

Example 23.6 [Buffon's noodle]. Here is another way to show that $p=2\ell/(\pi L)$. Notice that a circle of diameter $L$, no matter where it is placed, intersects the parallel lines exactly twice. Approximate this circle by the boundary of a $k$-sided regular polygon made up of $k=\pi L/\delta$ rice grains, each of tiny length $\delta$. The probability that a rice grain intersects a line is thus approximately $2/k=2\delta/(\pi L)$. (For a rigorous proof, one could use two polygons that are inscribed and superscribed to the circle.) A pin of length $\ell$ is like $\ell/\delta$ rice grains laid end to end, and so the expected number of times such a pin intersects the lines is $(\ell/\delta)\times 2\delta/(\pi L)=2\ell/(\pi L)$. At most one rice grain intersects a line, so this must be $p$, the probability the pin intersects the lines.

This also shows that the expected number of times that a “noodle” of length $\ell$ crosses the parallel lines is $p$, irrespective of the shape of the noodle. So we might also toss a flexible wet noodle onto the lines, counting the number of crossings $N$ obtained by tossing it $n$ times. Again it is the case that $\hat\pi=2\ell n/(NL)\to\pi$.

93
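Not part of the notes: a minimal Monte Carlo sketch of Buffon's needle with $\ell=L$, using the standard centre-and-angle parametrisation; the seed and drop counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

def buffon_pi(n, ell=1.0, L=1.0):
    """Monte Carlo estimate of pi from n needle drops (requires ell <= L)."""
    x = rng.uniform(0, L / 2, size=n)          # distance of centre to nearest line
    theta = rng.uniform(0, np.pi / 2, size=n)  # acute angle with the lines
    hits = np.count_nonzero(x <= (ell / 2) * np.sin(theta))
    return 2 * ell * n / (hits * L)

for n in (10_000, 1_000_000):
    print(n, buffon_pi(n))
```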
24 Continuing studies in probability

*Large deviations and Chernoff bound*. *Random matrices and Wigner's semicircle law*. I'll perhaps talk about courses in IB. Concluding remarks and wrap up.

24.1 Large deviations

Example 24.1 [Gambler's success]. John plays roulette at Las Vegas, betting £1 on red at each turn, which is then doubled, with probability $p<1/2$, or lost, with probability $q=1-p$. The wheel has 38 slots: 18 red, 18 black, 0 and 00; so $p=18/38$. He tries to increase his wealth by £100. This is very unlikely, since by (15.1) the probability he should ever be up by 100 is $(p/q)^{100}=(9/10)^{100}=0.0000265$. However, suppose this ‘large deviation’ occurs and after $n$ games he is up by £100. What can we say about $n$, and about the path followed by John's wealth?

Preliminaries. Let $S_n=X_1+\cdots+X_n$ be the number of games he wins in the first $n$, where $X_1,\dots,X_n$ are i.i.d. $B(1,p)$. His wealth is $W_n=2S_n-n$. Let $\mu=EX_1=p$ and $\sigma^2=\operatorname{Var}(X_i)=pq$. Note that $EW_n=-n/19<0$. To be up by £100 after $n$ games he must win $n/2+50$ games.

How large is $P(S_n>na)$ for $a>p$? Using the Chebyshev bound,
\[ P(S_n>na)=P\big(S_n-n\mu\ge n(a-\mu)\big)\le\frac{\operatorname{Var}(S_n)}{n^2(a-\mu)^2}=\frac{\sigma^2}{n(a-\mu)^2}. \]
Alternatively, by the Central limit theorem,
\[ P(S_n>na)=P\left(\frac{S_n-n\mu}{\sqrt{n}\,\sigma}>\frac{(a-\mu)\sqrt{n}}{\sigma}\right)\approx 1-\Phi\left(\frac{(a-\mu)\sqrt{n}}{\sigma}\right). \]
Both show that $P(S_n>na)\to 0$ as $n\to\infty$.

24.2 Chernoff bound

Another bound is as follows. Let $m(\theta)=E\,e^{\theta X_1}$ be the m.g.f. of $X_1$. For $\theta>0$,
\[ P(S_n>na)=P\big(e^{\theta S_n}>e^{\theta na}\big)\le\frac{E[e^{\theta S_n}]}{e^{\theta na}}\ \text{(by the Markov inequality)}\ =\left(\frac{m(\theta)}{e^{\theta a}}\right)^n=e^{-n[\theta a-\log m(\theta)]}. \]

94
Now minimize the right-hand side over $\theta$ to get the best bound. This implies
\[ P(S_n>na)\le e^{-nI(a)}\quad\text{(the Chernoff bound)},\tag{24.1} \]
where $I(a)=\max_{\theta>0}[\theta a-\log m(\theta)]$. This bound is tight, in the sense that one can also prove that, given any $\delta>0$,
\[ P(S_n>na)\ge e^{-n(I(a)+\delta)},\tag{24.2} \]
for all sufficiently large $n$. It follows from (24.1)–(24.2) that $\log P(S_n>an)\sim-nI(a)$. As usual, $\sim$ means that the quotient of the two sides tends to 1 as $n\to\infty$.

This holds for random variables more generally. For example, if $X_i\sim N(0,1)$ then $m(\theta)=e^{\frac12\theta^2}$ and $I(a)=\max_\theta[\theta a-\frac12\theta^2]=\frac12 a^2$. So $\log P(S_n>an)\sim-\frac12 na^2$.

For $B(1,p)$ the m.g.f. is $m(\theta)=q+pe^\theta$ and
\[ I(a)=\max_\theta[\theta a-\log m(\theta)]=(1-a)\log\frac{1-a}{1-p}+a\log\frac{a}{p}. \]
The function $I$ is convex in $a$, with its minimum being $I(p)=0$.

We can also verify the lower bound (24.2). Let $j_n=\lceil na\rceil$. Then
\[ P(S_n>na)=\sum_{i\ge j_n}\binom{n}{i}p^i(1-p)^{n-i}>\binom{n}{j_n}p^{j_n}(1-p)^{n-j_n}. \]
By applying Stirling's formula on the right-hand side we may find $\lim_{n\to\infty}(1/n)\log P(S_n>na)\ge-I(a)$. Hence $\log P(S_n>an)\sim-nI(a)$.

Most likely way to £100. Consider the path on which John's wealth increases to 100. We can argue that this is most likely to look like a straight line. For instance, suppose $S_n$ increases at rate $a_1$ for $n_1$ bets, and then at rate $a_2$ for $n_2$ bets, where $n_1+n_2=n$ and $2(n_1a_1+n_2a_2)-n=100$. The log-probability of this is about $-n_1I(a_1)-n_2I(a_2)$, which is maximized by $a_1=a_2$, since $I$ is a convex function. So the most likely route to 100 is over $n$ bets, with $S_n$ increasing at a constant rate $a$, and such that $2na-n=100$. Subject to these constraints, $\log P(S_n>an)\approx-nI(a)$ is maximized by $n=100/(1-2p)$, $a=1-p$.

This means it is highly likely that $n\approx 100/(1-2\times(18/38))=1900$. Interestingly, this is the same as the number of games over which his expected loss would be £100. This is an example from the theory of large deviations. Informally, we might say that if a rare event does occur then it does so in whatever manner is most likely.

95
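A small numerical illustration (mine, not from the notes) of the Chernoff bound for Bernoulli sums, comparing $e^{-nI(a)}$ with the exact binomial tail; the values of $a$ and $n$ are arbitrary, and scipy is assumed available.

```python
import numpy as np
from scipy import stats

p, a, n = 18 / 38, 0.55, 1000
q = 1 - p

# Rate function for B(1, p): I(a) = a log(a/p) + (1-a) log((1-a)/(1-p)).
I = a * np.log(a / p) + (1 - a) * np.log((1 - a) / q)

exact = stats.binom.sf(int(np.ceil(n * a)) - 1, n, p)   # P(S_n >= na)
chernoff = np.exp(-n * I)

print(f"exact  P(S_n >= {n*a:.0f}) = {exact:.3e}")
print(f"Chernoff bound e^(-n I(a)) = {chernoff:.3e}")
print(f"(1/n) log exact = {np.log(exact)/n:.4f},  -I(a) = {-I:.4f}")
```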
24.3 Random matrices

Random matrices arise in many places: as the adjacency matrix of a random graph, as the sample correlation matrix of a random sample of multivariate data, and in quantum physics, numerical analysis and number theory.

Consider a symmetric $n\times n$ matrix $A$, constructed by setting diagonal elements 0, and independently choosing each off-diagonal $a_{ij}=a_{ji}$ as 1 or $-1$ by tossing a fair coin. Here is a random $10\times 10$ symmetric matrix of this type, having eigenvalues $-4.515$, $-4.264$, $-2.667$, $-1.345$, $-0.7234$, $1.169$, $2.162$, $2.626$, $3.279$, $4.277$:
\[ \begin{pmatrix}
0 & -1 & -1 & -1 & 1 & 1 & 1 & 1 & -1 & -1\\
-1 & 0 & 1 & 1 & 1 & 1 & -1 & -1 & 1 & -1\\
-1 & 1 & 0 & -1 & -1 & -1 & -1 & -1 & -1 & 1\\
-1 & 1 & -1 & 0 & -1 & 1 & 1 & -1 & -1 & -1\\
1 & 1 & -1 & -1 & 0 & -1 & -1 & -1 & -1 & -1\\
1 & 1 & -1 & 1 & -1 & 0 & -1 & -1 & 1 & 1\\
1 & -1 & -1 & 1 & -1 & -1 & 0 & 1 & -1 & 1\\
1 & -1 & -1 & -1 & -1 & -1 & 1 & 0 & 1 & -1\\
-1 & 1 & -1 & -1 & -1 & 1 & -1 & 1 & 0 & -1\\
-1 & -1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 & 0
\end{pmatrix} \]

Recall that the eigenvalues of a symmetric real matrix are real. Let $\Lambda$ be a randomly chosen eigenvalue of a random $A$. What can we say about $\Lambda$? Since $A$ and $-A$ are equally likely, $E\Lambda^k=0$ if $k$ is odd.

Consider $k=4$. Suppose the eigenvalues of $A$ are $\lambda_1,\dots,\lambda_n$. Then
\[ E[\Lambda^4]=\tfrac1n E[\lambda_1^4+\cdots+\lambda_n^4]=\tfrac1n E[\operatorname{Tr}(A^4)]. \]
Now
\[ E[\operatorname{Tr}(A^4)]=E\sum_{i_1,i_2,i_3,i_4}a_{i_1i_2}a_{i_2i_3}a_{i_3i_4}a_{i_4i_1}=\sum_{i_1,i_2,i_3,i_4}E[a_{i_1i_2}a_{i_2i_3}a_{i_3i_4}a_{i_4i_1}],\tag{24.3} \]
where the sum is taken over all possible paths of length 4 through a subset of the $n$ indices: $i_1\to i_2\to i_3\to i_4\to i_1$. A term in (24.3), $E[a_{i_1i_2}a_{i_2i_3}a_{i_3i_4}a_{i_4i_1}]$, is either 1 or 0. It is 1 iff for each two indices $i,j$ the total number of $a_{ij}$s and $a_{ji}$s contained in $\{a_{i_1i_2},a_{i_2i_3},a_{i_3i_4},a_{i_4i_1}\}$ is even. Let $i,j,k$ range over triples of distinct indices. $E[a_{i_1i_2}a_{i_2i_3}a_{i_3i_4}a_{i_4i_1}]=1$ for

• $n(n-1)(n-2)$ terms of the form $E[a_{ij}a_{ji}a_{ik}a_{ki}]$;
• $n(n-1)(n-2)$ terms of the form $E[a_{ij}a_{jk}a_{kj}a_{ji}]$;
• $n(n-1)$ terms of the form $E[a_{ij}a_{ji}a_{ij}a_{ji}]$.

Thus
\[ E\big[\Lambda^4/n^{4/2}\big]=n^{-\frac42-1}E[\operatorname{Tr}(A^4)]=n^{-3}\big[2n(n-1)(n-2)+n(n-1)\big]\to 2\quad\text{as }n\to\infty. \]
The limit 2 is $C_2$, the number of Dyck words of length 4. These words are ()() and (()), which correspond to the patterns of the first two bullet points above.

96
This argument easily generalizes to any even $k$, to show $\lim_{n\to\infty}E[(\Lambda/n^{1/2})^k]=C_{k/2}$, a Catalan number, and the number of Dyck words of length $k$ (described in §12.2). This begs the question: what random variable $X$ has sequence of moments
\[ \{EX^k\}_{k=1}^\infty=\{0,C_1,0,C_2,0,C_3,\dots\}=\{0,1,0,2,0,5,0,14,\dots\}? \]
It is easy to check that this is true when $X$ has the p.d.f.
\[ f(x)=\frac{1}{2\pi}\sqrt{4-x^2},\quad -2\le x\le 2. \]

[Figure: histogram of 50,000 eigenvalues obtained by randomly generating 500 random $100\times 100$ matrices, with bin sizes of width 1. Consistent with the above analysis, $\lambda/\sqrt{100}$ has empirical density closely matching $f$. Rescaled appropriately, the red semicircle is $g(x)=50000\cdot\tfrac{1}{10}f(\tfrac{x}{10})$, $-20\le x\le 20$.]

This result is Wigner's semicircle law. Notice that our argument does not really need the assumption that the $a_{ij}$ are chosen from the discrete uniform distribution on $\{-1,1\}$. We need only that $E\,a_{ij}^k=0$ for odd $k$ and $E\,a_{ij}^k<\infty$ for even $k$. This means that Wigner's theorem is in the same spirit as the Central limit theorem in Lecture 23, which holds for any random variable with finite first two moments. Wigner's theorem dates from 1955, but the finer analysis of the eigenvalue structure of random matrices interests researchers in the present day.

24.4 Concluding remarks

In §24.1–24.3 we have seen some fruits of research in probability in modern times. In doing so we have touched on many topics covered in our course: Markov and Chebyshev inequalities, moment generating functions, sums of Bernoulli r.vs, Stirling's formula, normal distribution, gambler's ruin, Dyck words, generating functions, and the Central limit theorem.

In Appendix H you can find some notes about further courses in the Mathematical Tripos in which probability features. I'll give a final overview of the course and wrap up.

97
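Wigner's law is easy to reproduce empirically. Here is a minimal sketch (not from the notes) that generates symmetric ±1 matrices and checks the first few moments of the scaled eigenvalues against 0, 1, 0, 2; the matrix size and trial count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
n, trials = 100, 50

eigs = []
for _ in range(trials):
    # Symmetric matrix with zero diagonal and +/-1 off-diagonal entries.
    upper = rng.choice([-1, 1], size=(n, n))
    A = np.triu(upper, k=1)
    A = A + A.T
    eigs.append(np.linalg.eigvalsh(A) / np.sqrt(n))

eigs = np.concatenate(eigs)

# Moments of the scaled eigenvalues vs. the semicircle predictions 0, 1, 0, 2.
for k in (1, 2, 3, 4):
    print(k, round(np.mean(eigs**k), 3))
```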
A Problem solving strategies

Students sometimes say that the theory in Probability IA is all very well, but that the tripos questions require ingenuity. That is sometimes true, of questions like 2004/II/12: “Planet Zog is a sphere with centre O. A number N of spaceships land at random on its surface. . . .” Of course this makes the subject fun. But to help with this, let's compile a collection of some frequently helpful problem solving strategies.

1. You are asked to find $P(A)$. For example, $A$ might be the event that at least two out of $n$ people share the same birthday. Might it be easier to calculate $P(A^c)$? The simple fact that $P(A)=1-P(A^c)$ can sometimes be remarkably useful. For example, in Lecture 1 we calculated the probability that amongst $n$ people no two have the same birthday.

2. You are asked to find $P(A)$. Is there a partition of $A$ into disjoint events, $B_1,B_2,\dots,B_n$, so that $P(A)=\sum_i P(B_i)$? Probabilities of intersections are usually easier to calculate than probabilities of unions. Fortunately, the inclusion-exclusion formula gives us a way to convert between the two.

3. You are asked to find $P(A)$. Can $A$ be written as the union of some other events? Might the inclusion-exclusion formula be helpful?

4. The expectation of a sum of random variables is the sum of their expectations. The random variables need not be independent. This is particularly useful when applied to indicator random variables. You are asked to find $EN$, the expected number of times that something occurs. Can you write $N=I_1+\cdots+I_k$, where this is a sum of indicator variables, each of which is concerned with a distinct way in which $N$ can be incremented? This idea was used in Example 8.3 in lectures, where $N$ was the number of couples seated next to one another. It is the way to do the Planet Zog question, mentioned above. You can use it in Examples sheet 2, #11.

5. You are asked to place a bound on some probability. Can you use one of the inequalities that you know (Boole, Bonferroni, Markov, Chebyshev, Jensen, Cauchy-Schwarz, AM-GM)?

6. You are asked something about sums of independent random variables. Might a probability generating function help? For example, suppose $X$ has the geometric distribution $P(X=r)=(1/2)^{r+1}$, $r=0,1,\dots$. Is it possible for $X$ to have the same distribution as $Y_1+Y_2$, where $Y_1,Y_2$ are some two independent random variables with the same distribution? Hint: what would the p.g.f. of $Y_i$ have to be?

7. This is like 2 above, but with the tower property of conditional expectation. You are asked to find $EX$. Maybe there is some $Y$ so that $E[X]$ is most easily computed as $E\big[E[X\mid Y]\big]=\sum_y E[X\mid Y=y]\,P(Y=y)$.

98
8. Learn well all the distributions that we cover in the course and understand the relations between them. For example, if $X\sim U[0,1]$ then $-\log X\sim\mathcal{E}(1)$. Learn also all special properties: such as the memoryless property of the geometric and exponential distributions. As you approach a question ask yourself: what distribution(s) are involved in this question? Is some special property or relationship useful?

9. You are given the joint density function of continuous random variables $X_1,\dots,X_n$ and want to prove that they are independent. Try to spot, by inspection, how to factor this as $f(x_1,\dots,x_n)=f_1(x_1)\cdots f_n(x_n)$, where each $f_i$ is a p.d.f.

10. In questions about transformation of continuous random variables there are a couple of things to look out for. Always start with a bijection between $R\subseteq\mathbb{R}^n$ and $S\subseteq\mathbb{R}^n$ and make sure that you specify $S$ correctly. If $Y_1,\dots,Y_n$ includes more variables than really interest you, then, having found the joint density of them all, you can always integrate out the superfluous ones. In computing the Jacobian $J$, remember that it is sometimes easier to compute $1/J=\partial(y_1,\dots,y_n)/\partial(x_1,\dots,x_n)$.

99
B Fast Fourier transform and p.g.fs

Although not examinable, a study of how the fast Fourier transform (FFT) can be used to sum random variables provides a good exercise with probability generating functions. This method is used in practice in financial mathematics, such as when calculating the aggregate loss distribution of a portfolio of insurance risks.

Suppose we wish to find the distribution of $Y=X_1+X_2$, where the $X_i$ are i.i.d. and have p.g.f. $p(z)=p_0+p_1z+\cdots+p_{N-1}z^{N-1}$. The p.g.f. of $Y$ is $p_Y(z)=p(z)^2$, which can be found by making $O(N^2)$ multiplications of the form $p_ip_j$. Assuming multiplications take constant time, we say the time-complexity is $O(N^2)$. With the Fast Fourier Transform we can reduce the time-complexity to $O(N\log N)$. The steps are as follows.

(a) Compute $p(z)$ at each $z=\omega^0,\omega^1,\dots,\omega^{2N-1}$, where $\omega=e^{-2\pi i/(2N)}$. This is the discrete Fourier transform (DFT) of the sequence $(p_0,p_1,\dots,p_{N-1})$.

(b) Compute $p_Y(z)=p(z)^2$ at each $z=\omega^0,\omega^1,\dots,\omega^{2N-1}$.

(c) Recover the distribution $(P(Y=y),\ y=0,\dots,2N-2)$. To do this we use an inverse DFT, for which the calculation is almost the same as doing a DFT, as in step (a).

Step (b) takes $2N$ multiplications. Steps (a) and (c) are computed using the fast Fourier transform in $O(2N\log(2N))$ multiplications (of complex numbers).

A feeling for the way in which the FFT simplifies the calculation can be obtained by studying below the case $N=4$, $\omega=e^{-i\pi/2}=-i$. Notice how the 16 multiplications in the left-hand column become many fewer multiplications in the right-hand column. Note also that $(p_0+p_2,\ p_0-p_2)$ is the DFT of $(p_0,p_2)$.

$z$        $p(z)$ (directly)                                              $p(z)$ (rearranged)
$\omega^0$   $p_0\omega^0+p_1\omega^0+p_2\omega^0+p_3\omega^0$              $(p_0+p_2)+(p_1+p_3)$
$\omega^1$   $p_0\omega^0+p_1\omega^1+p_2\omega^2+p_3\omega^3$              $(p_0-p_2)+\omega(p_1-p_3)$
$\omega^2$   $p_0\omega^0+p_1\omega^2+p_2\omega^4+p_3\omega^6$              $(p_0+p_2)-(p_1+p_3)$
$\omega^3$   $p_0\omega^0+p_1\omega^3+p_2\omega^6+p_3\omega^9$              $(p_0-p_2)-\omega(p_1-p_3)$

The key idea is to find DFTs of the sequences $(p_0,p_2,\dots,p_{N-2})$ and $(p_1,p_3,\dots,p_{N-1})$ and then combine them to create the DFT of $(p_0,p_1,p_2,p_3,\dots,p_{N-1})$. Combining takes only $O(N)$ multiplications. Suppose $N$ is a power of 2. We may recursively repeat this trick of division into two half-size problems, until we are making DFTs of sequences of length 2. Because we repeat the trick $\log_2 N$ times, and $N$ multiplications are needed at each stage, the FFT is of time complexity $O(N\log N)$.

100
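Not in the notes: steps (a)–(c) can be carried out directly with numpy's FFT routines. The p.m.f. below is an arbitrary example; the direct convolution is printed only as a check.

```python
import numpy as np

# p.m.f. of a single X on {0, ..., N-1}; an arbitrary example.
p = np.array([0.1, 0.2, 0.3, 0.4])
N = len(p)

# Zero-pad to length 2N, take the DFT, square it, invert: steps (a)-(c).
P = np.fft.fft(p, n=2 * N)
pmf_Y = np.fft.ifft(P**2).real[: 2 * N - 1]   # P(Y = y), y = 0, ..., 2N-2

print(np.round(pmf_Y, 4))
print(np.round(np.convolve(p, p), 4))          # direct O(N^2) check
```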
C The Jacobian

We quoted without proof in §20.1 the fact that the Jacobian gives the right scaling factor to insert at each point when one wishes to compute the integral of a function over a subset of $\mathbb{R}^n$ after a change of variables. We used this to argue that if $X_1,\dots,X_n$ have joint density $f$, then $Y_1,\dots,Y_n$ have joint density $g$, where
\[ g(y_1,\dots,y_n)=f\big(x_1(y_1,\dots,y_n),\dots,x_n(y_1,\dots,y_n)\big)\,|J|, \]
and $J$ (the Jacobian) is the determinant of the matrix whose $(i,j)$th element is $\partial x_i/\partial y_j$. This works because: (i) every differentiable map is locally linear, and (ii) under a linear change of coordinates, such as $y=Ax$, a cube in the $x$-coordinate system becomes a parallelepiped in the $y$-coordinate system. The $n$-volume of a parallelepiped is the determinant of its edge vectors (i.e. the columns of $A$). For a proof of this fact see the nice essay, A short thing about determinants, by Gareth Taylor.

Warning. What follows next is peripheral to Probability IA. It's a digression I set myself to satisfy my curiosity. I know that in IA Vector Calculus you see a proof for change of variables with Jacobian in $\mathbb{R}^2$ and $\mathbb{R}^3$. But so far as I can tell, there is no course in the tripos where this formula is proved for $\mathbb{R}^n$. I have wondered how to make the formula seem more intuitive, and explain why $|J|$ plays the role it does. Here now, for the curious, is a motivating argument in $\mathbb{R}^n$. I use some facts from IA Vectors and Matrices.

Let's start with a 1–1 linear map, $y=r(x)=Ax$, where $A$ is $n\times n$ and invertible. The inverse function is $x=s(y)=A^{-1}y$. Let $Q=A^TA$. This matrix is positive definite and symmetric, and so has positive real eigenvalues.¹ Consider the sphere $S=\{y:y^Ty\le 1\}$. Its preimage is $R=\{x:(Ax)^TAx\le 1\}=\{x:x^TQx\le 1\}$.

Let $e_1,\dots,e_n$ be orthogonal unit-length eigenvectors of $Q$, with corresponding eigenvalues $\lambda_1,\dots,\lambda_n$; the eigenvalues are strictly positive. Then
\[ R=\Big\{x:\ x=\textstyle\sum_i\alpha_ie_i,\ \alpha\in\mathbb{R}^n,\ \sum_i(\alpha_i\sqrt{\lambda_i})^2\le 1\Big\}, \]
which is an ellipsoid in $\mathbb{R}^n$, whose orthogonal axes are in the directions of the $e_i$s. We can view this ellipsoid as having been obtained from a unit sphere by rescaling (i.e. squashing or stretching) by factors $\lambda_1^{-1/2},\dots,\lambda_n^{-1/2}$ in the directions $e_1,\dots,e_n$, respectively. The volume is altered by the product of these factors. The determinant of a matrix is the product of its eigenvalues, so
\[ \frac{\operatorname{vol}(R)}{\operatorname{vol}(S)}=\frac{1}{(\lambda_1\cdots\lambda_n)^{1/2}}=\frac{1}{\sqrt{\det(A^TA)}}=\frac{1}{|\det(A)|}=|\det(A^{-1})|=|J|, \]

¹If you don't already know these facts, they are quickly explained. If $z$ is an eigenvector, with eigenvalue $\lambda$, and $\bar z$ is its complex conjugate, then $Qz=\lambda z$ and $Q\bar z=\bar\lambda\bar z$. So $z^TQ\bar z-\bar z^TQz=0=(\bar\lambda-\lambda)\bar z^Tz$, hence $\lambda$ is real. Also, $z\ne 0\implies Az\ne 0\implies\lambda z^Tz=z^TQz=(Az)^T(Az)>0$, so $\lambda>0$.

101
where
\[ |J|=\left|\frac{\partial(x_1,\dots,x_n)}{\partial(y_1,\dots,y_n)}\right|. \]
So $\operatorname{vol}(R)=|J|\operatorname{vol}(S)$.

Now reimagine $S$, not as a unit sphere, but as a very tiny sphere centred on $\bar y$. The sphere is so small that $f(s(y))\approx f(s(\bar y))$ for $y\in S$. The preimage of $S$ is the tiny ellipsoid $R$, centred on $\bar x=(s_1(\bar y),\dots,s_n(\bar y))$. So
\[ \int_{x\in R}f(x)\,dx_1\dots dx_n\approx f(\bar x)\operatorname{vol}(R)=f\big(s_1(\bar y),\dots,s_n(\bar y)\big)\,|J|\operatorname{vol}(S)\approx\int_{y\in S}f\big(s_1(y),\dots,s_n(y)\big)\,|J|\,dy_1\dots dy_n.\tag{C.1} \]
The above has been argued for the linear map $y=r(x)=Ax$. But locally any differentiable map is nearly linear. Further details are needed to complete a proper proof. We would need to approximate the integrals over some more general regions $R$ and $S$ within $\mathbb{R}^n$ by sums of integrals over tiny spheres and ellipsoids, making appropriate local linearizations of the functions $r$ and $f$, where $r:R\to S$ and $f:R\to\mathbb{R}^+$. But at least after reaching (C.1) we have an intuitive reason for the formula used in §20.1.

102
D Beta distribution

The Beta(a, b) distribution has p.d.f.
\[ f(x;a,b)=\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\,x^{a-1}(1-x)^{b-1},\quad 0\le x\le 1, \]
where $a,b>0$. It is used by actuaries to model the loss of an insurance risk and is a popular choice for the following reasons.

1. The beta distribution has a minimum and a maximum. Reinsurance policies for catastrophic risks are typically written so that if a claim is made then the insurer's exposure will lie between some known minimum and maximum values. Suppose a policy insures against a loss event (an earthquake, say) that occurs with a probability of $p=1\times 10^{-2}$ per year. If the event occurs the claim will cost the insurer between 100 and 200 million pounds. The annual loss could be modelled by the r.v. $(100+2X)Y$, where $Y\sim B(1,p)$ and $X\sim\text{Beta}(a,b)$.

2. The two parameters can be chosen to fit a given mean and variance. The mean is $\frac{a}{a+b}$. The variance is $\frac{ab}{(a+b)^2(a+b+1)}$.

3. The p.d.f. can take many different shapes.

[Figure: p.d.fs of several Beta distributions of different shapes on $0\le x\le 1$.]

However, the sum of beta distributed r.vs is nothing simple. A portfolio might consist of 10,000 risks, each assumed to be beta distributed. To calculate $P\big(\sum_{i=1}^{10000}X_i>t\big)$ one must discretize the distributions and use discrete Fourier transforms, as described in Appendix B.

The moment generating function of the Beta(a, b) distribution is complicated!
\[ m(\theta)=1+\sum_{k=1}^{\infty}\left(\prod_{r=0}^{k-1}\frac{a+r}{a+b+r}\right)\frac{\theta^k}{k!}. \]

103
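Point 2 above is a method-of-moments fit. Here is a minimal sketch of it (the helper name and the target values are my own, illustrative choices), valid when the target mean and variance satisfy $0<m<1$ and $v<m(1-m)$.

```python
def beta_params(mean, var):
    """Method-of-moments fit of Beta(a, b) to a given mean and variance."""
    s = mean * (1 - mean) / var - 1        # s = a + b
    return mean * s, (1 - mean) * s

a, b = beta_params(0.3, 0.01)
print(a, b)                                    # fitted parameters
print(a / (a + b))                             # recovers the mean, 0.3
print(a * b / ((a + b) ** 2 * (a + b + 1)))    # recovers the variance, 0.01
```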
E Kelly criterion

I have not found a place to include this during a lecture this year, but I leave it in the notes, as some people may enjoy reading it. Maybe I will use this another year.

Consider a bet in which you will double your money or lose it all, with probabilities $p=0.52$ and $q=0.48$ respectively. You start with £$X_0$ and wish to place a sequence of bets and maximize the expected growth rate. If at each bet you wager a fraction $f$ of your capital then
\[ EX_{n+1}=p(1+f)X_n+q(1-f)X_n=X_n+(p-q)fX_n. \]
This is maximized by $f=1$, but then there is a large probability that you will go bankrupt. By choosing $f<1$ you will never go bankrupt.

Suppose after $n$ bets you have £$X_n$. You wish to choose $f$ so as to maximize the compound growth rate,
\[ \text{CGR}=(X_n/X_0)^{1/n}-1=\exp\Big(\tfrac1n\textstyle\sum_{i=1}^n\log(X_i/X_{i-1})\Big)-1. \]
Now $\frac1n\sum_{i=1}^n\log(X_i/X_{i-1})\to E\log(X_1/X_0)$ as $n\to\infty$ and this should be maximized.
\[ E\log(X_1/X_0)=p\log(1+f)+q\log(1-f), \]
which is maximized by $f=p-q$. This is the Kelly criterion.

[Figure: for $p=0.52$, a plot of $\log X_n$, $n=0,\dots,5000$, for Kelly betting $f=0.04$ (red) and for $f=0.10$ (blue).]

104
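A simulation sketch (not from the notes) comparing Kelly betting $f=p-q$ with over-betting $f=0.10$, in the spirit of the plot described above; the path counts and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
p, n_bets, paths = 0.52, 5000, 2000
kelly = 2 * p - 1            # f = p - q = 0.04

def log_wealth(f):
    """log(X_n / X_0) for each simulated betting path."""
    wins = rng.random((paths, n_bets)) < p
    steps = np.where(wins, np.log(1 + f), np.log(1 - f))
    return steps.sum(axis=1)

for f in (kelly, 0.10):
    lw = log_wealth(f)
    print(f"f = {f:.2f}: mean log-growth per bet = {lw.mean()/n_bets:+.5f}, "
          f"P(end below start) = {(lw < 0).mean():.2f}")
```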
F Ballot theorem

I have not found a place to include this during a lecture this year, but I leave it in the notes, as some people may enjoy reading it. Maybe I will use this another year.

A famous problem of 19th century probability is Bertrand's ballot problem, posed as the question: “In an election where candidate A receives $a$ votes and candidate B receives $b$ votes with $a>b$, what is the probability that A will be strictly ahead of B throughout the count?”

Equivalently, we are asking about paths of length $a+b$ that start at the origin and end at $T=(a+b,\,a-b)$. Since every good path must start with an upstep, there are as many good paths as there are paths from $(1,1)$ to $T$ that never touch the $x$-axis. The set of paths from $(1,1)$ to $T$ that do touch the $x$-axis is in one-to-one correspondence with the set of paths from $(1,-1)$ to $T$; this is seen by reflecting across the $x$-axis the initial segment of the path that ends with the step that first touches the $x$-axis. Subtracting the number of these paths from the number of all paths from $(1,1)$ to $T$ produces the number of good paths:
\[ \binom{a+b-1}{a-1}-\binom{a+b-1}{a}=\frac{a-b}{a+b}\binom{a+b}{a}. \]
So the answer to Bertrand's question is $\frac{a-b}{a+b}$.

Alternatively, we can derive this answer by noticing that the probability that the first step is down, given that the path ends at $T$, is $b/(a+b)$. So the number of paths from $(1,-1)$ to $T$ is $\frac{b}{a+b}\binom{a+b}{a}$. The number that go from $(0,0)$ to $T$ without returning to the $x$-axis is therefore $\big(1-\frac{2b}{a+b}\big)\binom{a+b}{a}=\frac{a-b}{a+b}\binom{a+b}{a}$.

105
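For small $a$ and $b$ the ballot probability can be checked by brute-force enumeration. A minimal sketch (mine, not the notes'), suitable for small vote counts only:

```python
from itertools import permutations

def ballot_probability(a, b):
    """Exact P(A strictly ahead throughout) by enumerating all distinct counts."""
    votes = "A" * a + "B" * b
    good = total = 0
    for order in set(permutations(votes)):   # distinct orderings of the votes
        total += 1
        lead, ok = 0, True
        for v in order:
            lead += 1 if v == "A" else -1
            if lead <= 0:
                ok = False
                break
        good += ok
    return good / total

a, b = 5, 3
print(ballot_probability(a, b), (a - b) / (a + b))   # both 0.25
```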
G Allais paradox

I have not found a place to include this during a lecture this year, but I leave it in the notes, as some people may enjoy reading it.

This is an interesting paradox in the theory of gambling and utility. Which of the following would you prefer, Gamble 1A or 1B?

Experiment 1
  Gamble 1A: $1 million with chance 100%.
  Gamble 1B: $1 million with chance 89%; nothing with chance 1%; $5 million with chance 10%.

Now look at Gambles 2A and 2B. Which do you prefer?

Experiment 2
  Gamble 2A: nothing with chance 89%; $1 million with chance 11%.
  Gamble 2B: nothing with chance 90%; $5 million with chance 10%.

When presented with a choice between 1A and 1B, most people choose 1A. When presented with a choice between 2A and 2B, most people choose 2B. But this is inconsistent! Why? Experiment 1 is the same as Experiment 2, but with the added chance of winning $1 million with probability 0.89 (irrespective of which gamble is taken).

106
H IB courses in applicable mathematics

Here is some advice about courses in applicable mathematics that can be studied in the second year: the Statistics, Markov Chains and Optimization courses.

Statistics. Statistics addresses the question, “What is this data telling me?” How should we design experiments and interpret their results? In the Probability IA course we had the weak law of large numbers. This underlies the frequentist approach of estimating the probability p with which a drawing pin lands “point up” by tossing it many times and looking at the proportion of landings “point up”. Bayes' theorem also enters into statistics. To address questions about estimation and hypothesis testing we must model uncertainty and the way data arises. That gives Probability a central role in Statistics. In the Statistics IB course you will put to good use what you have learned about random variables and distributions this year.

Markov chains. A Markov chain is a generalization of the idea of a sequence of i.i.d. r.vs, X1, X2, .... There is a departure from independence because we now allow the distribution of Xn+1 to depend on the value of Xn. Many things in the world are like this: e.g. tomorrow's weather state follows in some random way from today's weather state. You have already met a Markov chain in the random walks that we have studied in Probability IA. In Markov Chains IB you will learn many more interesting things about random walks and other Markov chains. If I were lecturing the course I would tell you why Pólya's theorem about random walk implies that it is possible to play the clarinet in our 3-D world but that this would be impossible in 2-D Flatland.

Optimization. Probability is less important in this course, but it enters when we look at randomizing strategies in two-person games. You probably know the game scissors-stone-paper, for which the optimal strategy is to randomize with probabilities 1/3, 1/3, 1/3. In the Optimization course you will learn how to solve other games. Here is one you will be able to solve (from a recent Ph.D. thesis I read): I have lost k possessions around my room (keys, wallet, phone, etc). There are n locations, and searching location i costs ci. “Sod's Law” predicts that I will have lost my objects in whatever way makes the task of finding them most difficult. I will search until I find all my objects. What is Sod's Law? How do I minimize my expected total search cost? (I will have to randomize.)

Part II. The following Part II courses are ones in which Probability features. If you are interested in studying these then do Markov Chains in IB. For Probability and Measure it is also essential to study Analysis II in IB.

Probability and Measure; Stochastic Financial Models; Applied Probability; Coding and Cryptography; Principles of Statistics; Optimization and Control.

107
  • 185. Index absorption, 61 aggregate loss distribution, 52 Allais paradox, 106 arcsine law, 5 atom, 64 axioms of probability, 14 ballot theorem, 105 Baye’s formula, 23 Bell numbers, 8, 9 Benford’s law, 45 Bernoulli distribution, 20 Bernoulli trials, 20 Bertrand’s paradox, 72 beta distribution, 85, 103 binomial coefficient, 9 binomial distribution, 20, 31 bivariate normal random variable, 88 Bonferroni’s inequalities, 18 Boole’s inequality, 15 branching process, 54 generating function, 54 probability of extinction, 56 Buffon’s needle, 73, 93 Catalan number, 49 Cauchy distribution, 81, 91 Cauchy-Schwarz inequality, 39 Central limit theorem, 90 characteristic function, 52 Chebyshev inequality, 42 Chernoff bound, 94 classical probability, 1 concave function, 38 conditional distribution, 50 expectation, 50 probability, 22 conditional entropy, 53 continuity theorem, 90 continuous random variable, 62 continuum, 62 convex function, 38 convolution, 52, 80, 81 correlation coefficient, 41 covariance, 40 cumulative distribution function, 63 dependent events, 19 derangement, 17 discrete distribution, 21 discrete Fourier transform, 100 discrete random variable, 27 discrete uniform distribution, 27 disjoint events, 2 distribution, 27 distribution function, 63 Dyck word, 49, 96 Efron’s dice, 36 entropy, 41, 53 Erlang distribution, 85 event, 2 expectation, 27, 67 of a sum, 30, 51, 52 expectation ordering, 36, 68 experimental design, 36 exponential distribution, 64, 82, 84 memoryless property, 64 moment generating function of, 84 extinction probability, 56 fast Fourier transform, 53, 100 Fibonacci number, 45, 49 Fundamental rule of counting, 6 gambler’s ruin, 23, 58 duration of the game, 60 gamma distribution, 84, 85 generating functions, 48, 54, 61 geometric distribution, 21, 31 memoryless property, 64 geometric probability, 71, 73 108
  • 186. hazard rate, 65 hypergeometric distribution, 21 Inclusion-exclusion formula, 17, 33 independent events, 19 random variables, 34 independent random variables, 71 indicator function, 32 information entropy, 41, 53 inspection paradox, 69 insurance industry, 21, 103 Jacobian, 79, 101 Jensen’s inequality, 38 joint distribution, 50 joint distribution function, 70 joint moment generating function, 89 joint probability density function, 70 jointly distributed continuous random variables, 70 Kelly criterion, 104 Kolmogorov, 14 large deviations, 94, 95 law of total expectation, 51 law of total probability, 23 marginal distribution, 50, 70 Markov inequality, 42 mean, 27 median, 76 memoryless property, 21, 64 mode, 76 moment, 84 moment generating function, 83 multivariate, 89 multinomial coefficient, 10 distribution, 20 multivariate normal density, 88 mutually exclusive, 2 mutually independent, 20 non-transitive dice, 36 normal approximation the binomial, 91 normal distribution, 74 bounds on tail probabilities, 87 moment generating function of, 86 observation, 2 order statistics, 76, 82 partition of the sample space, 23, 51 Poisson distribution, 21, 31 approximation to the binomial, 21 probabilistic method, 16 probability axioms, 14 density function, 62 distribution, 14, 20 generating function, 46 mass function, 27 measure, 14 space, 14 Ramsey number, 16 random matrices, 96 random sample, 76 random variable, 27 continuous, 62 discrete, 27 random walk, 58 reflection principle, 5, 105 sample mean, 76 sample median, 76 sample space, 2 Simpson’s paradox, 24 standard deviation, 30 standard normal distribution, 74 Stirling’s formula, 10 stochastic ordering, 36, 68, 69 strictly convex, 38 strong law of large numbers, 44 subadditive set function, 15 submodular function, 15 sum of random variables, 48, 51, 80 symmetric random walk, 58 109
  • 187. tower property of conditional expectation, 51 transformation of random variables, 78, 82, 101 uniform distribution, 62, 64, 68 value at risk, 52 variance, 30, 68 variance of a sum, 35, 40 Weak law of large numbers, 43 Wigner’s semicircle law, 97 Zipf’s law, 33 110