CS3230 Tutorial 7
Midterm Review
(AY 25/26 Semester 2)
March 9, 2026
(Prepared by Benson, some questions stolen from Xing Chen)
Contents
Midterm Review
Asymptotic Analysis
Recurrences and Master Theorem
Proof of Correctness
Lower Bounds (Decision Trees)
Average-Case Analysis and Randomized Algorithms
Freivalds’ algorithm
Asymptotic Analysis
Big-O Notation (Upper Bound, ≤):
  f(n) = O(g(n)) iff ∃c > 0, ∃n₀ such that ∀n > n₀: f(n) ≤ c · g(n)
Big-Omega Notation (Lower Bound, ≥):
  f(n) = Ω(g(n)) iff ∃c > 0, ∃n₀ such that ∀n > n₀: f(n) ≥ c · g(n)
Big-Theta Notation (Exact Bound):
  f(n) = Θ(g(n)) iff ∃c₁, c₂ > 0, ∃n₀ such that ∀n > n₀: c₁ · g(n) ≤ f(n) ≤ c₂ · g(n)
Little-o Notation (Strict Upper Bound, <):
  f(n) = o(g(n)) iff ∀c > 0, ∃n₀ such that ∀n ≥ n₀: f(n) < c · g(n)
Little-omega Notation (Strict Lower Bound, >):
  f(n) = ω(g(n)) iff ∀c > 0, ∃n₀ such that ∀n ≥ n₀: f(n) > c · g(n)
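These definitions can be sanity-checked numerically. The sketch below tests a candidate witness pair (c, n₀) for a Big-O claim over a finite range; the function f(n) = 3n + 5 and the witness values are illustrative examples, not from the slides (and a finite check is evidence, not a proof):

```python
# Check the Big-O witness: f(n) <= c * g(n) for all n0 < n <= n_max.
def holds_big_o(f, g, c, n0, n_max=10_000):
    return all(f(n) <= c * g(n) for n in range(n0 + 1, n_max + 1))

f = lambda n: 3 * n + 5   # example function (chosen for illustration)
g = lambda n: n
print(holds_big_o(f, g, c=4, n0=5))   # 3n + 5 <= 4n whenever n > 5
```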
Asymptotic Analysis
Poll (Q1): Let’s consider our favourite function once again!
[Figure: plot of f(N) against N]
Asymptotic Analysis
Poll (Q1): Select ALL correct statements.
Note: All options use NOT IN (∉).
A. f(n) ∉ Θ(n)
   Correct: for any c₁, c₂, n₀, we can find a point n > n₀ with f(n) = 0, and clearly c₁ · n > f(n).
B. f(n) ∉ O(n)
   Incorrect: take c = 1 and n₀ = 10; then f(n) ≤ n.
C. f(n) ∉ Ω(1)
   Correct: for any c, n₀, we can find a point n > n₀ with f(n) = 0, and clearly f(n) < c · 1.
D. f(n) ∉ o(n + 1)
   Correct. Note that the +1 doesn’t change the asymptotic meaning.
   Take c = 0.1; there are infinitely many points n with f(n) > c · (n + 1).
Recap: Recurrences
Telescoping: Divide both sides by a suitable term to obtain
  T(n)/g(n) = T(n/b)/g(n/b) + h(n).
Master Theorem: Given T(n) = aT(n/b) + f(n), there are 3 cases:
  Case 1: f(n) = O(n^(log_b a − ε)) for some ε > 0  ⟹  T(n) = Θ(n^(log_b a))
  Case 2: f(n) = Θ(n^(log_b a) · log^k n) for some k ≥ 0  ⟹  T(n) = Θ(n^(log_b a) · log^(k+1) n)
  Case 3: f(n) = Ω(n^(log_b a + ε)) for some ε > 0, with the regularity condition a · f(n/b) ≤ k · f(n) for some k < 1  ⟹  T(n) = Θ(f(n))
Recursion Tree: Naively sum up all the work in the tree. Might not work!
Substitution Method: When all the above methods don’t work, “guess and check”.
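For recurrences whose driving term is a pure power f(n) = Θ(n^p), the three cases reduce to comparing a with b^p. A small classifier sketch (the function name and the restriction to integer a, b, p are assumptions for illustration):

```python
def master_theorem(a, b, p):
    """Classify T(n) = a*T(n/b) + Theta(n^p) by the Master Theorem.
    Assumes integer a, b, p and no log factors in f(n)."""
    if a > b ** p:       # p < log_b(a): f is polynomially smaller
        return f"Case 1: T(n) = Theta(n^log_{b}({a}))"
    if a == b ** p:      # p == log_b(a): balanced levels
        return f"Case 2: T(n) = Theta(n^{p} log n)"
    # p > log_b(a); regularity holds since a*f(n/b) = (a/b^p)*f(n), a/b^p < 1
    return f"Case 3: T(n) = Theta(n^{p})"

print(master_theorem(2, 2, 2))  # e.g. T(n) = 2T(n/2) + n^2 falls in Case 3
```

Comparing a with b^p avoids floating-point trouble with computing log_b(a) directly.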
Recap: Recurrences
Poll (Q2): Give a tight asymptotic bound for T(n) = 4T(n/2) + n log n.
A. T(n) = Θ(n log n)
B. T(n) = Θ(n log² n)
C. T(n) = Θ(n²)
D. T(n) = Θ(n² log n)
Recall the Master Theorem for T(n) = aT(n/b) + f(n):
1. f(n) = O(n^(log_b a − ε)), ε > 0: T(n) = Θ(n^(log_b a))
2. f(n) = Θ(n^(log_b a) · log^k n), k ≥ 0: T(n) = Θ(n^(log_b a) · log^(k+1) n)
3. f(n) = Ω(n^(log_b a + ε)), ε > 0, and a · f(n/b) ≤ k · f(n) for some k < 1: T(n) = Θ(f(n))
Solution.
Here log_b a = log₂ 4 = 2. Since n log n = O(n^(2−ε)) where ε = 0.1, Case 1 of the Master Theorem applies: T(n) = Θ(n²). Answer: C.
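A quick numeric check of this answer: evaluate the recurrence on powers of two and watch T(n)/n² level off (a finite-range sanity check under an assumed base case T(1) = 1, not a proof):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 4*T(n/2) + n*log2(n), evaluated on powers of two, base T(1) = 1
    if n <= 1:
        return 1
    return 4 * T(n // 2) + n * math.log2(n)

for k in (10, 15, 20):
    n = 2 ** k
    print(k, T(n) / n ** 2)  # the ratio levels off, consistent with Theta(n^2)
```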
Recap: Recurrences
Poll (Q3): Give a tight asymptotic bound for T(n) = 4T(n/2) + n²/log n.
A. T(n) = Θ(n²/log n)
B. T(n) = Θ(n²)
C. T(n) = Θ(n² log log n)
D. T(n) = Θ(n² log n)
Recall the Master Theorem for T(n) = aT(n/b) + f(n):
1. f(n) = O(n^(log_b a − ε)), ε > 0: T(n) = Θ(n^(log_b a))
2. f(n) = Θ(n^(log_b a) · log^k n), k ≥ 0: T(n) = Θ(n^(log_b a) · log^(k+1) n)
3. f(n) = Ω(n^(log_b a + ε)), ε > 0, and a · f(n/b) ≤ k · f(n) for some k < 1: T(n) = Θ(f(n))
Solution.
No case of the Master Theorem applies: n²/log n is not polynomially smaller or larger than n², and Case 2 would need k = −1 < 0. Instead, divide both sides by n² and apply telescoping:
  T(n)/n² = 1/log n + 1/(log n − 1) + ··· = Θ(log log n)
by the harmonic series. Hence T(n) = Θ(n² log log n). Answer: C.
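The telescoped harmonic sum can also be observed numerically: on powers of two, T(n)/n² should track the harmonic number H_k with k = log₂ n, i.e. Θ(log log n). A sketch under an assumed base case T(2) = 1:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 4*T(n/2) + n^2/log2(n) on powers of two, base T(2) = 1
    if n <= 2:
        return 1
    return 4 * T(n // 2) + n ** 2 / math.log2(n)

for k in (8, 16, 32, 64):
    n = 2 ** k
    # T(n)/n^2 tracks H_k = Theta(log k) = Theta(log log n)
    print(k, T(n) / n ** 2, math.log(k))
```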
Recap: Recurrences
Poll (Q4): Give a tight asymptotic bound for T(n) = T(n/2) + 2T(n/4) + 2n.
A. T(n) = Θ(n)
B. T(n) = Θ(n log n)
C. T(n) = Θ(n²)
D. T(n) = Θ(n² log n)
Recall the Master Theorem for T(n) = aT(n/b) + f(n):
1. f(n) = O(n^(log_b a − ε)), ε > 0: T(n) = Θ(n^(log_b a))
2. f(n) = Θ(n^(log_b a) · log^k n), k ≥ 0: T(n) = Θ(n^(log_b a) · log^(k+1) n)
3. f(n) = Ω(n^(log_b a + ε)), ε > 0, and a · f(n/b) ≤ k · f(n) for some k < 1: T(n) = Θ(f(n))
Solution.
The Master Theorem does not apply directly (the subproblems have two different sizes). Guess T(n) ≈ n: the recursive part sums to n/2 + 2 · (n/4) = n and the merging part is Θ(n). Each layer of the recursion tree therefore carries balanced work Θ(n), and there are Θ(log n) layers, so T(n) = Θ(n log n). Answer: B.
Proof of Correctness
Iterative Algorithms: Use a loop invariant.
Loop invariant = Expected progress of the algorithm at a certain snapshot.
Descriptive enough to ensure initialization, maintenance and termination.
Recursive Algorithms: Proof by induction on the problem size.
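As a small illustration of a loop invariant (an example of my own, not from the slides), here is a maximum-finding loop whose invariant is checked with assertions — initialization (best = a[0] = max of the first element), maintenance (the update preserves it), and termination (the invariant at exit gives the answer):

```python
def find_max(a):
    """Return the maximum of a non-empty list, with the loop invariant
    checked by assertions (for illustration only)."""
    best = a[0]
    for i in range(1, len(a)):
        # Invariant at the start of iteration i: best == max(a[:i])
        assert best == max(a[:i])
        if a[i] > best:
            best = a[i]
    # Termination: the loop covered all of a, so best == max(a)
    return best

print(find_max([3, 1, 4, 1, 5]))  # 5
```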
Lower Bounds (Decision Trees)
Problem: Sorting n = 3 distinct numbers by comparing two numbers each time.
Leaves: Output / decision for the input
What are the decisions?
How many different decisions?
(Internal) Nodes: A comparison
Branches: Outcome of comparison
What are the outcomes?
What will be the implication?
Worst Case (number of comparisons)
= Height of Tree
[Decision tree for sorting three elements a₁, a₂, a₃:
Root: a₁ <? a₂.
  Yes: compare a₂ <? a₃. Yes → output 1, 2, 3. No: compare a₁ <? a₃ (Yes → 1, 3, 2; No → 3, 1, 2).
  No: compare a₁ <? a₃. Yes → output 2, 1, 3. No: compare a₂ <? a₃ (Yes → 2, 3, 1; No → 3, 2, 1).]
Lower Bounds (Decision Trees)
Poll (Q5): There are N students {1, 2, . . . , N} interning at N companies {1, 2, . . . , N}.
Each student interns at one different company. At least how many Yes/No questions
would you have to ask to find out the mapping between the students and the
companies?
A. Ω(N)
B. Ω(N log N)
C. Ω(N^(log₂ 3))
D. Ω(N²)
Solution.
There are N! possible mappings. By the decision tree model (each Yes/No question has two outcomes), we need at least log₂(N!) = Ω(N log N) questions. Answer: B.
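The bound log₂(N!) = Ω(N log N) can be seen numerically with exact big-integer factorials; the ratio to N log₂ N creeps toward 1 (slowly, by Stirling's approximation). The helper name below is my own:

```python
import math

def question_lower_bound(n):
    # A Yes/No strategy is a binary decision tree whose leaves must cover
    # all n! possible student-company mappings, so depth >= log2(n!).
    return math.log2(math.factorial(n))

for n in (10, 100, 1000):
    print(n, question_lower_bound(n) / (n * math.log2(n)))  # ratio tends to 1
```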
Average-Case Analysis and Randomized Algorithms
Deterministic Algorithms:
  Sometimes, we assume the input is random (average-case analysis).
  Most of the time, we focus on the worst case.
Randomized Algorithms:
  Monte Carlo algorithm: what is the probability of success?
  Las Vegas algorithm: what is the expected runtime?
  We focus on the worst-case input: the randomness comes from the algorithm, not the input.
Randomized Algorithms
Our weapons in analyzing randomized algorithms:
Markov’s Inequality: P[X ≥ k] ≤ E[X]/k for any non-negative r.v. X and any k > 0.
Union Bound: P[∪ᵢ Xᵢ] ≤ Σᵢ P[Xᵢ].
Linearity of Expectation: E[X + Y] = E[X] + E[Y].
Probability Amplification: If each trial succeeds with independent probability p, repeat it k times and it will succeed at least once with probability 1 − (1 − p)^k.
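The amplification formula is easy to confirm by simulation (a sketch; the fixed seed and parameter values are arbitrary choices for reproducibility):

```python
import random

def amplified_success(p, k, trials=100_000, seed=0):
    """Estimate the probability that k independent trials, each succeeding
    with probability p, succeed at least once."""
    rng = random.Random(seed)
    hits = sum(any(rng.random() < p for _ in range(k)) for _ in range(trials))
    return hits / trials

p, k = 0.3, 5
print(amplified_success(p, k), 1 - (1 - p) ** k)  # empirical vs exact, ~0.832
```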
Randomized Algorithms
Poll (Q6): Consider a random permutation P of length n. What is the expected number
of indices i such that P[i] = i?
A. 1/n
B. 1
C. 2
D. None of the other options
Solution.
Let Xᵢ be the indicator random variable for the event P[i] = i.
Since P is a uniformly random permutation, P[Xᵢ = 1] = 1/n.
By linearity of expectation, E[Σᵢ Xᵢ] = Σᵢ E[Xᵢ] = n · (1/n) = 1. Answer: B.
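A Monte Carlo check of this expectation (simulation parameters are arbitrary; note the answer does not depend on n):

```python
import random

def avg_fixed_points(n, trials=20_000, seed=0):
    """Average number of indices with perm[i] == i over random permutations."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        perm = list(range(n))
        rng.shuffle(perm)
        total += sum(1 for i, v in enumerate(perm) if i == v)
    return total / trials

print(avg_fixed_points(10), avg_fixed_points(100))  # both close to 1
```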
Randomized Algorithms
Poll (Q7): Throw N balls into N bins. However, for the i-th ball (1 ≤ i ≤ N), you
randomly choose among bins 1 to i instead of 1 to N. What is the expected number
of balls in bin 1?
A. Θ(1)
B. Θ(log N)
C. Θ(√N)
D. None of the other options
Solution.
Let Xᵢ be the indicator random variable for the event that ball i ends up in bin 1.
Then P[Xᵢ = 1] = 1/i.
By linearity of expectation, E[Σᵢ Xᵢ] = Σᵢ E[Xᵢ] = 1/1 + 1/2 + ··· + 1/N = Θ(log N) by the harmonic series. Answer: B.
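A simulation confirms that the count in bin 1 hugs the harmonic number H_N (parameters are arbitrary illustration choices):

```python
import random

def balls_in_bin1(n, trials=5_000, seed=0):
    """Average number of balls landing in bin 1 when ball i picks a
    uniformly random bin among 1..i."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += sum(1 for i in range(1, n + 1) if rng.randint(1, i) == 1)
    return total / trials

n = 256
h_n = sum(1 / i for i in range(1, n + 1))  # harmonic number H_n
print(balls_in_bin1(n), h_n)  # both near H_n = Theta(log n)
```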
Randomized Algorithms
Poll (Q8): Let X be a (not necessarily positive) discrete random variable such that
E[X] = k. Select ALL correct statements.
A. P[X ≥ k] > 0
   Correct: some value in the support must be ≥ the expected value (proof by contradiction).
B. P[X ≤ k] > 0
   Correct: some value in the support must be ≤ the expected value (proof by contradiction).
C. P[X = k] > 0
   Not always true. Imagine X = k − 1 or X = k + 1, each with probability 0.5.
D. P[X ≥ 2k] ≤ 0.5
   Not always true. We need X ≥ 0 to apply Markov’s inequality.
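A concrete counterexample for option D (the two-point distribution is my own choice for illustration): with mean k = 1, the variable below still lands at or above 2k with probability 0.9.

```python
# Counterexample for D: mean k = 1, yet P[X >= 2k] = 0.9.
dist = {2: 0.9, -8: 0.1}                   # value -> probability
k = sum(x * p for x, p in dist.items())    # E[X] = 1.8 - 0.8 = 1.0
p_at_least_2k = sum(p for x, p in dist.items() if x >= 2 * k)
print(k, p_at_least_2k)                    # 1.0 0.9 -- Markov needs X >= 0
```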
Randomized Algorithms: Freivalds’ algorithm
Poll (Q9): You have repeated Freivalds’ algorithm 1000 times, and it outputs YES 998
times and NO 2 times. Select the correct statement.
A. It must be the case that AB = C.
B. It must be the case that AB ≠ C.
C. It’s possible that AB = C or AB ≠ C.
D. Contradiction: This outcome is impossible to attain.
Solution.
When AB = C, the algorithm always outputs YES, so we should never see any NO.
When AB ≠ C, the algorithm can output anything. Since we saw NO twice, it must be that AB ≠ C. Answer: B.
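A minimal sketch of one round of Freivalds' check illustrates this one-sided error: with a random 0/1 vector r, comparing A(Br) against Cr costs O(n²), always says YES when AB = C, and says NO with probability ≥ 1/2 per round when AB ≠ C (function name and example matrices are my own):

```python
import random

def freivalds_round(A, B, C, rng=random):
    """One round of Freivalds' check: draw a random 0/1 vector r and test
    whether A(Br) == Cr, using only matrix-vector products."""
    n = len(A)
    r = [rng.randrange(2) for _ in range(n)]
    Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
    ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
    Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
    return ABr == Cr  # True means "YES"

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C_good = [[19, 22], [43, 50]]   # equals AB: YES on every round
C_bad = [[19, 22], [43, 51]]    # one wrong entry: NO with prob >= 1/2 per round
print(freivalds_round(A, B, C_good))                        # True
print(any(not freivalds_round(A, B, C_bad) for _ in range(50)))
```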
That’s it!
Good luck for your midterm!