Notes quoted from Section 15.5 of CLRS, Introduction to Algorithms, 3rd edition.
Definitions:
- \sum p_i + \sum q_j = 1.
- E[search cost in T] = 1 + \sum depth(k_i) p_i + \sum depth(d_j) q_j.
- e[i, j] = the expected cost of searching an optimal BST containing the keys k_i, ..., k_j.
- When j = i - 1, e[i, j] = q_{i - 1} by definition. (When j = i - 1, there are no actual keys; we have just the dummy key d_{i - 1}.)
- i from 1 to n + 1.
- j from 0 to n.
- w(i, j) = \sum_{l from i to j} p_l + \sum_{l from i - 1 to j} q_l.
- i from 1 to n + 1.
- j from 0 to n.
- w[i, j] = w[i, j - 1] + p_j + q_j.
- If k_r is the root of an optimal BST, we have
- e[i, j] = p_r + (e[i, r - 1] + w(i, r - 1)) + (e[r + 1, j] + w(r + 1, j)).
- e[i, j] = e[i, r - 1] + e[r + 1, j] + w(i, j)
- since w(i, j) = w(i, r - 1) + p_r + w(r + 1, j).
- e[i, j] =
- q_{i - 1} if j = i - 1.
- min_{r from i to j}{ e[i, r - 1] + e[r + 1, j] + w(i, j) } if i ≤ j.
- Define root[i, j] to be the index r for which k_r is the root of an optimal BST containing the keys k_i, ..., k_j.
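As a sanity check, the recurrence can be evaluated directly by recursion (exponential time, but fine for tiny n). The probabilities below are the ones used in the worked example of CLRS Section 15.5; treat the concrete numbers as an illustrative assumption:

```python
# Probabilities from the CLRS 15.5 worked example (used here only for
# illustration). p is 1-indexed via a leading placeholder; q is 0-indexed.
p = [0.0, 0.15, 0.10, 0.05, 0.10, 0.20]
q = [0.05, 0.10, 0.05, 0.05, 0.05, 0.10]

def w(i, j):
    # w(i, j) = p_i + ... + p_j + q_{i-1} + ... + q_j
    return sum(p[i:j + 1]) + sum(q[i - 1:j + 1])

def e(i, j):
    # Base case: an empty key range holds only the dummy key d_{i-1}.
    if j == i - 1:
        return q[i - 1]
    # Try every candidate root k_r and take the cheapest split.
    return min(e(i, r - 1) + e(r + 1, j) + w(i, j) for r in range(i, j + 1))

print(e(1, 5))  # expected search cost of an optimal BST over k_1..k_5
```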
Naive algorithm, Time = O(n^3)
- let e[1 .. n + 1, 0 .. n], w[1 .. n + 1, 0 .. n], and root[1 .. n, 1 .. n] be new tables
- for i = 1 to n + 1
- e[i, i - 1] = q_{i - 1}
- w[i, i - 1] = q_{i - 1}
- for l = 1 to n
- for i = 1 to n - l + 1
- j = i + l - 1
- e[i, j] = infinity
- w[i, j] = w[i, j - 1] + p_j + q_j
- for r = i to j
- t = e[i, r - 1] + e[r + 1, j] + w[i, j]
- if t < e[i, j]
- e[i, j] = t
- root[i, j] = r
- return e and root
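The pseudocode above translates almost line for line into Python. The example probabilities are again those of the CLRS worked example, included here only so the sketch is runnable:

```python
def optimal_bst(p, q, n):
    """Naive O(n^3) DP. p[1..n]: key probabilities (p[0] unused);
    q[0..n]: dummy-key probabilities. Returns the e and root tables."""
    INF = float("inf")
    # e and w use rows 1..n+1 and columns 0..n, as in the pseudocode.
    e = [[0.0] * (n + 1) for _ in range(n + 2)]
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 2):
        e[i][i - 1] = q[i - 1]
        w[i][i - 1] = q[i - 1]
    for l in range(1, n + 1):          # l = number of keys in the subproblem
        for i in range(1, n - l + 2):
            j = i + l - 1
            e[i][j] = INF
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            for r in range(i, j + 1):  # try every key as root
                t = e[i][r - 1] + e[r + 1][j] + w[i][j]
                if t < e[i][j]:
                    e[i][j] = t
                    root[i][j] = r
    return e, root

# Probabilities from the CLRS 15.5 worked example (illustrative assumption).
p = [0.0, 0.15, 0.10, 0.05, 0.10, 0.20]
q = [0.05, 0.10, 0.05, 0.05, 0.05, 0.10]
e, root = optimal_bst(p, q, 5)
print(e[1][5])  # minimal expected search cost
```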
Knuth Optimization, Time = O(n^2)
- Fact: root[i, j - 1] ≤ root[i, j] ≤ root[i + 1, j].
- Change the innermost for loop to for r = root[i, j - 1] to root[i + 1, j]. For a fixed l, these bounds telescope as i increases, so each diagonal of the table costs O(n) in total, giving O(n^2) overall.
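A sketch of the optimized algorithm, again using the CLRS example probabilities as an illustrative assumption. The only change from the naive version is the bounds of the innermost loop; the length-1 case falls back to r = i, since root[i, j - 1] and root[i + 1, j] are not yet defined there:

```python
def optimal_bst_knuth(p, q, n):
    """O(n^2) DP using Knuth's bound root[i,j-1] <= root[i,j] <= root[i+1,j].
    p[1..n]: key probabilities (p[0] unused); q[0..n]: dummy keys."""
    INF = float("inf")
    e = [[0.0] * (n + 1) for _ in range(n + 2)]
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 2):
        e[i][i - 1] = q[i - 1]
        w[i][i - 1] = q[i - 1]
    for l in range(1, n + 1):
        for i in range(1, n - l + 2):
            j = i + l - 1
            e[i][j] = INF
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            # Knuth's bounds; for a single key (j == i) the root must be i.
            lo = root[i][j - 1] if j > i else i
            hi = root[i + 1][j] if j > i else i
            for r in range(lo, hi + 1):
                t = e[i][r - 1] + e[r + 1][j] + w[i][j]
                if t < e[i][j]:
                    e[i][j] = t
                    root[i][j] = r
    return e, root

# Probabilities from the CLRS 15.5 worked example (illustrative assumption).
p = [0.0, 0.15, 0.10, 0.05, 0.10, 0.20]
q = [0.05, 0.10, 0.05, 0.05, 0.05, 0.10]
e, root = optimal_bst_knuth(p, q, 5)
print(e[1][5])  # same optimal cost as the naive O(n^3) algorithm
```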