
THE UNIVERSITY OF NEW SOUTH WALES
7. DYNAMIC PROGRAMMING
Raveen de Silva,
office: K17 202
Course Admin: ,
School of Computer Science and Engineering UNSW Sydney
Term 3, 2021

Table of Contents
1. Introduction
2. Example Problems
3. Puzzle

What is dynamic programming?
The main idea is to solve a large problem recursively by building from (carefully chosen) subproblems of smaller size.
Optimal substructure property
We must choose subproblems so that the optimal solutions to subproblems can be combined into an optimal solution for the full problem.

Why is dynamic programming useful?
Recently we discussed greedy algorithms, where the problem is viewed as a sequence of stages and we consider only the locally optimal choice at each stage.
We saw that some greedy algorithms are incorrect, i.e. they fail to construct a globally optimal solution.
Also, greedy algorithms are often unhelpful for certain types of problems, such as “count the number of ways to . . . ”.
Dynamic programming can be used to efficiently consider all the options at each stage.

Why is dynamic programming efficient?
We have already seen one problem-solving paradigm that used recursion: divide-and-conquer.
D&C aims to break a large problem into disjoint subproblems, solve those subproblems recursively and recombine.
However, DP is characterised by overlapping subproblems.
Overlapping subproblems property
We must choose subproblems so that the same subproblem occurs several times in the recursion tree. When we solve a subproblem, we store the result so that subsequent instances of the same subproblem can be answered by just looking up a value in a table.

The parts of a dynamic programming algorithm
A dynamic programming algorithm consists of three parts:
a definition of the subproblems;
a recurrence relation, which determines how the solutions to smaller subproblems are combined to solve a larger subproblem, and
any base cases, which are the trivial subproblems – those for which the recurrence is not required.

Putting it all together
The original problem may be one of our subproblems, or it may be solved by combining results from several subproblems, in which case we must also describe this process.
Finally, we should be aware of the time complexity of our algorithm, which is usually given by multiplying the number of subproblems by the ‘average’ time taken to solve a subproblem using the recurrence.

Table of Contents
1. Introduction
2. Example Problems
3. Puzzle

Longest Increasing Subsequence
Problem
Instance: a sequence of n real numbers A[1..n].
Task: determine a subsequence (not necessarily contiguous) of maximum length, in which the values in the subsequence are strictly increasing.

Longest Increasing Subsequence
A natural choice for the subproblems is as follows: for each 1 ≤ i ≤ n, let P(i) be the problem of determining the length of the longest increasing subsequence of A[1..i].
However, it is not clear how to relate these subproblems to each other.
A more convenient specification involves Q(i), the problem of determining opt(i), the length of the longest increasing subsequence of A[1..i] including the last element A[i].
Note that the overall solution is recovered by max{opt(i) | 1 ≤ i ≤ n}.

Longest Increasing Subsequence
We will try to solve Q(i) by extending the sequence which solves Q(j) for some j < i. One example of a greedy algorithm using this framework is as follows.
Attempt 1
Solve Q(i) by extending the solution of Q(j) for j as close to i as possible, i.e. the largest j such that A[j] < A[i].
Exercise
Design an instance of the problem for which this algorithm is not correct.

Longest Increasing Subsequence
Instead, assume we have solved all the subproblems for j < i. We now look for all indices j < i such that A[j] < A[i]. Among those we pick m so that opt(m) is maximal, and extend that sequence with A[i].
This forms the basis of our recurrence!
The recurrence is not necessary if i = 1, as there are no previous indices to consider, so this is our base case.

Longest Increasing Subsequence
Solution
Subproblems: for each 1 ≤ i ≤ n, let Q(i) be the problem of determining opt(i), the maximum length of an increasing subsequence of A[1..i] which ends with A[i].
Recurrence: for i > 1,
opt(i) = max{opt(j) | j < i, A[j] < A[i]} + 1,
where the maximum of the empty set is taken to be 0.
Base case: opt(1) = 1.

Longest Increasing Subsequence
The overall solution is max{opt(i) | 1 ≤ i ≤ n}.
Each of n subproblems is solved in O(n) time, for a time complexity of O(n²).
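To make the table-filling concrete, here is a minimal Python sketch of this algorithm; the function and variable names are ours, not from the slides.

```python
def longest_increasing_subsequence_length(A):
    """Length of the longest strictly increasing subsequence of A.

    opt[i] is the length of the longest increasing subsequence of
    A[0..i] which ends with A[i] (0-indexed here).
    """
    n = len(A)
    if n == 0:
        return 0
    opt = [1] * n  # base case: each element alone is a subsequence of length 1
    for i in range(1, n):
        for j in range(i):
            if A[j] < A[i]:
                opt[i] = max(opt[i], opt[j] + 1)
    return max(opt)  # overall solution: best over all possible last elements

print(longest_increasing_subsequence_length([3, 1, 4, 1, 5, 9, 2, 6]))  # 4
```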
Making Change
Problem
Instance: coins of n distinct integer denominations v1 < v2 < … < vn, of which we have an unlimited supply, and a target amount C. Assume v1 = 1, so that it is always possible to make change for any integer amount.
Task: make change for the amount C using the fewest coins possible.

Making Change
Solution
Subproblems: for each 0 ≤ i ≤ C, let P(i) be the problem of determining opt(i), the fewest coins needed to make change for amount i.
Recurrence: for i > 0,
opt(i) = min{opt(i − vk) | 1 ≤ k ≤ n, vk ≤ i} + 1.
Base case: opt(0) = 0.

Making Change
There is no extra work required to recover the overall solution; it is just opt(C).
Each of C subproblems is solved in O(n) time, so the time complexity is O(nC).
Note
Our algorithm is NOT a polynomial time algorithm in the length of the input, because C is represented by log C bits, while the running time is O(nC). There is no known polynomial time algorithm for this problem!

Making Change
Why does this produce an optimal solution for each amount i ≤ C?
Consider an optimal solution for some amount i, and say this solution includes at least one coin of denomination vm for some 1 ≤ m ≤ n.
Removing this coin must leave an optimal solution for the amount i − vm, again by our “cut-and-paste” argument.
By considering all coins of value at most i, we can pick m for which the optimal solution for amount i − vm uses the fewest coins.

Making Change
Suppose we were required to also determine the exact number of each coin required to make change for amount C.
In the ith slot of the table, we would store both opt(i) and the index m which minimises opt(i − vm).
We could then reconstruct the optimal solution by finding m1 stored in slot i, then finding m2 stored in slot i − vm1, then finding m3 stored in slot i − vm1 − vm2, and so on.
Notation
We denote the m that minimises opt(i − vm) by
argmin_{1 ≤ m ≤ n} opt(i − vm).
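The following Python sketch (ours, not from the slides) fills the table and performs exactly this backtracking, storing the argmin alongside opt(i). INF marks amounts not yet reachable, which cannot remain at the end since v1 = 1.

```python
def make_change(v, C):
    """Fewest coins to make change for C from denominations v (assumes 1 in v).

    Returns (opt[C], coins), where coins is one optimal multiset of coins.
    """
    INF = float("inf")
    opt = [0] + [INF] * C          # base case: opt(0) = 0
    best = [None] * (C + 1)        # best[i]: the argmin denomination for amount i
    for i in range(1, C + 1):
        for coin in v:
            if coin <= i and opt[i - coin] + 1 < opt[i]:
                opt[i] = opt[i - coin] + 1
                best[i] = coin
    # backtrack to recover the actual coins used
    coins, i = [], C
    while i > 0:
        coins.append(best[i])
        i -= best[i]
    return opt[C], coins

print(make_change([1, 5, 6, 9], 11))  # (2, [5, 6]) — greedy (9+1+1) would use 3 coins
```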

Integer Knapsack Problem (Duplicate Items Allowed)
Problem
Instance: You have n types of items; all items of kind i are identical and of weight wi and value vi . You can take any number of items of each kind. You also have a knapsack of capacity C.
Task: Choose a combination of available items which all fit in the knapsack and whose value is as large as possible.

Integer Knapsack Problem (Duplicate Items Allowed)
Similarly to the previous problem, we solve for each total weight up to C.
Assume we have solved the problem for all total weights j < i. We now consider each type of item, the kth of which has weight wk. If this item is included, we would fill the remaining weight with the already computed optimal solution for i − wk.
We choose the m which maximises the total value of the optimal solution for i − wm plus an item of type m, to obtain a packing of total weight i of the highest possible value.

Integer Knapsack Problem (Duplicate Items Allowed)
Solution
Subproblems: for each 0 ≤ i ≤ C, let P(i) be the problem of determining opt(i), the maximum value that can be achieved using exactly i units of weight, and m(i), the type of an item in such a collection.
Recurrence: for i > 0,
m(i) = argmax{opt(i − wk) + vk | 1 ≤ k ≤ n, wk ≤ i}
opt(i) = opt(i − wm(i)) + vm(i).
Base case: opt(0) = 0, m(0) undefined.

Integer Knapsack Problem (Duplicate Items Allowed)
The overall solution is max{opt(i) | 0 ≤ i ≤ C}, as the optimal knapsack can hold up to C units of weight.
Each of C subproblems is solved in O(n) time, for a time complexity of O(nC).
Again, our algorithm is NOT polynomial in the length of the input.
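A minimal Python sketch of this recurrence (names ours); weights that cannot be filled exactly are marked −∞ so that "exactly i units of weight" is respected.

```python
def unbounded_knapsack(weights, values, C):
    """Maximum value achievable with capacity C, unlimited copies of each item.

    opt[i] is the best value using exactly i units of weight
    (or -inf if weight i cannot be filled exactly).
    """
    NEG_INF = float("-inf")
    opt = [0] + [NEG_INF] * C  # base case: opt(0) = 0
    for i in range(1, C + 1):
        for w, v in zip(weights, values):
            if w <= i and opt[i - w] != NEG_INF:
                opt[i] = max(opt[i], opt[i - w] + v)
    return max(opt)  # the knapsack may hold up to C units of weight

print(unbounded_knapsack([3, 4], [5, 6], 10))  # 16 (weights 3 + 3 + 4, values 5 + 5 + 6)
```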

Integer Knapsack Problem (Duplicate Items NOT Allowed)
Problem
Instance: You have n items (some of which can be identical), the ith of which has weight wi and value vi. You also have a knapsack of capacity C.
Task: Choose a combination of available items which all fit in the knapsack and whose value is as large as possible.

Integer Knapsack Problem (Duplicate Items NOT Allowed)
Let’s use the same subproblems as before, and try to develop a recurrence.
Question
If we know the optimal solution for each total weight j < i, can we deduce the optimal solution for weight i?
Answer
No! If we begin our solution for weight i with item k, we have i − wk remaining weight to fill. However, we did not record whether item k was itself used in the optimal solution for that weight.

Integer Knapsack Problem (Duplicate Items NOT Allowed)
At this point you may object that we can in fact reconstruct the optimal solution for total weight i − wk by storing the values m(i) as we did earlier, and then backtracking. This unfortunately has two flaws:
1. the optimal solution for i − wk is not necessarily unique, so we may have recorded a selection including item k when a selection of equal value without item k also exists, and
2. if all optimal solutions for i − wk use item k, it is still possible that the best solution for i combines item k with some suboptimal solution for i − wk.
The underlying issue is that with this choice of subproblems, this problem does not have the optimal substructure property.

Integer Knapsack Problem (Duplicate Items NOT Allowed)
When we are unable to form a correct recurrence between our chosen subproblems, this usually indicates that the subproblem specification is inadequate.
What extra parameters are required in our subproblem specification?
We would like to know the optimal solution for each weight without using item k. Directly adding this information to the subproblem specification still doesn't lead to a useful recurrence. How could we capture it less directly?

Integer Knapsack Problem (Duplicate Items NOT Allowed)
For each total weight i, we will find the optimal solution using only the first k items.
We can take cases on whether item k is used in the solution: if so, we have i − wk remaining weight to fill using the first k − 1 items, and otherwise, we must fill all i units of weight with the first k − 1 items.

Integer Knapsack Problem (Duplicate Items NOT Allowed)
Solution
Subproblems: for 0 ≤ i ≤ C and 0 ≤ k ≤ n, let P(i,k) be the problem of determining opt(i,k), the maximum value that can be achieved using up to i units of weight and using only the first k items, and m(i,k), the (largest) index of an item in such a collection.
Recurrence: for i > 0 and 1 ≤ k ≤ n,
opt(i,k) = max(opt(i, k − 1), opt(i − wk, k − 1) + vk),
where the second option is only available if wk ≤ i, and with m(i,k) = m(i, k − 1) in the first case and k in the second.
Base cases: if i = 0 or k = 0, then opt(i,k) = 0 and m(i,k) is undefined.

Integer Knapsack Problem (Duplicate Items NOT Allowed)
We need to be careful about the order in which we solve the subproblems.
When we get to P(i, k), the recurrence requires us to have already solved P(i, k − 1) and P(i − wk, k − 1).
This is guaranteed if we solve the subproblems P(i, k) in increasing order of k, then in increasing order of capacity i.

Integer Knapsack Problem (Duplicate Items NOT Allowed)
[Figure: the table of subproblems opt(i, k), with capacities 0, …, i − wk, …, i, …, C across the top and items 0, …, k − 1, k, …, n down the side; entry (i, k) is computed from entries (i, k − 1) and (i − wk, k − 1) in the previous row.]
The overall solution is opt(C,n).
Each of O(nC) subproblems is solved in constant time, for a time complexity of O(nC).
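A Python sketch of the two-dimensional table (ours, not from the slides), solving subproblems in increasing order of k and then of capacity i, as required.

```python
def knapsack_01(weights, values, C):
    """Maximum value with capacity C, each item usable at most once.

    opt[i][k]: best value using up to i units of weight and the first k items.
    """
    n = len(weights)
    opt = [[0] * (n + 1) for _ in range(C + 1)]
    for k in range(1, n + 1):          # increasing k, then increasing capacity i
        for i in range(C + 1):
            opt[i][k] = opt[i][k - 1]  # case 1: item k not used
            if weights[k - 1] <= i:    # case 2: use item k, fill i - wk with first k-1
                opt[i][k] = max(opt[i][k],
                                opt[i - weights[k - 1]][k - 1] + values[k - 1])
    return opt[C][n]

print(knapsack_01([3, 4, 5], [5, 6, 7], 8))  # 12 (items of weight 3 and 5)
```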

Balanced Partition
Problem
Instance: a set of n positive integers xi .
Task: partition these integers into two subsets S1 and S2 with sums Σ1 and Σ2 respectively, so as to minimise |Σ1 − Σ2|.

Balanced Partition
Suppose without loss of generality that Σ1 ≥ Σ2.
Let Σ = x1 + . . . + xn, the sum of all integers in the set.
Observe that Σ1 + Σ2 = Σ, which is a constant, and upon rearranging it follows that
Σ1 − Σ2 = 2(Σ/2 − Σ2).
So, all we have to do is find a subset S2 of these numbers with total sum as close to Σ/2 as possible, but not exceeding it.

Balanced Partition
For each integer xi in the set, construct an item with both weight and value equal to xi .
Consider the knapsack problem (with duplicate items not allowed), with items as specified above and knapsack capacity Σ/2.
Solution
The best packing of this knapsack produces an optimally balanced partition, with set S1 given by the items outside the knapsack and set S2 given by the items in the knapsack.
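Reusing the hypothetical knapsack_01 sketch from earlier, the whole reduction is a few lines of Python (ours, not from the slides):

```python
def balanced_partition(xs):
    """Minimum possible |Σ1 − Σ2| over all partitions of xs into two sets."""
    total = sum(xs)
    # knapsack with capacity ⌊Σ/2⌋; each xi is an item of weight and value xi
    sigma2 = knapsack_01(xs, xs, total // 2)  # best achievable sum not exceeding Σ/2
    return total - 2 * sigma2                 # Σ1 − Σ2 = 2(Σ/2 − Σ2) = Σ − 2Σ2

print(balanced_partition([1, 5, 9, 3]))  # 0: {9} and {1, 5, 3} both sum to 9
```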

Assembly line scheduling
Problem
Instance: You are given two assembly lines, each consisting of n workstations. The kth workstation on each assembly line performs the kth of n jobs.
To bring a new product to the start of assembly line i takes si units of time.
To retrieve a finished product from the end of assembly line i takes fi units of time.

Assembly line scheduling
Problem (continued)
On assembly line i, the kth job takes ai,k units of time to complete.
To move the product from station k on assembly line i to station k + 1 on the other line takes ti,k units of time.
There is no time required to continue from station k to station k + 1 on the same line.
Task: Find a fastest way to assemble a product using both lines as necessary.

Assembly line scheduling
[Figure: an instance, drawn as two parallel lines of n stations each, with entry times, job times ai,k, transfer times ti,k between the lines, and exit times.]

Assembly line scheduling
Solution 1: brute force search calculates times for 2^n many possible paths, and compares them.
Solution 2: recursion.
[Figure: the last step of a path to station k on either line comes from station k − 1 on the same line, or from station k − 1 on the other line followed by a transfer.]

Assembly line scheduling
The natural choice of subproblems is to have one corresponding to each workstation.
To form a recurrence, we should consider the ways of getting to workstation k on assembly line i.
We could have come from workstation k − 1 on either line, after completing the previous job.
The exception is the first workstation, which leads to the base case.

Assembly line scheduling
Solution
Subproblems: for i ∈ {1,2} and 1 ≤ k ≤ n, let P(i,k) be the problem of determining opt(i,k), the minimal time taken to complete the first k jobs, with the kth job performed on assembly line i.
Recurrence: for k > 1,
opt(1, k) = min(opt(1, k − 1), opt(2, k − 1) + t2,k−1) + a1,k
opt(2, k) = min(opt(2, k − 1), opt(1, k − 1) + t1,k−1) + a2,k.
Base cases: opt(1, 1) = s1 + a1,1 and opt(2, 1) = s2 + a2,1.

Assembly line scheduling
As the recurrence uses values from both assembly lines, we have to solve the subproblems in order of increasing k, solving both P(1, k) and P(2, k) at each stage.
Finally, after obtaining opt(1, n) and opt(2, n), the overall solution is given by
min (opt(1, n) + f1, opt(2, n) + f2) .
Each of 2n subproblems is solved in constant time, and the final two subproblems are combined as above in constant time also. Therefore the overall time complexity is O(n).
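A Python sketch of the recurrence (ours, not from the slides); it keeps only the current pair opt(1, k), opt(2, k), since earlier columns are never needed again.

```python
def assembly_line(s, f, a, t):
    """Fastest total assembly time over two lines.

    s[i], f[i]: start and finish times for line i (i = 0, 1).
    a[i][k]: time of job k on line i.
    t[i][k]: time to move from station k on line i to station k+1 on the other line.
    """
    n = len(a[0])
    opt = [s[0] + a[0][0], s[1] + a[1][0]]  # base cases: first station on each line
    for k in range(1, n):
        new0 = min(opt[0], opt[1] + t[1][k - 1]) + a[0][k]
        new1 = min(opt[1], opt[0] + t[0][k - 1]) + a[1][k]
        opt = [new0, new1]
    return min(opt[0] + f[0], opt[1] + f[1])

print(assembly_line([2, 4], [3, 2], [[7, 9, 3], [8, 5, 6]], [[2, 3], [2, 1]]))  # 23
```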

Assembly line scheduling
Remark
This problem is important because it has the same design logic as the Viterbi algorithm, an extremely important algorithm for many fields such as speech recognition, decoding convolutional codes in telecommunications etc. This will be covered in COMP4121 Advanced and Parallel Algorithms.

Matrix chain multiplication
Let A and B be matrices. The matrix product AB exists if A has as many columns as B has rows.
If A is m × n and B is n × p, then AB is an m × p matrix, each element of which is a dot product of two vectors of length n, so m × n × p multiplications are required.
Matrix multiplication is associative, that is, for any three matrices of compatible sizes we have A(BC) = (AB)C.
However, the number of real number multiplications needed to obtain the product can be very different.

Matrix chain multiplication
Suppose A is 10×100, B is 100×5 and C is 5×50.
To evaluate (AB)C, we first compute AB, a 10×5 matrix, using 10×100×5 = 5,000 multiplications, and then multiply by C using 10×5×50 = 2,500 multiplications.
To evaluate A(BC), we first compute BC, a 100×50 matrix, using 100×5×50 = 25,000 multiplications, and then multiply by A using 10×100×50 = 50,000 multiplications.
So to evaluate (AB)C we need 7,500 multiplications, whereas to evaluate A(BC) we need 75,000 multiplications!

Matrix chain multiplication
Problem
Instance: a compatible sequence of matrices A1A2…An, where Ai is of dimension si−1 ×si.
Task: group them in such a way as to minimise the total number of multiplications needed to find the product matrix.

Matrix chain multiplication
How many groupings are there?
The total number of different groupings satisfies the following recurrence (why?):
T(n) = ∑_{i=1}^{n−1} T(i) T(n − i),
with base case T(1) = 1.
One can show that the solution satisfies T(n) = Ω(2^n).
Thus, we cannot efficiently do an exhaustive search for the optimal grouping.

Matrix chain multiplication
Note
The number of groupings T(n) is a very famous sequence: the Catalan numbers. This sequence answers many seemingly unrelated combinatorial problems, including:
the number of balanced bracket sequences of length 2n;
the number of full binary trees with n + 1 leaves;
the number of lattice paths from (0,0) to (n,n) which never go above the diagonal;
the number of triangulations of a convex polygon with n + 2 sides;
the number of permutations of {1,…,n} with no three-term increasing subsequence.

Matrix chain multiplication
Instead, we try dynamic programming. A first attempt might be to specify subproblems corresponding to prefixes of the matrix chain, that is, find the optimal grouping for
A1A2 …Ai.
This is not enough to construct a recurrence; consider for example splitting the chain as
(A1A2 … Aj)(Aj+1Aj+2 … Ai).
The second factor is not a prefix of the chain, so its optimal grouping is not one of our subproblems.

Matrix chain multiplication
Instead we should specify a subproblem corresponding to each contiguous subsequence Ai+1Ai+2 …Aj of the chain.
The recurrence will consider all possible ways to place the outermost multiplication, splitting the chain into the product
(Ai+1 … Ak)(Ak+1 … Aj),
where the first factor is an si × sk matrix and the second an sk × sj matrix.
No recursion is necessary for subsequences of length one.

Matrix chain multiplication
Solution
Subproblems: for all 0 ≤ i < j ≤ n, let P(i,j) be the problem of determining opt(i,j), the fewest multiplications needed to compute the product Ai+1Ai+2 … Aj.
Recurrence: for all j − i > 1,
opt(i,j) = min{opt(i,k) + si sk sj + opt(k,j) | i < k < j}.
Base cases: for all 0 ≤ i < n, opt(i, i+1) = 0.

Matrix chain multiplication
The overall solution is opt(0, n).
Each of O(n²) subproblems is solved in O(n) time, for a time complexity of O(n³).
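A Python sketch of this interval DP (ours, not from the slides); s is the list of dimensions s0, s1, …, sn, so that Ai is s[i−1] × s[i].

```python
def matrix_chain(s):
    """Fewest scalar multiplications to compute A1 A2 ... An,
    where Ai has dimensions s[i-1] x s[i]."""
    n = len(s) - 1
    opt = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # length j - i of the subchain
        for i in range(n - length + 1):
            j = i + length
            opt[i][j] = min(opt[i][k] + s[i] * s[k] * s[j] + opt[k][j]
                            for k in range(i + 1, j))
    return opt[0][n]

print(matrix_chain([10, 100, 5, 50]))  # 7500: the (AB)C grouping from earlier
```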
Longest Common Subsequence
Problem
Instance: two sequences S = ⟨a1, a2, …, an⟩ and S∗ = ⟨b1, b2, …, bm⟩.
Task: find the length of a longest common subsequence of S and S∗.

Longest Common Subsequence
Solution
Subproblems: for all 0 ≤ i ≤ n and 0 ≤ j ≤ m, let P(i,j) be the problem of determining opt(i,j), the length of a longest common subsequence of the prefixes ⟨a1, …, ai⟩ and ⟨b1, …, bj⟩.
Recurrence: for i, j > 0,
opt(i,j) = opt(i−1, j−1) + 1 if ai = bj, and
opt(i,j) = max(opt(i−1, j), opt(i, j−1)) otherwise.
Base cases: for all 0 ≤ i ≤ n, opt(i,0) = 0, and for all 0 ≤ j ≤ m, opt(0,j) = 0.

Longest Common Subsequence
Iterating through the subproblems P(i,j) in lexicographic order (increasing i, then increasing j) guarantees that P(i − 1, j) and P(i, j − 1) are solved before P(i, j), so all dependencies are satisfied.
The overall solution is opt(n, m).
Each of O(nm) subproblems is solved in constant time, for an overall time complexity of O(nm).
To reconstruct the longest common subsequence itself, we can record the direction from which the value opt(i,j) was obtained in the table, and backtrack.
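A Python sketch of the table-filling (ours, not from the slides); recording directions for backtracking is omitted for brevity.

```python
def lcs_length(S, T):
    """Length of a longest common subsequence of S and T."""
    n, m = len(S), len(T)
    opt = [[0] * (m + 1) for _ in range(n + 1)]  # base cases: empty prefixes
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if S[i - 1] == T[j - 1]:
                opt[i][j] = opt[i - 1][j - 1] + 1
            else:
                opt[i][j] = max(opt[i - 1][j], opt[i][j - 1])
    return opt[n][m]

print(lcs_length("ABCDEGG", "ACBEEFG"))  # 4, e.g. ABEG
```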

Longest Common Subsequence
What if we have to find a longest common subsequence of three sequences S, S∗, S∗∗?
Question
Can we do LCS (LCS (S, S∗) , S∗∗)?
Answer
Not necessarily!

Longest Common Subsequence
Let S = ABCDEGG, S∗ = ACBEEFG and S∗∗ = ACCEDGF. Then
LCS(S, S∗, S∗∗) = ACEG.
However,
LCS(LCS(S, S∗), S∗∗) = LCS(LCS(ABCDEGG, ACBEEFG), S∗∗)
= LCS(ABEG, ACCEDGF)
= AEG.
Exercise
Confirm that LCS (LCS (S∗, S∗∗) , S) and LCS (LCS (S, S∗∗) , S∗) also give wrong answers.

Longest Common Subsequence
Problem
Instance: three sequences S = ⟨a1, a2, …, an⟩, S∗ = ⟨b1, b2, …, bm⟩ and S∗∗ = ⟨c1, c2, …, cl⟩.
Task: find the length of a longest common subsequence of S, S∗ and S∗∗.

Longest Common Subsequence
Solution
Subproblems: for all 0 ≤ i ≤ n, all 0 ≤ j ≤ m and all 0 ≤ k ≤ l, let P(i, j, k) be the problem of determining opt(i, j, k), the length of the longest common subsequence of the truncated sequences Si = ⟨a1, a2, …, ai⟩, S∗j = ⟨b1, b2, …, bj⟩ and S∗∗k = ⟨c1, c2, …, ck⟩.
Recurrence: for all i, j, k > 0,
opt(i, j, k) = opt(i−1, j−1, k−1) + 1 if ai = bj = ck, and
opt(i, j, k) = max(opt(i−1, j, k), opt(i, j−1, k), opt(i, j, k−1)) otherwise.
Base cases: if i = 0, j = 0 or k = 0, then opt(i, j, k) = 0.

Shortest Common Supersequence
Problem
Instance: two sequences s = ⟨a1, a2, …, an⟩ and s∗ = ⟨b1, b2, …, bm⟩.
Task: find a shortest common supersequence S of s,s∗, i.e., a shortest possible sequence S such that both s and s∗ are subsequences of S.

Shortest Common Supersequence
Solution: Find a longest common subsequence LCS(s,s∗) of s and s∗, then add back the differing elements of the two sequences in the right places, in any compatible order.
For example, if
s = abacada and s∗ = xbycazd,
then
LCS(s, s∗) = bcad,
and therefore
SCS(s, s∗) = axbyacazda.
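A Python sketch of this construction (ours, not from the slides): it fills the LCS table as before, then backtracks, emitting shared elements once and all other elements in a compatible order.

```python
def shortest_common_supersequence(s, t):
    """One shortest string containing both s and t as subsequences."""
    n, m = len(s), len(t)
    opt = [[0] * (m + 1) for _ in range(n + 1)]  # the LCS table, as before
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if s[i - 1] == t[j - 1]:
                opt[i][j] = opt[i - 1][j - 1] + 1
            else:
                opt[i][j] = max(opt[i - 1][j], opt[i][j - 1])
    # backtrack: shared elements are emitted once, others added back in order
    out, i, j = [], n, m
    while i > 0 and j > 0:
        if s[i - 1] == t[j - 1]:
            out.append(s[i - 1]); i -= 1; j -= 1
        elif opt[i - 1][j] >= opt[i][j - 1]:
            out.append(s[i - 1]); i -= 1
        else:
            out.append(t[j - 1]); j -= 1
    out.extend(reversed(s[:i])); out.extend(reversed(t[:j]))
    return "".join(reversed(out))

print(shortest_common_supersequence("abacada", "xbycazd"))
# prints a shortest common supersequence of length 10 (one of several valid answers)
```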

Edit Distance
Problem
Instance: Given two text strings A of length n and B of length m, you want to transform A into B. You are allowed to insert a character, delete a character and to replace a character with another one. An insertion costs cI , a deletion costs cD and a replacement costs cR.
Task: find the lowest total cost transformation of A into B.

Edit Distance
Edit distance is another measure of the similarity of pairs of strings.
Note: if all operations have a unit cost, then you are looking for the minimal number of such operations required to transform A into B; this number is called the edit distance between A and B.
If the sequences are sequences of DNA bases and the costs reflect the probabilities of the corresponding mutations, then the minimal cost represents how closely related the two sequences are.

Edit Distance
Again we consider prefixes of both strings, say A[1..i] and B [1..j ].
We have the following options to transform A[1..i] into B [1..j ]:
1. delete A[i] and then transform A[1..i − 1] into B[1..j];
2. transform A[1..i] to B[1..j − 1] and then append B[j];
3. transform A[1..i − 1] to B[1..j − 1] and if necessary replace A[i] by B[j].
If i = 0 or j = 0, we only insert or delete respectively.

Edit Distance
Solution
Subproblems: for all 0 ≤ i ≤ n and 0 ≤ j ≤ m, let P(i,j) be the problem of determining opt(i,j), the minimum cost of transforming the sequence A[1..i] into the sequence B[1..j].
Recurrence: for i, j ≥ 1, opt(i,j) is the minimum of:
opt(i−1, j) + cD,
opt(i, j−1) + cI, and
opt(i−1, j−1) if A[i] = B[j], or opt(i−1, j−1) + cR if A[i] ≠ B[j].
Base cases: opt(i, 0) = i·cD and opt(0, j) = j·cI.

Edit Distance
The overall solution is opt(n, m).
Each of O(nm) subproblems is solved in constant time, for a total time complexity of O(nm).
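A Python sketch (ours, not from the slides) with the three operation costs as parameters:

```python
def edit_distance(A, B, cI=1, cD=1, cR=1):
    """Minimum total cost to transform A into B using insertions,
    deletions and replacements with costs cI, cD, cR."""
    n, m = len(A), len(B)
    opt = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        opt[i][0] = i * cD          # delete all of A[1..i]
    for j in range(1, m + 1):
        opt[0][j] = j * cI          # insert all of B[1..j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            replace = 0 if A[i - 1] == B[j - 1] else cR
            opt[i][j] = min(opt[i - 1][j] + cD,
                            opt[i][j - 1] + cI,
                            opt[i - 1][j - 1] + replace)
    return opt[n][m]

print(edit_distance("kitten", "sitting"))  # 3 with unit costs
```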

Maximising an expression
Problem
Instance: a sequence of numbers with operations +, −, × in between, for example
1+2−3×6−1−2×3−5×7+2−8×9.
Task: place brackets in a way that the resulting expression has the largest possible value.

Maximising an expression
What will be the subproblems?
Similar to the matrix chain multiplication problem earlier, it’s
not enough to just solve for prefixes A[1..i].
Maybe for a subsequence of numbers A[i + 1..j] place the brackets so that the resulting expression is maximised?

Maximising an expression
How about the recurrence?
It is natural to break down A[i + 1..j ] into
A[i + 1..k ] ⊙ A[k + 1..j ], with cases ⊙ = +, −, ×.
In the case ⊙ = +, we want to maximise the values over both
A[i + 1..k] and A[k + 1..j].
This doesn’t work for the other two operations!
We should look for placements of brackets not only for the maximal value but also for the minimal value! For example, subtracting the minimal value of A[k + 1..j] maximises the result of −, and for × the product of two negative values of large magnitude may be maximal.

Maximising an expression
Exercise
Write a complete solution for this problem. Your solution should include the subproblem specification, recurrence and base cases. You should also describe how the overall solution is to be obtained, and analyse the time complexity of the algorithm.

Turtle Tower
Problem
Instance: You are given n turtles, and for each turtle you are given its weight and its strength. The strength of a turtle is the maximal weight you can put on it without cracking its shell.
Task: find the largest possible number of turtles which you can stack one on top of the other, without cracking any turtle.
Hint
Order turtles in an increasing order of the sum of their weight and their strength, and proceed by recursion.

Single Source Shortest Paths
Problem
Instance: a directed weighted graph G = (V , E ) with edge weights w(e) which can be negative, but without cycles of negative total weight, and a vertex s ∈ V .
Task: find the weight of the shortest path from vertex s to every other vertex t.

Single Source Shortest Paths
How does this problem differ from the one solved by Dijkstra's algorithm?
In this problem, we allow negative edge weights, so the greedy strategy no longer works.
Note that we disallow cycles of negative total weight. This is only because with such a cycle, there is no shortest path – you can take as many laps around a negative cycle as you like.
This problem was first solved by Shimbel in 1955, and was one of the earliest uses of Dynamic Programming.

Bellman-Ford algorithm
Observation
For any vertex t, there is a shortest s − t path without cycles.
Proof Outline
Suppose the opposite, and let p be a shortest s − t path; by assumption it contains a cycle. Since there are no negative weight cycles, removing this cycle produces an s − t path of no greater length.
Observation
It follows that every shortest s − t path contains any vertex v at most once, and therefore has at most |V | − 1 edges.

Bellman-Ford algorithm
For every vertex t, let’s find the weight of a shortest s − t path consisting of at most i edges, for each i up to |V | − 1.
Suppose the path in question is
p = s → … → v → t,
􏱂 􏱁􏱀 􏱃
p′
with the final edge going from v to t.
Then p′ must itself be a shortest path from s to v with at most i − 1 edges, which is another subproblem!
No such recursion is necessary if t = s, or if i = 0.

Bellman-Ford algorithm
Solution
Subproblems: for all 0 ≤ i ≤ |V| − 1 and all t ∈ V, let P(i,t) be the problem of determining opt(i,t), the length of a shortest path from s to t which contains at most i edges.
Recurrence: for all i > 0 and t ≠ s,
opt(i, t) = min{opt(i − 1, v) + w(v, t) | (v, t) ∈ E}.
Base cases: opt(i, s) = 0, and for t ≠ s, opt(0, t) = ∞.

Bellman-Ford algorithm
The overall solutions are given by opt(|V| − 1, t).
We proceed in |V| rounds (i = 0, 1, …, |V| − 1). In each round, each edge of the graph is considered only once.
Therefore the time complexity is O(|V| × |E|).
This method is sometimes called “relaxation”, because we progressively relax the additional constraint on how many edges the shortest paths can contain.
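A Python sketch of the rounds (ours, not from the slides); it keeps the previous row of the table explicitly to match the recurrence.

```python
def bellman_ford(vertices, edges, s):
    """opt(i, t): weight of a shortest s-t path with at most i edges.

    vertices: iterable of vertex labels; edges: list of (v, t, w) triples.
    Assumes no negative-weight cycles. Returns {t: opt(|V|-1, t)}.
    """
    INF = float("inf")
    opt = {t: INF for t in vertices}
    opt[s] = 0                                  # base cases
    for i in range(1, len(vertices)):           # rounds i = 1 .. |V| - 1
        prev = dict(opt)                        # row i - 1 of the table
        for v, t, w in edges:
            if t != s and prev[v] + w < opt[t]:
                opt[t] = prev[v] + w
        # keeping opt[t] from the previous round realises opt(i-1, t)
        # as a candidate, which does not change the answer
    return opt

edges = [("s", "a", 4), ("s", "b", 2), ("b", "a", -3), ("a", "c", 1)]
print(bellman_ford("sabc", edges, "s"))  # {'s': 0, 'a': -1, 'b': 2, 'c': 0}
```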

Bellman-Ford algorithm
How do we reconstruct an actual shortest s − t path?
As usual, we’ll store one step at a time and backtrack. Let pred(i,t) be the immediate predecessor of vertex t on a shortest s − t path of at most i edges.
The additional recurrence required is
pred(i, t) = argmin_v {opt(i − 1, v) + w(v, t) | (v, t) ∈ E}.

Bellman-Ford algorithm
There are several small improvements that can be made to this algorithm.
As stated, we build a table of size O(|V|²), with a new row for each ‘round’.
It is possible to reduce this to O(|V|). Including opt(i − 1, t) as a candidate for opt(i, t) doesn't change the result, so we can instead maintain a table with only one row, and overwrite it at each round.

Bellman-Ford algorithm
The SPFA (Shortest Path Faster Algorithm) speeds up the later rounds by ignoring some edges. This optimisation and others (e.g. early exit) do not change the worst case time complexity of O(|V| × |E|), but they can be a significant speed-up in practical applications.
The Bellman-Ford algorithm can also be augmented to detect cycles of negative weight.

All Pairs Shortest Paths
Problem
Instance: a directed weighted graph G = (V , E ) with edge weights w(e) which can be negative, but without cycles of negative total weight.
Task: find the weight of the shortest path from every vertex s to every other vertex t.

Floyd-Warshall algorithm
We can use a similar idea, this time in terms of the intermediate vertices allowed on an s − t path.
Label the vertices of V as v1,v2,…,vn, where n = |V|.
Let S be the set of vertices allowed as intermediate vertices. Initially S is empty, and we add vertices v1, v2, . . . , vn one at a time.

Floyd-Warshall algorithm
Question
When is the shortest path from s to t using the first k vertices as intermediates actually an improvement on the value from the previous round?
Answer
When there is a shorter path of the form
s → … → vk → … → t,
where both segments use only v1, …, vk−1 as intermediate vertices.

Floyd-Warshall algorithm
Solution
Subproblems: for all 1 ≤ i, j ≤ n and 0 ≤ k ≤ n, let P(i, j, k) be the problem of determining opt(i, j, k), the weight of a shortest path from vi to vj using only v1, …, vk as intermediate vertices.
Recurrence: for all 1 ≤ i, j, k ≤ n,
opt(i, j, k) = min(opt(i, j, k−1), opt(i, k, k−1) + opt(k, j, k−1)).
Base cases:
opt(i, j, 0) = 0 if i = j,
opt(i, j, 0) = w(i, j) if (vi, vj) ∈ E, and
opt(i, j, 0) = ∞ otherwise.

Floyd-Warshall algorithm
The overall solutions are given by opt(i,j,n), where all vertices are allowed as intermediates.
Each of O(|V|³) subproblems is solved in constant time, so the time complexity is O(|V|³).
The space complexity can again be improved by overwriting the table every round.
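A Python sketch (ours, not from the slides) using the single overwritten table just mentioned:

```python
def floyd_warshall(n, w):
    """All-pairs shortest path weights.

    n: number of vertices (labelled 0 .. n-1).
    w: dict mapping edge (i, j) to its weight. Assumes no negative cycles.
    A single n x n table is overwritten each round.
    """
    INF = float("inf")
    opt = [[0 if i == j else w.get((i, j), INF) for j in range(n)]
           for i in range(n)]                    # base cases: k = 0
    for k in range(n):                           # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if opt[i][k] + opt[k][j] < opt[i][j]:
                    opt[i][j] = opt[i][k] + opt[k][j]
    return opt

w = {(0, 1): 3, (1, 2): -2, (0, 2): 2, (2, 0): 1}
print(floyd_warshall(3, w))  # [[0, 3, 1], [-1, 0, -2], [1, 4, 0]]
```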

Integer Partitions
Problem
Instance: a positive integer n.
Task: compute the number of partitions of n, i.e., the number of distinct multisets of positive integers {n1, …, nk} such that n1 + … + nk = n.

Integer Partitions
Hint
It’s not obvious how to construct a recurrence between the number of partitions of different values of n. Instead consider restricted partitions!
Let nump(i,j) denote the number of partitions of j in which no part exceeds i, so that the answer is nump(n,n).
The recursion is based on relaxation of the allowed size i of the parts of j for all j up to n. It distinguishes those partitions where all parts are ≤ i − 1 and those where at least one part is exactly i .
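Following the hint, a Python sketch (ours, not from the slides) of the nump table:

```python
def partitions(n):
    """Number of partitions of n.

    nump[i][j]: partitions of j with no part exceeding i; the recurrence
    splits on whether all parts are <= i-1 or at least one part equals i.
    """
    nump = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        nump[i][0] = 1                        # the empty partition
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            nump[i][j] = nump[i - 1][j]       # all parts <= i - 1
            if j >= i:
                nump[i][j] += nump[i][j - i]  # at least one part equal to i
    return nump[n][n]

print([partitions(k) for k in range(1, 8)])  # [1, 2, 3, 5, 7, 11, 15]
```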

Table of Contents
1. Introduction
2. Example Problems
3. Puzzle

PUZZLE
You have 2 lengths of fuse that are guaranteed to burn for precisely 1 hour each. Other than that fact, you know nothing; they may burn at different (indeed, at variable) rates, they may be of different lengths, thicknesses, materials, etc.
How can you use these two fuses to time a 45 minute interval?

That’s All, Folks!!
