# Algorithm Design COMP3027/3927 Tutorial Questions, Tutorial 4 — The University of Sydney, 2019 Semester 1, School of Computer Science


Pre-tutorial questions

Do you know the basic concepts of this week’s lecture content? These questions are only for testing yourself. They will not be explicitly discussed in the tutorial, and no solutions will be given for them.

1. Divide and Conquer

(a) What is the general idea of a Divide-and-Conquer algorithm? Can you break up the general structure into three steps?

(b) Why are recurrences usually needed to analyse the running time of a Divide-and-Conquer algorithm?

2. Algorithms

(a) Describe the general idea of merge sort.

(b) What are the three main steps in the algorithm for counting inversions?

3. Recurrences

(a) State the master method (write down all three cases).

(b) Try to explain the master method using a recursion tree.

Tutorial

Problem 1

Solve the following recurrences:

1. T(n) = 4T(n/2) + n^2
2. T(n) = T(n/2) + 2^n
3. T(n) = 16T(n/4) + n
4. T(n) = 2T(n/2) + n log n
5. T(n) = √2 · T(n/2) + log n
6. T(n) = 3T(n/2) + n
7. T(n) = 3T(n/3) + √n

Solution: First we state the Master Theorem.

The Master Theorem applies to recurrences of the form T(n) = aT(n/b) + f(n), where a ≥ 1 and b > 1 are constants and f(n) is an asymptotically positive function. There are three cases:

1. If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).

2. If f(n) = Θ(n^(log_b a) · log^k n) for some constant k ≥ 0, then T(n) = Θ(n^(log_b a) · log^(k+1) n).

3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and f(n) satisfies the regularity condition, then T(n) = Θ(f(n)). Regularity condition: a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n.
1. T(n) = 4T(n/2) + n^2 → T(n) = Θ(n^2 log n) (Case 2)
2. T(n) = T(n/2) + 2^n → T(n) = Θ(2^n) (Case 3)
3. T(n) = 16T(n/4) + n → T(n) = Θ(n^2) (Case 1)
4. T(n) = 2T(n/2) + n log n → T(n) = Θ(n log^2 n) (Case 2)
5. T(n) = √2 · T(n/2) + log n → T(n) = Θ(√n) (Case 1)
6. T(n) = 3T(n/2) + n → T(n) = Θ(n^(lg 3)) (Case 1)
7. T(n) = 3T(n/3) + √n → T(n) = Θ(n) (Case 1)
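As a quick sanity check, a recurrence can be evaluated numerically on powers of two and compared with its closed form. A small sketch for recurrence 1, T(n) = 4T(n/2) + n^2, with the assumed base case T(1) = 1 (the tutorial does not specify one), where the Master Theorem predicts Θ(n^2 log n):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 4*T(n/2) + n^2, with assumed base case T(1) = 1
    if n <= 1:
        return 1
    return 4 * T(n // 2) + n * n

# On n = 2^k the recursion tree does n^2 work at every one of the k+1 levels,
# so T(2^k) = (k + 1) * 4^k and the ratio T(n) / (n^2 * log2 n) tends to 1,
# consistent with Theta(n^2 log n).
for k in (4, 8, 16):
    n = 2 ** k
    print(n, T(n) / (n * n * math.log2(n)))
```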
Problem 2
Consider the following algorithm.
Algorithm 1 Reverse
1: function reverse(A)
2:     if |A| = 1 then
3:         return A
4:     else
5:         Let B and C be the first and second half of A
6:         return the concatenation of reverse(C) and reverse(B)
7:     end if
8: end function
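A direct Python transcription of this pseudocode (how to split an odd-length array, and the handling of an empty array, are assumptions the pseudocode leaves open):

```python
def reverse(A):
    # Divide and conquer reversal: reverse each half, then swap their order.
    if len(A) <= 1:          # base case; also covers the empty array
        return A
    mid = len(A) // 2        # assumed split point for odd lengths
    B, C = A[:mid], A[mid:]  # first and second half of A
    return reverse(C) + reverse(B)
```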
Let T(n) be the running time of the algorithm on an instance of size n. Write down the recurrence relation for T(n) and solve it by unrolling it.

Solution: Splitting A and concatenating the two reversed halves takes O(n) time, and there are two recursive calls on halves, so

    T(n) = 2T(n/2) + O(n) = O(n log n).

Problem 3

The product of two n × n matrices X and Y is a third n × n matrix Z = XY, where the (i, j) entry of Z is Z_ij = Σ_{k=1..n} X_ik · Y_kj. Suppose that X and Y are divided into four n/2 × n/2 blocks each:

    X = [ A  B ]        Y = [ E  F ]
        [ C  D ]  and       [ G  H ]
Using this block notation we can express the product of X and Y as

    XY = [ AE+BG  AF+BH ]
         [ CE+DG  CF+DH ]

In this way, one multiplication of n × n matrices can be expressed in terms of 8 multiplications and 4 additions that involve n/2 × n/2 matrices. Let T(n) be the time complexity of multiplying two n × n matrices using this recursive algorithm.

1. Derive the recurrence for T(n). (Assume adding two k × k matrices takes O(k^2) time.)
2. Solve the recurrence by unrolling it.

Solution:

1. Breaking each matrix into four sub-matrices takes O(n^2) time and so does putting together the results from the recursive calls. Therefore the recurrence is T(n) = 8T(n/2) + O(n^2).
2. Unrolling gives T(n) = O(n^3). Therefore, this is no better than the standard algorithm for computing the product of two n × n matrices.
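The 8-multiplication block scheme can be sketched directly in Python. This is a reference sketch for matrices given as lists of lists with n a power of two, not an efficient implementation (a practical version would use NumPy and switch to a direct method at a larger base case):

```python
def mat_add(X, Y):
    # Entrywise sum of two equally sized matrices: O(k^2) for k x k inputs.
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def split(M):
    # Split an n x n matrix into four n/2 x n/2 blocks [A B; C D].
    h = len(M) // 2
    return ([row[:h] for row in M[:h]], [row[h:] for row in M[:h]],
            [row[:h] for row in M[h:]], [row[h:] for row in M[h:]])

def mat_mul(X, Y):
    # Recursive block multiplication: 8 half-size products, 4 additions.
    n = len(X)
    if n == 1:
        return [[X[0][0] * Y[0][0]]]
    A, B, C, D = split(X)
    E, F, G, H = split(Y)
    top_left  = mat_add(mat_mul(A, E), mat_mul(B, G))
    top_right = mat_add(mat_mul(A, F), mat_mul(B, H))
    bot_left  = mat_add(mat_mul(C, E), mat_mul(D, G))
    bot_right = mat_add(mat_mul(C, F), mat_mul(D, H))
    top = [l + r for l, r in zip(top_left, top_right)]
    bot = [l + r for l, r in zip(bot_left, bot_right)]
    return top + bot
```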
Problem 4
Your friend Alex is very excited because he has discovered a novel algorithm for sorting an array of n numbers. The algorithm makes three recursive calls on arrays of size 2n/3 and spends only O(1) time per call.
Algorithm 2 New-Sort
1: procedure new-sort(A)
2:     if |A| < 3 then
3:         sort A directly
4:     else
5:         new-sort(A[0], …, A[2n/3])
6:         new-sort(A[n/3], …, A[n])
7:         new-sort(A[0], …, A[2n/3])
8:     end if
9: end procedure
Alex thinks his breakthrough sorting algorithm is very fast but has no idea how to analyze its complexity or prove its correctness. Your task is to help Alex:
1. Find the time complexity of new-sort.
2. Prove that the algorithm actually sorts the input array.
Solution:

1. Let T(n) be the running time of the algorithm on an array of length n. Then T(n) = 3T(2n/3) + O(1), which is O(n^(log_{3/2} 3)). That’s roughly O(n^2.71). Much worse than bubble sort! Alex is heartbroken…

2. The algorithm, however, is correct. Think of the bottom 1/3 of the elements in sorted order. After Line 6 is executed we are guaranteed that these elements lie in the first 2/3 of the array, and thus Line 7 correctly places them in the first 1/3 of the array. A similar argument can be made about the top 1/3 of the elements in sorted order ending up in the last 1/3 of the array. Therefore, the middle 1/3 of the elements in sorted order are placed in the middle 1/3 of the array. Within each 1/3 of the array the elements are sorted, so the array is indeed sorted at the end of the algorithm.
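A runnable sketch of new-sort. The exact index arithmetic is an assumption: the pseudocode’s fractional indices are rounded so that each recursive call covers n − ⌊n/3⌋ = ⌈2n/3⌉ elements, matching the overlap the correctness argument relies on:

```python
def new_sort(A, lo=0, hi=None):
    # Sort A[lo:hi] in place with three overlapping recursive calls.
    if hi is None:
        hi = len(A)
    n = hi - lo
    if n < 3:
        A[lo:hi] = sorted(A[lo:hi])   # base case: sort directly
        return
    third = n // 3                    # assumed rounding of n/3
    new_sort(A, lo, hi - third)       # Line 5: sort the first two thirds
    new_sort(A, lo + third, hi)       # Line 6: sort the last two thirds
    new_sort(A, lo, hi - third)       # Line 7: sort the first two thirds again
```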
Problem 5
Given an array A holding n objects, we want to test whether there is a majority element; that is, we want to know whether there is an object that appears in more than n/2 positions of A.
Assume we can test equality of two objects in O(1) time, but we cannot use a dictionary indexed by the objects. Your task is to design an O(n log n) time algorithm for solving the majority problem.
1. Show that if x is a majority element in the array, then x is a majority element in the first half of the array or in the second half of the array.
2. Show how to check in O(n) time if a candidate element x is indeed a majority element.
3. Put these observations together to design a divide-and-conquer algorithm whose running time obeys the recurrence T(n) = 2T(n/2) + O(n).
4. Solve the recurrence by unrolling it.
Solution:
1. If x is a majority element in the original array then, by the pigeonhole principle, x must be a majority element in at least one of the two halves. Suppose that n is even. If x is a majority element, then at least n/2 + 1 entries equal x. By the pigeonhole principle, either the first half or the second half must contain at least n/4 + 1 copies of x, which makes x a majority element within that half.
2. We scan the array counting how many entries equal x. If the count is more than n/2 we declare x to be a majority element. The algorithm scans the array and spends O(1) time per element, so O(n) time overall.
3. We break the input array A into two halves, recursively call the algorithm on each half, and then test in A if either of the elements returned by the recursive calls is indeed a majority element of A. If that is the case we return the majority element, otherwise, we report that there is “no majority element”.
To argue correctness, observe that if there is no majority element of A then the algorithm must return “no majority element”, since no matter what the recursive calls return, we always check whether an element is indeed a majority element. Otherwise, if there is a majority element x in A, we are guaranteed that one of our recursive calls will identify x, and the subsequent check will lead the algorithm to return x.
Regarding time complexity, breaking the main problem into the two subproblems and testing the two candidate majority elements can be done in O(n) time. Thus we get the recurrence T(n) = 2T(n/2) + O(n).

4. Unrolling the recurrence gives T(n) = O(n log n).
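The scheme in the solution can be sketched as follows; the helper names are illustrative, not from the tutorial:

```python
def majority(A):
    # Divide-and-conquer majority element; returns None if there is none.
    def is_majority(x, lo, hi):
        # O(n) check: does x fill more than half of A[lo:hi]?
        return sum(1 for i in range(lo, hi) if A[i] == x) > (hi - lo) / 2

    def rec(lo, hi):
        if hi - lo == 1:
            return A[lo]
        mid = (lo + hi) // 2
        # A majority of A[lo:hi] must be a majority of one of the halves.
        for cand in (rec(lo, mid), rec(mid, hi)):
            if cand is not None and is_majority(cand, lo, hi):
                return cand
        return None

    return rec(0, len(A)) if A else None
```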
Problem 6
Suppose we are given an array A with n distinct numbers. We say an index i is locally optimal if A[i] < A[i − 1] and A[i] < A[i + 1] for 0 < i < n − 1, or A[i] < A[i + 1] if i = 0, or A[i] < A[i − 1] if i = n − 1.
Design an algorithm for finding a locally optimal index using divide and conquer. Your algorithm should run in O(log n) time.
Solution: First we test whether i = 1 or i = n is locally optimal. If neither is, we know that A[1] > A[2] and A[n − 1] < A[n], so a locally optimal index must exist strictly inside the array. We query the middle index i = n/2. If i is locally optimal we return it; otherwise either A[i] > A[i − 1] or A[i] > A[i + 1]. In the former case we recurse on A[1, …, i], in the latter on A[i, …, n]; either way the subarray we recurse on still decreases at its left end and increases at its right end, so it contains a locally optimal index, and we reduce the size of the array by half. Rather than creating a new array each time we recurse (which would take O(n) time), we keep the working subarray implicit by maintaining a pair of indices (b, e) telling us that we are working with A[b, …, e]. Thus each call demands O(1) work plus the work done by the recursive call. This leads to the recurrence

    T(n) = T(n/2) + O(1),

which solves to T(n) = O(log n).
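An iterative version of this search, maintaining the pair of indices (b, e) explicitly. It uses only the comparison against the right neighbour, which is an equivalent (slightly compressed) form of the case analysis in the solution:

```python
def local_min(A):
    # Binary search for a locally optimal (locally minimal) index.
    # Invariant: the subarray A[b..e] decreases into its left end and
    # increases out of its right end, so it contains a local minimum.
    b, e = 0, len(A) - 1
    while b < e:
        i = (b + e) // 2
        if A[i] > A[i + 1]:   # still descending: a local min lies to the right
            b = i + 1
        else:                 # A[i] < A[i+1]: a local min lies in A[b..i]
            e = i
    return b
```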

Problem 7

[Hard] Given two sorted lists of size m and n, give an O(log(m + n)) time algorithm for finding the kth smallest element in the union of the two lists.


Solution: (Sketch.) Let A be the larger array with n elements and B be the smaller array with m elements. Let us assume for simplicity that all numbers are distinct.

Let rank(x) be the rank of x in the union of A and B. Notice that for each i we have i ≤ rank(A[i]) ≤ i + m. (Why?) Furthermore, if we compare A[i] to B[1], B[m/4], B[m/2], B[3m/4], and B[m] we can better estimate i + a ≤ rank(A[i]) ≤ i + b where b − a ≤ m/4. (Why?) Let us picture this estimate as an interval Ii = [i + a, i + b].

Now observe that if k < Ii (if k is smaller than the left endpoint of Ii) then k < rank(A[i]) so we can ignore every item larger than A[i] from the large array and search for the kth element in the smaller instance. Similarly, if k > Ii then k > rank(A[i]) so we can ignore everything smaller than A[i] from the large array and search for the (k − i)th element in the smaller instance.

The only issue is what to do if k ∈ Ii. To get around this, we compute the intervals of two indices. Notice that rank(A[3n/4]) − rank(A[n/4]) ≥ n/2 and that |I_{n/4}|, |I_{3n/4}| ≤ m/4 ≤ n/4; therefore I_{n/4} and I_{3n/4} must be disjoint, and so either k > I_{n/4} or k < I_{3n/4}. That means that we can always remove the elements that are either smaller than A[n/4] or larger than A[3n/4].
For the time complexity we note that we do not really need to create a new array each time we recurse; we can simply keep two pairs of indices, one for A and one for B, that tell us the parts of the arrays where we are still searching. Then, in each iteration we get rid of n/4 ≥ (m + n)/8 elements in O(1) time. Therefore, the combined length of the intervals is at most 7/8 of their previous combined length. Thus, if we let N measure the combined length of the intervals, we get the recurrence T(N) = T(7N/8) + O(1), which solves to T(N) = O(log N). Therefore, the algorithm runs in O(log(n + m)) time.
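A compact variant of this idea, which is not the exact interval scheme sketched above: compare the elements roughly k/2 positions into each remaining list and discard the smaller prefix, halving k each round, which also gives O(log(m + n)) time. The index offsets a, b stand in for the pairs of indices mentioned in the solution:

```python
def kth_smallest(A, B, k):
    # Return the k-th smallest (1-indexed) element of sorted lists A and B.
    a, b = 0, 0                         # prefixes of A and B already discarded
    while True:
        if a == len(A):                 # A exhausted: answer is in B
            return B[b + k - 1]
        if b == len(B):                 # B exhausted: answer is in A
            return A[a + k - 1]
        if k == 1:
            return min(A[a], B[b])
        half = k // 2
        i = min(a + half, len(A)) - 1   # candidate index in A
        j = min(b + half, len(B)) - 1   # candidate index in B
        if A[i] <= B[j]:
            # A[a..i] are all among the k-1 smallest: safe to discard.
            k -= i - a + 1
            a = i + 1
        else:
            k -= j - b + 1
            b = j + 1
```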
Problem 8
[Advanced] Suppose we are given a set S of n elements. Each element has a value and a weight. For an element x ∈ S, let

• S_{<x} be the set of elements of S whose value is less than the value of x, and
• S_{>x} be the set of elements of S whose value is greater than the value of x.

For any subset R ⊆ S, define its weight w(R) to be the sum of the weights of elements in R. The weighted median of S is any element x such that w(S_{<x}) ≤ w(S)/2 and w(S_{>x}) ≤ w(S)/2.

Give an algorithm that computes the weighted median of S in O(n) time. Your input consists of two unsorted arrays V[1…n] and W[1…n] where V[i] and W[i] are the values and weights of element i, respectively. You may assume that the values are distinct and all weights are positive.


Solution: (Sketch.) This uses the same basic ideas as the linear-time median selection algorithm. First, we define the weighted analogue of the selection problem. For 0 ≤ α ≤ 1, define the weighted α-median of a set S to be the element x such that w(S_{<x}) ≤ α · w(S) and w(S_{>x}) ≤ (1 − α) · w(S). The weighted median is then the weighted (1/2)-median.

We now devise a divide-and-conquer algorithm for the weighted α-median problem. The key idea is to use the linear-time median selection algorithm to find a “pivot” that splits the input into two halves and recurse appropriately. First, we find the median in terms of value, call it x, and compute the sets S_{<x} and S_{>x} together with their weights in O(n) time. If x is the weighted α-median we return it. Otherwise, exactly one of the two sides is too heavy, and we recurse on that side with the target weight adjusted to account for the elements we discard. Each call halves the number of elements and does O(n) work, so T(n) = T(n/2) + O(n) = O(n).
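For reference, the definition can be checked with a simple O(n log n) version that sorts by value; the O(n) algorithm above replaces the sort with linear-time median selection and recursion:

```python
def weighted_median(V, W):
    # O(n log n) reference version: scan the elements in value order and
    # return the first x with w(S_{<x}) <= w(S)/2 and w(S_{>x}) <= w(S)/2.
    total = sum(W)
    below = 0
    for v, w in sorted(zip(V, W)):
        if below <= total / 2 and total - below - w <= total / 2:
            return v
        below += w
```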
