0) Exercise 17.1-3: (Ross Hacquebard)

AGGREGATE ANALYSIS

The i'th operation costs i if i is an exact power of 2, and 1 otherwise:

    Operation:  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 ...
    Cost:       1  2  1  4  1  1  1  8  1  1  1  1  1  1  1 16  1 ...

We can break the total cost into the following sum:

    Cost = (1 for each operation) - (1 for each operation that is a power of 2) + (the powers of 2 themselves)

so:

    Cost = (1 + 1 + ... [n times]) - (1 + 1 + ... [floor(lg n)+1 times]) + SUM[k=0..floor(lg n)] 2^k

There are n operations, and floor(lg n)+1 of them (1, 2, 4, ...) are exact powers of 2. The sum SUM[k=0..floor(lg n)] 2^k gives 1 + 2 + 4 + 8 + ... up to the last exact power of 2 that is at most n. This simplifies to:

    Cost = n - (floor(lg n) + 1) + 2^(floor(lg n) + 1) - 1

The sum of consecutive powers of 2 equals twice the last power, minus 1. Think of binary numbers: a string of all '1' bits equals the next most significant bit, minus 1.
    Eg) 1 + 2 + 4 + 8 + 16 = 2^(4+1) - 1 = 31
    Eg) (in binary) 11111 = 100000 - 1

Finally, since 2^(lg n) = n, we know 2^(floor(lg n) + 1) <= 2n, so the cost simplifies to:

    Cost <= n - lg n + 2n - 1
         <  n + 2n               (keep only the dominant terms in the aggregate analysis)
         =  3n

The total cost of n operations is under 3n, so 3n/n gives an average cost of less than 3 per operation. Therefore the amortized cost per operation is O(1), by aggregate analysis.

1) Exercise 17.2-1: (Ross Hacquebard)

ACCOUNTING METHOD

A stack whose size never exceeds k is copied after every k operations. We can assign the amortized costs as follows:

    PUSH = $3   ($1 pays for the PUSH itself, $2 is banked for a later POP and COPY)
    POP  = $0   (already paid for)
    COPY = $0   (already paid for)

Each PUSH actually costs $1. Once the stack holds k elements, there is $2k of credit on the stack: $k covers the cost of popping those elements whenever the pops occur, and $k covers the cost of copying the (at most k-element) stack. Therefore the amortized cost of each operation is O(1), and the cost of n operations, including copying the stack, is O(n).
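The aggregate bound from Exercise 17.1-3 can be checked numerically. This is a quick sketch, not part of the original solution; the function names are mine.

```python
# Check the Exercise 17.1-3 aggregate bound: n operations cost less than 3n
# when operation i costs i if i is an exact power of 2, and 1 otherwise.

def op_cost(i):
    """Cost of the i-th operation (i >= 1): i if i is a power of 2, else 1."""
    return i if (i & (i - 1)) == 0 else 1

def total_cost(n):
    """Total cost of the first n operations."""
    return sum(op_cost(i) for i in range(1, n + 1))

# n = 16: eleven non-powers cost 1 each, and 1+2+4+8+16 = 31, so total = 42.
assert total_cost(16) == 42

for n in (1, 10, 100, 1000, 10**5):
    assert total_cost(n) < 3 * n   # the 3n bound from the aggregate analysis
```

The bit trick `i & (i - 1) == 0` is true exactly when i has a single 1 bit, i.e. when i is a power of 2.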
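The O(n) claim in Exercise 17.2-1 can likewise be sanity-checked by simulation. A sketch, under the assumptions that a COPY costs one unit per element copied and happens after every k-th operation; the random mix of pushes and pops is mine.

```python
# Simulate n stack operations with a full copy after every k-th operation,
# and check that the total actual cost stays within 3n (Exercise 17.2-1).
import random

def simulate(n, k, seed=0):
    """Return the total actual cost of n operations on a stack of size <= k."""
    rng = random.Random(seed)
    stack, cost = [], 0
    for op in range(1, n + 1):
        if stack and (rng.random() < 0.5 or len(stack) == k):
            stack.pop()                # POP: actual cost 1
        else:
            stack.append(op)           # PUSH: actual cost 1
        cost += 1
        if op % k == 0:
            cost += len(stack)         # COPY: cost = current stack size <= k
    return cost

n = 10_000
assert simulate(n, k=8) <= 3 * n
```

Each window of k operations costs at most k for the operations themselves plus at most k for the copy, which is how the $3-per-PUSH budget in the accounting argument stays solvent.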
2) Problem 17-2: (Ross Hacquebard)

a) To perform a SEARCH operation on this data structure, we must search each individual array in turn, starting at A[0] and ending at A[k-1]. Since each array is sorted, binary search takes O(lg x) time on an array of x elements. For the arrays A[0], A[1], A[2], ..., A[k-1], where k = ceil(lg(n+1)), the lengths are:

    2^0, 2^1, 2^2, ..., 2^(k-1)

In the worst case for searching, all arrays are full and the element is not found:

    Running Time = O( lg 2^0 + lg 2^1 + lg 2^2 + ... + lg 2^(k-1) )
                 = O( 0 + 1 + 2 + ... + (k-1) )
                 = O( (k-1)*k / 2 )                                by the sum-of-natural-numbers formula
                 = O( (ceil(lg(n+1)) - 1) * ceil(lg(n+1)) / 2 )    since k = ceil(lg(n+1))
                 = O( (lg n)*(lg n) )

b) To INSERT a new element into this data structure, we store it in A[0] if A[0] is empty (A[0] has size one). Otherwise, we merge the new element with A[0] and store the result in A[1] if A[1] is empty (A[1] has size two). If A[1] is also full, we merge the new element, A[0], and A[1] into A[2], and continue this pattern until we find an empty array (or create a new array if all are full):

    new element -> A[0]
    A[0] full:  new element + A[0] -> A[1]
    A[1] full:  new element + A[0] + A[1] -> A[2]
    ... until we find an empty array.

Since each array of size 2^i is half the size of the next array (of size 2^(i+1)), merging a new element with all of the previous full arrays always produces an array whose size is an exact power of two:

    1 + (1)                 = 2
    1 + (1 + 2)             = 4
    1 + (1 + 2 + 4)         = 8
    1 + (1 + 2 + ... + 2^i) = 2^(i+1)

Therefore the shape of the data structure is preserved while inserting elements. In the worst case for inserting, all arrays are full and a new array must be created at the end of the data structure; this means the new element and all existing arrays must be merged together.
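The SEARCH and INSERT operations described above can be sketched concretely. This is my own illustration of the scheme, not code from the problem; the class and method names are invented. Level i is either empty or a sorted array of exactly 2^i elements, SEARCH binary-searches each non-empty level, and INSERT merges the new element with the full prefix of levels into the first empty level, like incrementing a binary counter.

```python
# Sketch of the Problem 17-2 data structure: k sorted arrays of sizes
# 1, 2, 4, ..., 2^(k-1), each either completely full or completely empty.
from bisect import bisect_left
import heapq

class DynamicSortedArrays:
    def __init__(self):
        self.levels = []            # levels[i] is None or a sorted list of length 2**i

    def search(self, key):
        """Binary-search each non-empty level: O((lg n)^2) worst case."""
        for arr in self.levels:
            if arr is not None:
                j = bisect_left(arr, key)
                if j < len(arr) and arr[j] == key:
                    return True
        return False

    def insert(self, key):
        """Merge the new key with the full prefix of levels into the
        first empty level (binary-counter increment): O(n) worst case."""
        carry = [key]
        i = 0
        while i < len(self.levels) and self.levels[i] is not None:
            carry = list(heapq.merge(carry, self.levels[i]))  # merge two sorted runs
            self.levels[i] = None
            i += 1
        if i == len(self.levels):
            self.levels.append(None)    # all levels were full: add a larger level
        self.levels[i] = carry

ds = DynamicSortedArrays()
for x in [5, 3, 8, 1, 9, 2]:
    ds.insert(x)
assert ds.search(8) and not ds.search(7)
```

After the six inserts above, the occupied levels hold 2 and 4 elements, matching the binary representation of 6.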
The running time of merging two sorted arrays is O(a+b), where a and b are the array sizes; this simplifies to O(2b) = O(b) if we let b be the larger array. Applying this relationship to all arrays in the data structure gives the worst-case running time of INSERT:

    Running Time = O( 2*(2^0 + 2^1 + 2^2 + ... + 2^(k-1)) )    there are k sorted arrays
                 = O( 2^k )                                     drop the constant factor of 2
                 = O( 2^(ceil(lg(n+1))) )                       substitute for k
                 = O( n )

Amortized running time: each array in this data structure is merged only a certain fraction of the time. The first array, of size 1, is merged on half of the inserts (on the other half, the element is placed directly into it). The second array is merged on a quarter of the inserts; the third, on an eighth; and so on:

    Array number:                        1    2    3    4    ...
    Array size:                          1    2    4    8    ...
    Fraction of inserts that merge it:   1/2  1/4  1/8  1/16 ...

This means at most lg n merge steps occur, amortized, for each of the n INSERTs, which gives a total cost of O(n lg n). Per INSERT, the amortized cost is O(lg n).

3) Exercise 21.2-3: (Ross Hacquebard)

Let m = the number of MAKE-SET + UNION + FIND-SET operations
Let n = the number of MAKE-SET operations
Let u = the number of UNION operations

This implies that:
    m - n - u = the number of FIND-SET operations, so n <= m - u
Also note that:
    m >= n > u    (each UNION destroys one set, so u <= n - 1)

By Theorem 21.1 the total time for the whole sequence is O(m + n lg n). This bound can be tightened: only objects that take part in some UNION ever have their back pointers updated, at most 2u objects are involved in the u UNIONs, and each such object's pointer is updated at most O(lg u) times (its set at least doubles in size with every update, and no set grows beyond 2u elements). So the total time bound is:

    O(m + u lg u)

This implies the total running time (the sum of the costs of all operations) is at most c(m + u lg u), where c is some constant.
Using the accounting method, we assign the amortized costs as follows, where K >= c is some constant:

    $MAKE-SET = K
    $FIND-SET = K
    $UNION    = K*(lg n + 1)

To show these charges cover the actual total cost, we need:

    $MAKE-SET * n + $UNION * u + $FIND-SET * (m - n - u) >= c(m + u lg u)

Expanding the left-hand side:

    K*n + K*(lg n + 1)*u + K*(m - n - u) >= c(m + u lg u)
    K*n + K*(u lg n) + K*u + K*m - K*n - K*u >= c(m + u lg u)
    K*m + K*(u lg n) >= c(m + u lg u)
    K*(m + u lg n) >= c(m + u lg u)

which is true, since K >= c and lg n >= lg u (because n > u). Therefore the amortized time bounds are O(1) for MAKE-SET and FIND-SET and O(lg n) for UNION, using the linked-list representation and the weighted-union heuristic for disjoint sets.

4) Exercise 21.2-2: (Nicole Lewis)

MAKE-SET x1..x16:     {1} {2} {3} {4} {5} {6} {7} {8} {9} {10} {11} {12} {13} {14} {15} {16}
UNION(x_i, x_{i+1}),
 for odd i:           {1, 2} {3, 4} {5, 6} {7, 8} {9, 10} {11, 12} {13, 14} {15, 16}
UNION(x_i, x_{i+2}),
 for i = 1, 5, 9, 13: {1, 2, 3, 4} {5, 6, 7, 8} {9, 10, 11, 12} {13, 14, 15, 16}
UNION(x1, x5):        {1, 2, 3, 4, 5, 6, 7, 8} {9, 10, 11, 12} {13, 14, 15, 16}
UNION(x11, x13):      {1, 2, 3, 4, 5, 6, 7, 8} {9, 10, 11, 12, 13, 14, 15, 16}
UNION(x1, x10):       {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}

FIND-SET(x2): x1
FIND-SET(x9): x1

5a) Exercise 21.3-1: (Nicole Lewis)

With union by rank (breaking rank ties in favor of the first argument's root), the same sequence of operations produces the following tree:

    1
    ├── 2
    ├── 3
    │   └── 4
    ├── 5
    │   ├── 6
    │   └── 7
    │       └── 8
    └── 9
        ├── 10
        ├── 11
        │   └── 12
        └── 13
            ├── 14
            └── 15
                └── 16

FIND-SET(x2): x1
FIND-SET(x9): x1

5b) Exercise 21.3-2: (Nicole Lewis)

Non-recursive form of FIND-SET with path compression:

    FIND-SET(x) {
        i = x;
        while (i != p[i]) {     // first pass: walk up to find the root
            i = p[i];
        }
        while (x != i) {        // second pass: point every node on the path at the root
            y = p[x];
            p[x] = i;
            x = y;
        }
        return i;               // return the root
    }
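The non-recursive FIND-SET above can be sketched as runnable code together with union by rank, and replayed on the operation sequence from 21.2-2. This is my own illustration, not the text's pseudocode verbatim; I assume the tie-breaking convention that the first argument's root wins, which reproduces the 21.3-1 tree with x1 as the root.

```python
# Disjoint-set forest: union by rank plus the iterative, two-pass,
# path-compressing FIND-SET from Exercise 21.3-2.
parent, rank = {}, {}

def make_set(x):
    parent[x] = x
    rank[x] = 0

def find_set(x):
    i = x
    while i != parent[i]:       # first pass: walk up to find the root
        i = parent[i]
    while x != i:               # second pass: point every node on the path at the root
        nxt = parent[x]
        parent[x] = i
        x = nxt
    return i

def union(x, y):
    rx, ry = find_set(x), find_set(y)
    if rx == ry:
        return
    if rank[rx] < rank[ry]:
        rx, ry = ry, rx         # make rx the higher-rank root (ties: first argument wins)
    parent[ry] = rx
    if rank[rx] == rank[ry]:
        rank[rx] += 1

# Replay the operation sequence from Exercise 21.2-2.
for v in range(1, 17):
    make_set(v)
for a, b in [(1, 2), (3, 4), (5, 6), (7, 8), (9, 10), (11, 12), (13, 14), (15, 16),
             (1, 3), (5, 7), (9, 11), (13, 15), (1, 5), (11, 13), (1, 10)]:
    union(a, b)

assert find_set(2) == 1 and find_set(9) == 1   # both finds return x1
```

After `find_set(16)`, path compression makes 16, 15, 13, and 9 all point directly at the root, so a repeated find takes constant time.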