

CS 213


Lecture 6
int binary_search(int a[], int elm, int low, int high)
{
    // base case: the search range is empty, so the element is not present
    if (high < low)
        return -1;

    // calculate the midpoint to cut the search range in half
    int mid = low + (high - low) / 2;

    if (a[mid] > elm)
        return binary_search(a, elm, low, mid - 1);
    else if (a[mid] < elm)
        return binary_search(a, elm, mid + 1, high);

    return mid;   // a[mid] == elm
}
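A quick usage sketch (my own example, not from the slides), assuming the function above is in scope:

#include <cstdio>

int main()
{
    int a[] = {2, 3, 5, 7, 11, 13};
    int n = sizeof(a) / sizeof(a[0]);
    printf("%d\n", binary_search(a, 7, 0, n - 1));   // prints 3 (the index of 7)
    printf("%d\n", binary_search(a, 4, 0, n - 1));   // prints -1 (not present)
    return 0;
}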

Complexity

T(n)=T(n/2)+c


Largest contiguous positive sum in an array of reals

(Check the code in the slides.)
max2left, max2right, and maxcrossing are the main pieces; a sketch of the idea is below.
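Here is a minimal C++ sketch of that divide-and-conquer idea (the slides' code may differ; the function name max_subarray and the base case are my assumptions, while max2left, max2right, and maxcrossing mirror the names above):

#include <algorithm>
#include <vector>

double max_subarray(const std::vector<double>& a, int low, int high)
{
    if (low == high)                 // single element: it is its own best segment
        return a[low];

    int mid = (low + high) / 2;

    // best segment lying entirely in the left or the right half
    double best_left  = max_subarray(a, low, mid);
    double best_right = max_subarray(a, mid + 1, high);

    // best segment crossing the midpoint: two loops of about n/2 elements each
    double sum = 0, max2left = a[mid];
    for (int i = mid; i >= low; --i) {
        sum += a[i];
        max2left = std::max(max2left, sum);
    }
    sum = 0;
    double max2right = a[mid + 1];
    for (int i = mid + 1; i <= high; ++i) {
        sum += a[i];
        max2right = std::max(max2right, sum);
    }
    double maxcrossing = max2left + max2right;

    return std::max({best_left, best_right, maxcrossing});
}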

Complexity

T(n)=2T(n/2)+cn

2T(n/2) → two recursive calls
cn → from the two crossing-sum loops, each over about n/2 elements



Merge sort

Divide the array into two arrays of (approximately) the same size, sort them, and then interleave (merge) them together.

Complexity

$T(n) = 2T(n/2) + cn$

If we have two (sorted) arrays of n elements, what is the cost of making the sorted interleaved array?
$A = \{a_1, a_2, \ldots, a_n\}$, sorted
$B = \{b_1, b_2, \ldots, b_n\}$, sorted

To make the interleaved array $S$, we first compare $a_1$ and $b_1$. Put the smaller one first, then compare $a_2$ with $b_1$ or $a_1$ with $b_2$, as the case may be.
Continue this.
This will require ~ $n$ comparisons, so it takes $cn$.
The divide part is 2T(n/2).
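A hedged sketch of the whole procedure in C++ (my own code, not the slides'); the merge loop at the end is exactly the ~ n comparison interleaving described above:

#include <vector>

void merge_sort(std::vector<int>& a, int low, int high)
{
    if (high <= low)                 // 0 or 1 elements: already sorted
        return;

    int mid = (low + high) / 2;
    merge_sort(a, low, mid);         // sort the left half   -> T(n/2)
    merge_sort(a, mid + 1, high);    // sort the right half  -> T(n/2)

    // interleave the two sorted halves -> about cn work
    std::vector<int> merged;
    merged.reserve(high - low + 1);
    int i = low, j = mid + 1;
    while (i <= mid && j <= high)
        merged.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
    while (i <= mid)
        merged.push_back(a[i++]);
    while (j <= high)
        merged.push_back(a[j++]);

    for (int k = 0; k < (int)merged.size(); ++k)
        a[low + k] = merged[k];
}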


Fast multiplication of integers

Let $a$ and $b$ be two $2n$-bit integers, $a = (a_{2n-1}, a_{2n-2}, \ldots, a_1, a_0)_2$ and $b = (b_{2n-1}, b_{2n-2}, \ldots, b_1, b_0)_2$.

$A_1 := (a_{2n-1}, a_{2n-2}, \ldots, a_n)_2$
$A_2 := (a_{n-1}, a_{n-2}, \ldots, a_0)_2$
$B_1 := (b_{2n-1}, b_{2n-2}, \ldots, b_n)_2$
$B_2 := (b_{n-1}, b_{n-2}, \ldots, b_0)_2$

$A = 2^n A_1 + A_2$
$B = 2^n B_1 + B_2$

$AB = 2^{2n} A_1 B_1 + 2^n (A_1 B_2 + A_2 B_1) + A_2 B_2 \qquad (*)$

Now we have four multiplications of n-bit integers, three additions of n-bit integers, and two shift operations. (The multiplications by the powers of 2: multiplying by $2^n$ is the same as shifting left (<<) by $n$ bits. Cheap as hecc.)

Using $(*)$ as the divide and conquer strategy, we get:
$f(2n) = 4f(n) + 5n$ or, equivalently, $f(n) = 4f(n/2) + 5(n/2)$.
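A toy C++ illustration of the identity $(*)$, not the lecture's code: I split two 32-bit numbers into 16-bit halves ($n = 16$) so that everything fits in 64-bit arithmetic, do the four half-size multiplications, and recombine with shifts.

#include <cassert>
#include <cstdint>
#include <cstdio>

uint64_t split_multiply(uint32_t a, uint32_t b)
{
    const int n = 16;
    uint64_t A1 = a >> n, A2 = a & 0xFFFFu;      // A = 2^n * A1 + A2
    uint64_t B1 = b >> n, B2 = b & 0xFFFFu;      // B = 2^n * B1 + B2

    // AB = 2^{2n} A1*B1 + 2^n (A1*B2 + A2*B1) + A2*B2   -- the identity (*)
    return ((A1 * B1) << (2 * n)) + ((A1 * B2 + A2 * B1) << n) + A2 * B2;
}

int main()
{
    uint32_t a = 123456789u, b = 987654321u;
    assert(split_multiply(a, b) == (uint64_t)a * b);     // agrees with direct multiplication
    printf("%llu\n", (unsigned long long)split_multiply(a, b));
    return 0;
}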


Matrix multiplication

Assume that $A$ and $B$ are square matrices of size $2^n$.
Write them as:

$A = \begin{bmatrix} A_1 & A_2 \\ A_3 & A_4 \end{bmatrix}$ and $B = \begin{bmatrix} B_1 & B_2 \\ B_3 & B_4 \end{bmatrix}$,

where $A_i$ and $B_i$ are submatrices of size $2^{n-1}$.

Their product is:

$AB = \begin{bmatrix} A_1 B_1 + A_2 B_3 & A_1 B_2 + A_2 B_4 \\ A_3 B_1 + A_4 B_3 & A_3 B_2 + A_4 B_4 \end{bmatrix}$

Thus, we have reduced (divided) the multiplication problem into multiplication of smaller submatrices.

Complexity

$T(n) = 8T(n/2) + O(n^2)$
The $8T(n/2)$ is for the eight products of half-size submatrices; the $O(n^2)$ is for the additions. (This still solves to $O(n^3)$, no better than the direct method; Strassen's scheme below gets it down to seven half-size products.)
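A hedged C++ sketch of that division (my own code, assuming the matrix size is a power of two; the slides may organise it differently):

#include <vector>

using Matrix = std::vector<std::vector<long long>>;

// Recursive block multiplication: eight half-size products plus O(n^2) additions,
// i.e. T(n) = 8 T(n/2) + O(n^2).
Matrix multiply(const Matrix& A, const Matrix& B)
{
    int n = (int)A.size();
    Matrix C(n, std::vector<long long>(n, 0));
    if (n == 1) {                                    // base case: 1x1 matrices
        C[0][0] = A[0][0] * B[0][0];
        return C;
    }

    int h = n / 2;
    // copy out the h x h block of M whose top-left corner is (r, c)
    auto block = [h](const Matrix& M, int r, int c) {
        Matrix S(h, std::vector<long long>(h));
        for (int i = 0; i < h; ++i)
            for (int j = 0; j < h; ++j)
                S[i][j] = M[r + i][c + j];
        return S;
    };
    // write X + Y into the block of C whose top-left corner is (r, c)
    auto put = [&](int r, int c, const Matrix& X, const Matrix& Y) {
        for (int i = 0; i < h; ++i)
            for (int j = 0; j < h; ++j)
                C[r + i][c + j] = X[i][j] + Y[i][j];
    };

    Matrix A1 = block(A, 0, 0), A2 = block(A, 0, h), A3 = block(A, h, 0), A4 = block(A, h, h);
    Matrix B1 = block(B, 0, 0), B2 = block(B, 0, h), B3 = block(B, h, 0), B4 = block(B, h, h);

    put(0, 0, multiply(A1, B1), multiply(A2, B3));   // A1 B1 + A2 B3
    put(0, h, multiply(A1, B2), multiply(A2, B4));   // A1 B2 + A2 B4
    put(h, 0, multiply(A3, B1), multiply(A4, B3));   // A3 B1 + A4 B3
    put(h, h, multiply(A3, B2), multiply(A4, B4));   // A3 B2 + A4 B4
    return C;
}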


The remaining details are in the slides.


Aim (expectation) of the lecture: the skill to formulate the recurrence relation for a given algorithm.

Solving recurrences is what comes next. (Maybe not the full solution in class. :/)



Solving recurrences

f(n)=af(n/b)+g(n)

a → Number of subproblems
n/b → size of each subproblem
g(n) → conquer step

$f(n) = a^2 f(n/b^2) + a\, g(n/b) + g(n)$
$\vdots$
$f(n) = a^k f(n/b^k) + \sum_{i=0}^{k-1} a^i g(n/b^i)$

Assume $n = b^k$. (Can work around this.)
Then, we get
$f(n) = a^k f(1) + \sum_{i=0}^{k-1} a^i g(n/b^i)$
The last summation is usually a pain to deal with.

Special case

g(n)=c (constant)

Also assume that a=1 (Binary search type.)
Then, f(n)=f(1)+ck
Recall that $k = \log_b n$. Thus,

$f(n) = O(\log n)$.

Now, let us assume a>1. (g still constant)

Then, $f(n) = a^k f(1) + c \left( \frac{a^k - 1}{a - 1} \right) = c_1 a^k + c_2$. Note that $a^k = a^{\log_b n} = n^{\log_b a}$, and thus

$f(n) = O(n^{\log_b a})$.

Case that g is linear

$f(n) = a f(n/b) + cn$
$f(n) = a^k f(1) + cn \left( 1 + \frac{a}{b} + \cdots + \left(\frac{a}{b}\right)^{k-1} \right)$

Let us take the case that a = b = 2.
$f(n) = c_1 2^k + c_2 kn = c_1 n + c_2 kn$
Both terms carry a factor of $n$; the $kn$ term dominates, giving

$f(n) = O(n \log n)$.

Master Theorem

Let $f$ be an increasing function satisfying $f(n) = a f(n/b) + c n^d$,
where $a \ge 1$, $b > 1$, $c > 0$ and $d \ge 0$.
Then the time complexity is given by:

$O(n^d)$ if $a < b^d$
$O(n^d \log_b n)$ if $a = b^d$
$O(n^{\log_b a})$ if $a > b^d$
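A quick sanity check with my own examples (not from the slides): merge sort has $a = 2$, $b = 2$, $d = 1$, so $a = b^d$ and we get $O(n \log n)$, matching the hand computation above. The four-multiplication integer scheme has $a = 4$, $b = 2$, $d = 1$, so $a > b^d$ and $f(n) = O(n^{\log_2 4}) = O(n^2)$.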


Strassen’s scheme for matrix multiplication:
$T(n) = 7T(n/2) + c n^2$.
Thus, $T(n) = O(n^{\log_2 7}) \approx O(n^{2.81})$.



Linked List

This is the next thing to do.
We’ll do pointers.

Motivation

Consider $p(x) = a_1 x^{200} + a_2 x^{100} + a_3$.
Very few nonzero coefficients, but the degree is high.
If we use the dense representation (storing all the coefficients in an array of 201 elements), then it is quite wasteful.

We’ll now describe the polynomial using a collection of tuples.
$[a_1, 200], [a_2, 100], [a_3, 0]$

Note that in the dense representation we didn't have to pair each coefficient with its degree, since the array index did that implicitly.


Now, if we have to add, subtract, yada yada, what do we do in the sparse representation?
Linked lists will help us here!
We don't need to know a priori what the size of the polynomial is. In fact, the coefficients need not even be given in order.

We need the following stuff:
→ new() / delete()
→ pointers (note that the data stored this way need not be contiguous)
→ a class (keeps track of where the next piece of data is)

We should be able to define a class in C++. Not as professional as the STL classes, maybe, but still nice.
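A minimal sketch of what such a node could look like (the names Term and add_term are my own, not the class we will actually build in the course):

// one node per nonzero term: a [coefficient, degree] pair plus a pointer to the next term
struct Term {
    double coeff;    // the coefficient a_i
    int    degree;   // the exponent of x
    Term*  next;     // nullptr marks the end of the polynomial
};

// prepend a term; the terms need not arrive in order of degree
Term* add_term(Term* head, double coeff, int degree)
{
    return new Term{coeff, degree, head};
}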



Words from Samad:

→ Most of us have done fine. (With regard to Tutorial 1.)
→ He wants us to focus on time analysis.
→ We shouldn't worry about the small details. Take every constant-cost operation as 1 unit. Care only about the asymptotic behaviour.
→ Assignment 1 comes out today evening; the deadline is next week.
→ Easy to copy from the Internet, but also easy to catch. :(
→ Don’t copy. You won’t be able to get away.
→ We’ll get caught in the end. People got screwed in ICPC for plagiarism.
→ Prof: “I care only about your effort, not clean code.”

