Recurrences

A recurrence describes a sequence of numbers. For instance, an example of a recurrence would be: $T_1 = 1$, $T_n = T_{n-1}+1$ for $n \ge 2$

They are useful for analyzing the performance of recursive algorithms.

Anti-Derivative Method

Formula is: $f'(x) = \lim_{h \to 0}\frac{f(x)-f(x-h)}{h}$

For asymptotic analysis, we only need $h$ to be "close" to zero but still positive. NOTE: $h$ must be smaller than $x$.

Example:

  1. $F(n) = F(n-1)+n$ can be written as: $\frac{F(n)-F(n-1)}{1} = n$

Here $h = 1$, which is close to 0, so $F'(n) = \Theta(n)$, hence $F(n) = \Theta(n^2)$

  2. $F(n) = F(n-\sqrt{n}) + n$ can be written as: $\frac{F(n)-F(n-\sqrt{n})}{\sqrt{n}} = \frac{n}{\sqrt{n}}$

Here $h = \sqrt{n}$, which is small relative to $n$, so the left-hand side is asymptotically the derivative of $F$. We have $F'(n) = \Theta(\sqrt{n})$. If we take the anti-derivative, then we obtain $F(n) = \Theta(n^{3/2})$

So basically: subtract the recursive term on both sides, then divide by the step size $h$ on both sides. Afterward, integrate to obtain the final answer.
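As a quick sanity check of example 1 (assuming the base case $F(1) = 1$, which the notes don't state explicitly), computing the recurrence directly shows the quadratic growth the anti-derivative method predicts:

```python
def F(n):
    """F(n) = F(n-1) + n, with the assumed base case F(1) = 1."""
    total = 1
    for i in range(2, n + 1):
        total += i
    return total

# Closed form is n(n+1)/2, which is Theta(n^2) as predicted.
print(F(100))           # 5050
print(F(100) / 100**2)  # roughly 1/2, consistent with Theta(n^2)
```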

Master Theorem

If we have a recurrence $F(n) = AF(n/B)+n^C$ where $A$, $B$, and $C$ are constants, where $A > 0$, $B > 1$, and $C \ge 0$:

  1. $F(n) = \Theta(n^C)$ if $B^C > A$
  2. $F(n) = \Theta(n^C \log n)$ if $B^C = A$
  3. $F(n) = \Theta(n^{\log_B A})$ if $B^C < A$

These variables ($A$, $B$, and $C$) have meanings: $A$ = how many recursive calls are made, $B$ = factor by which the input size is reduced in each recursive call, $n^C$ = how much work is done by each call.
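The three cases can be applied mechanically. A small sketch (the function name and output strings are my own):

```python
import math

def master_theorem(A, B, C):
    """Classify F(n) = A*F(n/B) + n^C by comparing B^C with A."""
    if B ** C > A:
        return f"Theta(n^{C})"
    if B ** C == A:
        return f"Theta(n^{C} log n)"
    return f"Theta(n^{math.log(A, B):.3f})"  # exponent is log_B A

print(master_theorem(2, 2, 1))  # merge sort: Theta(n^1 log n)
print(master_theorem(1, 2, 0))  # binary search: Theta(n^0 log n) = Theta(log n)
print(master_theorem(3, 2, 1))  # Karatsuba: Theta(n^1.585)
```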

Generalized Master Theorem

Same idea as the Master Theorem, but it handles recurrences with several recursive terms of different sizes: $F(n) = \sum_{i=1}^k \alpha_i F(\beta_i n) + n^\lambda$. In the single-term case, the Greek letters correspond to the Master Theorem's constants:

  1. $\alpha = A$
  2. $\beta = 1/B$
  3. $\lambda = C$

Your goal is to compute $T$, where $T = \sum_{i=1}^k\alpha_i\beta_i^\lambda$. Cases:

  1. If $T = 1$, then $F(n) = \Theta(n^\lambda \log n)$
  2. If $T < 1$, then $F(n) = \Theta(n^\lambda)$
  3. If $T > 1$, then you must find some value $x$ such that $\sum_{i=1}^k\alpha_i\beta_i^x = 1$. Once you have found an $x$ that satisfies this, then $F(n) = \Theta(n^x)$

Basically, the formula is a glorified geometric series: the $\alpha_i$ play the role of the coefficient $a$, and the $\beta_i^\lambda$ play the role of the ratio $r$, so $T$ measures how the work grows or shrinks from one level of the recursion to the next.
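For the $T > 1$ case, the value $x$ can be found numerically by bisection, since $\sum_i \alpha_i \beta_i^x$ is decreasing in $x$ when $0 < \beta_i < 1$. A sketch (the recurrence $F(n) = 2F(n/2) + 2F(n/4) + n$ is my own illustration):

```python
def solve_exponent(alphas, betas, lo=0.0, hi=10.0):
    """Find x with sum(alpha_i * beta_i**x) = 1 by bisection.
    Assumes 0 < beta_i < 1, so the sum is decreasing in x."""
    def g(x):
        return sum(a * b ** x for a, b in zip(alphas, betas))
    for _ in range(100):
        mid = (lo + hi) / 2
        if g(mid) > 1:
            lo = mid  # sum still too big: need a larger exponent
        else:
            hi = mid
    return (lo + hi) / 2

# F(n) = 2 F(n/2) + 2 F(n/4) + n: T = 2*(1/2)^1 + 2*(1/4)^1 = 1.5 > 1
x = solve_exponent([2, 2], [0.5, 0.25])
print(round(x, 3))  # about 1.45, so F(n) = Theta(n^1.45)
```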

Substitution

We can use substitution to transform a recurrence into one which we can solve using one of the above methods. For instance, substituting $n = 2^m$ turns $F(n) = F(\sqrt{n}) + 1$ into $G(m) = G(m/2) + 1$, which the Master Theorem solves as $\Theta(\log m) = \Theta(\log\log n)$.

Steps for Huffman’s Algorithm:

  1. Arrange/sort the frequencies from min-max (ascending order)
  2. After that is sorted, remove the two smallest numbers (typically the ones in the front) and put their sum back into the list. Keep doing this until you ONLY have one number remaining. Must keep the smaller number in each sum on the left; the bigger on the right.
  3. Once you have a single whole number, break the sum into smaller pieces and form a binary tree.
  4. After your binary tree has formed, have all the branches that are on the left of the subtrees be assigned 0; have all branches that are on the right be assigned 1.
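The steps above can be sketched with a binary heap, which keeps the two smallest frequencies at the front automatically (the frequency table is a hypothetical example):

```python
import heapq

def huffman_codes(freqs):
    """Build Huffman codes from a dict of symbol -> frequency."""
    # Heap entries: [weight, tiebreak, symbol-or-subtree]
    heap = [[w, i, sym] for i, (sym, w) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)  # smallest goes on the left (bit 0)
        hi = heapq.heappop(heap)  # next smallest on the right (bit 1)
        heapq.heappush(heap, [lo[0] + hi[0], count, (lo[2], hi[2])])
        count += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):  # internal node: recurse into children
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

codes = huffman_codes({"a": 5, "b": 9, "c": 12, "d": 13, "e": 16, "f": 45})
print(codes["f"])  # "0": the most frequent symbol gets the shortest code
```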

Steps for Dijkstra’s Algorithm:

  1. Go to your source vertex and assign each of its out-neighbors a tentative distance equal to the weight of the connecting edge.
  2. After those are marked, pop the source, then go to its out-neighbor with the lowest tentative distance (determined by looking at the value that it was assigned) and relax that vertex's out-neighbors the same way as in step 1, updating a neighbor's value only if the new path is shorter.
  3. Keep doing this until all vertices have been visited.

Runtime: $O(m \log n)$. $m$ represents edges and $n$ represents vertices. NO NEGATIVE EDGES ALLOWED!
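A minimal sketch of the steps above using a binary heap as the priority queue (the graph is a hypothetical example):

```python
import heapq

def dijkstra(graph, source):
    """graph: dict mapping vertex -> list of (neighbor, weight) pairs.
    Returns a dict of shortest distances from source.
    Assumes no negative edge weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry; u was already settled via a shorter path
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

g = {"s": [("a", 1), ("b", 4)], "a": [("b", 2), ("c", 6)],
     "b": [("c", 3)], "c": []}
print(dijkstra(g, "s"))  # {'s': 0, 'a': 1, 'b': 3, 'c': 6}
```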

Steps for Johnson’s Algorithm:

  1. Add an additional vertex and create an edge of weight 0 from it to all existing nodes in the weighted graph.
  2. After that is established, run Bellman-Ford from the new vertex to find the shortest distance $h(v)$ to every vertex $v$ of the graph.
  3. Keep relaxing until every vertex has been marked with a final value $h(v)$.
  4. Once that is established, update the weight of each edge from one node to another by using this formula: $NewWeight = (h(tail) + CurrentWeight) - h(head)$. All new weights are guaranteed to be non-negative.

Then apply Dijkstra's Algorithm $n$ times, once from each vertex, on the reweighted graph; add $h(destination) - h(source)$ back to each result to recover the true distances.
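A sketch of the whole pipeline (Bellman-Ford for the potentials $h$, then Dijkstra from every vertex; the example graph is hypothetical):

```python
import heapq

def johnson(vertices, edges):
    """edges: list of (u, v, w) arcs. Returns dist[u][v] for all pairs,
    or None if a negative cycle exists."""
    # Steps 1-3: starting every h at 0 simulates the extra vertex with
    # 0-weight edges to everyone; relax until stable (Bellman-Ford).
    h = {v: 0 for v in vertices}
    for _ in range(len(vertices)):
        changed = False
        for u, v, w in edges:
            if h[u] + w < h[v]:
                h[v] = h[u] + w
                changed = True
        if not changed:
            break
    else:
        return None  # still relaxing after n passes: negative cycle

    # Step 4: reweight each edge: w'(u,v) = w + h[u] - h[v] >= 0
    adj = {v: [] for v in vertices}
    for u, v, w in edges:
        adj[u].append((v, w + h[u] - h[v]))

    # Dijkstra from every vertex on the reweighted graph, then undo
    # the reweighting to recover true distances.
    dist = {}
    for s in vertices:
        d = {s: 0}
        heap = [(0, s)]
        while heap:
            du, u = heapq.heappop(heap)
            if du > d.get(u, float("inf")):
                continue  # stale heap entry
            for v, w in adj[u]:
                if du + w < d.get(v, float("inf")):
                    d[v] = du + w
                    heapq.heappush(heap, (du + w, v))
        dist[s] = {v: d[v] + h[v] - h[s] for v in d}
    return dist

edges = [("a", "b", -2), ("b", "c", 3), ("a", "c", 4), ("c", "a", 1)]
dist = johnson(["a", "b", "c"], edges)
print(dist["a"]["c"])  # 1, via a -> b -> c
```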

Floyd-Warshall Algorithm

  1. Basing off the initial matrix, copy the edge weights into a new matrix and set the values on the diagonal to zero, since the distance from a vertex to itself is 0.
  2. After those values are kept, compute the updated values using vertex 1 as the intermediate. For instance, given a 4x4 matrix, you compare $A[2,3]$ with $A[2,1] + A[1,3]$ and keep whichever value is smaller.
  3. After those values have been calculated, create another matrix using vertex 2 as the intermediate. You will base the values off of the matrix that we made previously.
  4. Keep doing this until every vertex $1$ through $n$ has been used as the intermediate ($n$ being the dimension of the matrix).
for all i and j from 1 to n:
  v[i, j] = w[i, j]
  back[i, j] = i

for all i from 1 to n:
  v[i, i] = 0

for all j from 1 to n: // j is the intermediate vertex
  for all i from 1 to n:
    for all k from 1 to n:
      if(v[i, j] + v[j, k] < v[i, k])
        v[i, k] = v[i, j] + v[j, k]
        back[i, k] = back[j, k]

Runtime for Floyd-Warshall is $\Theta(n^3)$
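The pseudocode above translates directly to Python (the 4x4 weight matrix is a hypothetical example; `INF` marks a missing edge):

```python
INF = float("inf")

def floyd_warshall(w):
    """w: n x n matrix of edge weights (INF where there is no edge).
    Returns the matrix of all-pairs shortest-path distances."""
    n = len(w)
    v = [row[:] for row in w]  # copy so the input is not modified
    for i in range(n):
        v[i][i] = 0            # a vertex is distance 0 from itself
    for j in range(n):         # j is the intermediate vertex
        for i in range(n):
            for k in range(n):
                if v[i][j] + v[j][k] < v[i][k]:
                    v[i][k] = v[i][j] + v[j][k]
    return v

w = [[INF, 3,   INF, 7],
     [8,   INF, 2,   INF],
     [5,   INF, INF, 1],
     [2,   INF, INF, INF]]
dist = floyd_warshall(w)
print(dist[0][3])  # 6: the path 0 -> 1 -> 2 -> 3 beats the direct edge of 7
```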

Bellman-Ford Algorithm

  1. First, prepare a list of all edges (each edge is an arc from one vertex to another).
  2. Then pick some vertex (this will be your source vertex). Set the source vertex's distance to 0 and mark the rest as infinity.
  3. After that, relax all edges: for each edge $(s, t)$ with weight $w$, if $v[s] + w < v[t]$, update $v[t]$.

Basically, a relaxation is the same update as in Dijkstra's: a vertex's distance is lowered whenever a shorter path through one of its in-neighbors is found.

  4. Repeat step 3 up to ( N - 1 ) times, where N is the number of vertices.
  • ex: so if we have 7 vertices, then we will need to relax all edges six times.
for all i from 2 to n:
  v[i] = infinity
v[1] = 0
finished = false
while(not finished):
  finished = true
  for all k from 1 to m: // iterate through all arcs
    s = source[k]
    t = target[k]
    w = weight[k]
    if(v[s] + w < v[t]):
      v[t] = v[s] + w // update shortest path from 1 to t by concatenating shortest path from 1 to s with the arc from s to t
      finished = false

Runtime for this algorithm is $O(mn)$, where $m$ is the number of arcs and $n$ is the number of vertices (based on lecture, it is $O(md)$, where $d$ is the number of passes the while loop actually makes, which is at most $n - 1$).
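The pseudocode above, in Python (the edge list is a hypothetical example; negative weights are fine as long as there is no negative cycle):

```python
def bellman_ford(n, arcs, source=0):
    """n: number of vertices (0..n-1). arcs: list of (s, t, w) tuples.
    Returns the list of shortest distances from source, or None if a
    negative cycle is reachable."""
    INF = float("inf")
    v = [INF] * n
    v[source] = 0
    for _ in range(n):  # at most n passes; an n-th pass that still
        finished = True  # relaxes means a negative cycle exists
        for s, t, w in arcs:
            if v[s] + w < v[t]:
                v[t] = v[s] + w
                finished = False
        if finished:
            return v
    return None

arcs = [(0, 1, 4), (0, 2, 5), (1, 2, -3), (2, 3, 2)]
print(bellman_ford(4, arcs))  # [0, 4, 1, 3]
```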

Complex Numbers

  • Modulus: $|z| = \sqrt{a^2+b^2}$
  • Square Root Cases:
  1. If $b > 0$: $\sqrt{z} = \pm\left(\sqrt{\frac{\sqrt{a^2+b^2}+a}{2}} + \sqrt{\frac{\sqrt{a^2+b^2}-a}{2}}\,i\right)$
  2. If $b < 0$: $\sqrt{z} = \pm\left(\sqrt{\frac{\sqrt{a^2+b^2}+a}{2}} - \sqrt{\frac{\sqrt{a^2+b^2}-a}{2}}\,i\right)$

Roots of Unity Formula: $\cos\frac{2k\pi}{n}+i\sin\frac{2k\pi}{n}$ for $k = 0, 1, \ldots, n-1$

To find the modulus and argument of $e^z$ where $z = a + bi$: $e^z = e^a(\cos b+i\sin b)$. The argument of $e^z$ is $b$; the modulus of $e^z$ is $e^a$.
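These identities can be checked with Python's `cmath` module (the sample value $z = 3 + 4i$ is my own):

```python
import cmath
import math

z = 3 + 4j  # a = 3, b = 4

# Modulus: |z| = sqrt(a^2 + b^2)
print(abs(z))  # 5.0

# Square root, case b > 0: sqrt((|z|+a)/2) + sqrt((|z|-a)/2) i
root = math.sqrt((abs(z) + 3) / 2) + math.sqrt((abs(z) - 3) / 2) * 1j
print(root)           # 2+1j, and indeed (2+1j)**2 == 3+4j
print(cmath.sqrt(z))  # the library agrees with the formula

# Modulus of e^z is e^a (the argument is b, taken mod 2*pi)
w = cmath.exp(z)
print(abs(w), math.exp(3))  # both equal e^3
```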

Log Star

$\log^*(x) = 0$ if $x \le 1$; $\log^*(x) = 1 + \log^*(\log x)$ otherwise

log* just counts how many times log must be applied before the value drops to $x \le 1$
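The definition translates directly to a loop (using base-2 logs, which is the common convention; the function name is my own):

```python
import math

def log_star(x):
    """Iterated base-2 logarithm: how many times log2 must be
    applied until the value drops to 1 or below."""
    count = 0
    while x > 1:
        x = math.log2(x)
        count += 1
    return count

print(log_star(65536))  # 4: 65536 -> 16 -> 4 -> 2 -> 1
print(log_star(2))      # 1
print(log_star(1))      # 0
```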

Dynamic Programming

A way of breaking down a problem into overlapping sub-problems, solving each sub-problem once, and reusing the stored results.
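For instance (a classic illustration, not from the notes): memoizing Fibonacci turns an exponential-time recursion into a linear-time one, because each sub-problem is computed only once.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Without the cache this recursion takes exponential time;
    storing each sub-problem's result makes it linear."""
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025, computed instantly
```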