Unit-1

Introduction

Algorithm

It’s a finite set of instructions that are followed to accomplish a particular task; a step-by-step process of performing the task.
Characteristics:

  1. Input: 0 or more
  2. Output: 1 or more
  3. Definiteness: Statement should be clear and unambiguous
  4. Finiteness: Must terminate at some point
  5. Effectiveness: Every instruction must be basic enough to be carried out exactly

Performance Analysis

An algorithm is efficient if it takes less time to execute and consumes less space. It’s measured in terms of Space Complexity and Time Complexity.

Space Complexity

Amount of memory required to execute an algorithm. Generally memory is required for:

  1. Instruction Space: Space required to store the executable instructions
  2. Data Space: Space required to store constants and variables
  3. Environment Space: Space required for the runtime environment (e.g., the function call stack)

Two Types:

  1. Constant Space Complexity (Fixed)
  2. Linear Space Complexity (Variable)

Time Complexity

Amount of time required by an algorithm to complete its execution.
Two Types:

  1. Constant Time Complexity (Fixed)
  2. Linear Time Complexity (Variable)
Asymptotic Notations

Comparison of Functions:

1 < log(n) < sqrt(n) < n < nlog(n) < n**2 < n**3 < ... < 2**n < 3**n < ... < n**n
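The ordering can be checked numerically at a fixed n (a quick Python sanity check; n = 1024 is an arbitrary choice):

```python
import math

# Evaluate each growth function at a fixed n and confirm the ordering holds there.
n = 1024
values = [1, math.log2(n), math.sqrt(n), n, n * math.log2(n),
          n**2, n**3, 2**n, 3**n, n**n]
assert values == sorted(values)  # increasing left to right
```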

Types:

  1. O (Big-O) - Upper Bound
  2. Ω (Omega) - Lower Bound
  3. θ (Theta) - Tight Bound

O (Big-O): Function f(n) = O(g(n)) iff there exist positive constants c and n0 such that f(n) <= c * g(n) for all n >= n0
Ω (Omega): Function f(n) = Ω(g(n)) iff there exist positive constants c and n0 such that f(n) >= c * g(n) for all n >= n0
θ (Theta): Function f(n) = θ(g(n)) iff there exist positive constants c1, c2 and n0 such that c1 * g(n) <= f(n) <= c2 * g(n) for all n >= n0

Divide and Conquer

The problem is solved by dividing it into two or more subproblems that are similar to the original problem but smaller in size. These subproblems are divided further until they cannot be divided any more.
We solve the subproblems recursively and combine their solutions into a solution for the original problem.
Steps:

  1. Divide
  2. Conquer
  3. Combine

General Algorithm:

Algorithm DC (P) {
	if P is too small {
		return solution of P
	}
	else {
		Divide(P) into small subproblems (P1, P2, ..)
		Apply DC to each subproblem
		return combine(DC(P1), DC(P2), ..)
	}
}
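A runnable instance of the template above: finding the maximum of a list by divide and conquer (dc_max is an illustrative name):

```python
# The problem "maximum of a[l..h]" is split in half, each half is solved
# recursively, and the two partial answers are combined with max().
def dc_max(a, l, h):
    if l == h:                      # P is too small: a single element
        return a[l]
    mid = (l + h) // 2              # Divide
    left = dc_max(a, l, mid)        # Conquer subproblem P1
    right = dc_max(a, mid + 1, h)   # Conquer subproblem P2
    return max(left, right)         # Combine

print(dc_max([3, 1, 4, 1, 5, 9, 2, 6], 0, 7))  # 9
```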

Strassen’s Matrix Multiplication

P = (A11 + A22) * (B11 + B22)
Q = (A21 + A22) * B11
R = A11 * (B12 - B22)
S = A22 * (B21 - B11)
T = (A11 + A12) * B22
U = (A21 - A11) * (B11 + B12)
V = (A12 - A22) * (B21 + B22)
 
C11 = P + S - T + V
C12 = R + T
C21 = Q + S
C22 = P + R - Q + U

Time Complexity:

T(n) = {
	1                 , n <= 2
	7T(n/2) + n**2    , n > 2
}
i.e. T(n) = O(n**log2(7)) ≈ O(n**2.81)
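The seven products can be sanity-checked in Python on 2x2 matrices with scalar entries (for larger matrices the same formulas apply to quadrant submatrices):

```python
# Strassen's seven products and four combinations for a 2x2 multiply.
def strassen_2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    P = (a11 + a22) * (b11 + b22)
    Q = (a21 + a22) * b11
    R = a11 * (b12 - b22)
    S = a22 * (b21 - b11)
    T = (a11 + a12) * b22
    U = (a21 - a11) * (b11 + b12)
    V = (a12 - a22) * (b21 + b22)
    return [[P + S - T + V, R + T],
            [Q + S, P + R - Q + U]]

# Direct multiplication gives [[19, 22], [43, 50]]
assert strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```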

Merge Sort

Algorithm MergeSort(l, h) {
	if (l < h) {
		mid = (l+h) / 2
		MergeSort(l, mid)
		MergeSort(mid+1, h)
		merge(l, mid, h)
	}
}
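A runnable Python version of the pseudocode, with merge() filled in as the standard two-pointer merge:

```python
# Sorts a[l..h] in place by recursive halving, then merging sorted halves.
def merge_sort(a, l, h):
    if l < h:
        mid = (l + h) // 2
        merge_sort(a, l, mid)
        merge_sort(a, mid + 1, h)
        merge(a, l, mid, h)

def merge(a, l, mid, h):
    left, right = a[l:mid + 1], a[mid + 1:h + 1]
    i = j = 0
    k = l
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            a[k] = left[i]; i += 1
        else:
            a[k] = right[j]; j += 1
        k += 1
    a[k:h + 1] = left[i:] + right[j:]   # copy whatever remains

arr = [5, 2, 9, 1, 7]
merge_sort(arr, 0, len(arr) - 1)
print(arr)  # [1, 2, 5, 7, 9]
```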

Time Complexity:

O(n log2(n))

Quick Sort

Partition(l, r) {
	pivot = arr[r]
	p = l - 1;
	for (i = l; i < r; i++) {
		if (arr[i] <= pivot) {
			swap(++p, i)
		}
	}
	swap(++p, r)
	return p
}
 
QuickSort (l, r) {
	if (l < r) {
		p = Partition(l, r)
		QuickSort(l, p-1)
		QuickSort(p+1, r)
	}
}
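A runnable Python version; note the l < r guard that the recursion needs in order to terminate:

```python
# Lomuto-style partition: pivot is a[r]; elements <= pivot are moved left.
def partition(a, l, r):
    pivot = a[r]
    p = l - 1
    for i in range(l, r):
        if a[i] <= pivot:
            p += 1
            a[p], a[i] = a[i], a[p]
    p += 1
    a[p], a[r] = a[r], a[p]     # place the pivot at its final position
    return p

def quick_sort(a, l, r):
    if l < r:                   # base case: 0 or 1 elements, already sorted
        p = partition(a, l, r)
        quick_sort(a, l, p - 1)
        quick_sort(a, p + 1, r)

arr = [5, 2, 9, 1, 7]
quick_sort(arr, 0, len(arr) - 1)
print(arr)  # [1, 2, 5, 7, 9]
```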

Time Complexity:

Average: O(n log2(n)), Worst: O(n**2)

Unit-2

Disjoint Sets

Two sets which don’t have any common elements are called disjoint sets.
Operation:

  1. Find
  2. Union

Find Operation

Finding the root (representative) of the set containing a node

Algorithm Find(i) {
	while(parent(i) >= 0) {
		i = parent(i)
	}
	return i;
}

Union Operation

Combining both sets

Algorithm Union(i, j) {
	parent(i) = j or parent(j) = i
}

Collapsing Find

The process of directly linking every node on the find path to the root of the set, which reduces the time of later Find operations.

Algorithm CollapsingFind(i) {
	r = i
	while(parent(r) >= 0) {
		r = parent(r)
	}
	while (i != r) {
		s = parent(i)
		parent(i) = r
		i = s
	}
	return r
}
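A minimal Python sketch of Find, Union and Collapsing Find using the same negative-parent convention (a root stores a negative value):

```python
parent = [-1] * 8                # 8 singleton sets; a negative value marks a root

def find(i):                     # plain Find: follow parent links to the root
    while parent[i] >= 0:
        i = parent[i]
    return i

def collapsing_find(i):          # Find with collapsing: relink the path to the root
    r = i
    while parent[r] >= 0:
        r = parent[r]
    while i != r:
        nxt = parent[i]
        parent[i] = r
        i = nxt
    return r

def union(i, j):                 # link root i under root j
    parent[i] = j

union(0, 1)
union(1, 2)                      # chain: 0 -> 1 -> 2 (root)
assert collapsing_find(0) == 2
assert parent[0] == 2            # path collapsed: 0 now points directly at the root
```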

BackTracking

Backtracking uses a brute-force approach to solve problems: it systematically explores all possible solutions. It is not an optimization technique; it is used to find all (possibly multiple) solutions. The exploration is shown as a tree called the State Space Tree.
Every backtracking problem has a constraint (bounding function) that decides whether a node is worth exploring.

N-Queen

Place N queens on an N x N chessboard such that no two queens are on the same row, column or diagonal.

Algorithm Nqueens(k, n) {
	for i = 1 to n do {
		if (Place(k, i)) {
			x[k] = i;
			if (k == n) then 
				print(x[1:n])
			else
				Nqueens(k+1, n)
		}
	}
}
 
Algorithm Place(k, i) {
	for j = 1 to k-1 do {
		if (x[j] == i) // Same Column
		or (Abs(x[j] - i) == Abs(j - k)) // Same Diagonal
		return False;
	}
	return True;
}
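A runnable Python version of the two routines (x[k] holds the column of the queen in row k):

```python
# place(): a queen may go in row k, column i only if no earlier queen
# shares that column or a diagonal (|column difference| == |row difference|).
def place(x, k, i):
    for j in range(1, k):
        if x[j] == i or abs(x[j] - i) == abs(j - k):
            return False
    return True

def n_queens(n):
    solutions = []
    x = [0] * (n + 1)            # 1-indexed; x[0] unused

    def solve(k):
        for i in range(1, n + 1):
            if place(x, k, i):
                x[k] = i
                if k == n:
                    solutions.append(x[1:])
                else:
                    solve(k + 1)
    solve(1)
    return solutions

print(len(n_queens(4)))  # 2 solutions: [2, 4, 1, 3] and [3, 1, 4, 2]
```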

SumOfSubsets

Finding all subsets whose sum is equal to a given number m

Algorithm SumofSubsets (s, k, r) {
	// initially s = 0, k = 1, r = sum of all elements; w is sorted in ascending order
	x[k] = 1
	if (s + w[k] == m) then
		write(x[1:k]);
	elseif (s + w[k] + w[k+1] <= m) then
		SumofSubsets(s+w[k], k+1, r-w[k])
	x[k] = 0
	if ((s + r - w[k] >= m) and (s + w[k+1] <= m)) then
		SumofSubsets(s, k+1, r-w[k])
}
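A runnable Python version (0-indexed; w is sorted in ascending order, as the algorithm requires):

```python
# s = sum chosen so far, k = current index, r = sum of remaining weights.
# The two recursive calls are "include w[k]" and "exclude w[k]", each
# guarded by a bounding check against the target m.
def sum_of_subsets(w, m):
    w = sorted(w)
    found = []
    x = [0] * len(w)

    def solve(s, k, r):
        x[k] = 1
        if s + w[k] == m:
            found.append([w[i] for i in range(k + 1) if x[i]])
        elif k + 1 < len(w) and s + w[k] + w[k + 1] <= m:
            solve(s + w[k], k + 1, r - w[k])
        x[k] = 0
        if k + 1 < len(w) and s + r - w[k] >= m and s + w[k + 1] <= m:
            solve(s, k + 1, r - w[k])

    solve(0, 0, sum(w))
    return found

print(sum_of_subsets([3, 4, 5, 6], 9))  # [[3, 6], [4, 5]]
```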

Graph Coloring

Color the graph such that no two adjacent vertices have the same color.
The minimum number of colors required to color the graph is called Chromatic number
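A small backtracking sketch: try colors 1..m for each vertex, accepting a color only if no already-colored neighbour uses it; the chromatic number is then the smallest m that admits a valid coloring (function names are illustrative):

```python
# adj[v] lists the neighbours of vertex v.
def can_color(adj, m):
    n = len(adj)
    color = [0] * n              # 0 = uncolored

    def solve(v):
        if v == n:
            return True
        for c in range(1, m + 1):
            if all(color[u] != c for u in adj[v]):
                color[v] = c
                if solve(v + 1):
                    return True
                color[v] = 0     # backtrack
        return False

    return solve(0)

def chromatic_number(adj):
    m = 1
    while not can_color(adj, m):
        m += 1
    return m

triangle = [[1, 2], [0, 2], [0, 1]]        # K3: every pair adjacent
square = [[1, 3], [0, 2], [1, 3], [0, 2]]  # C4: a 4-cycle
print(chromatic_number(triangle), chromatic_number(square))  # 3 2
```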

Unit-3

Dynamic Programming

A dynamic programming problem is divided into a number of stages, and at every stage an optimal decision is taken. It follows the Principle of Optimality.
Principle of Optimality: An optimal sequence of decisions has the property that, whatever the first decision is, the remaining decisions must form an optimal sequence for the subproblem that results from it.

Optimal Binary Search Tree (OBST)

Frequencies of every node are given; we need to find the minimum-cost binary search tree. For that we need to find the cost matrix C.

We are given Q & P we need to construct 'C' matrix
 
Initially:
C(k, k) = 0
W(k, k) = Q(k)
 
C(i, j) = min {C(i, k-1) + C(k, j)} + W(i, j)      where i < k <= j
W(i, j) = W(i, j-1) + P(j) + Q(j)
 
Finally the tree is constructed based on cost matrix and 'k' value
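The recurrence can be sketched in Python to compute the minimum cost (tree reconstruction from the k values is omitted):

```python
# p[1..n] are key frequencies, q[0..n] are the failure (gap) frequencies;
# C[i][j] is the minimum cost of a tree over keys i+1..j.
def obst_cost(p, q):
    n = len(p) - 1                      # p[0] is a placeholder
    C = [[0] * (n + 1) for _ in range(n + 1)]
    W = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        W[i][i] = q[i]                  # W(k, k) = Q(k), C(k, k) = 0
    for length in range(1, n + 1):
        for i in range(n - length + 1):
            j = i + length
            W[i][j] = W[i][j - 1] + p[j] + q[j]
            C[i][j] = min(C[i][k - 1] + C[k][j]
                          for k in range(i + 1, j + 1)) + W[i][j]
    return C[0][n]

# Two keys with frequencies p = (4, 2) and gap frequencies q = (1, 1, 1):
print(obst_cost([0, 4, 2], [1, 1, 1]))  # 13
```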

KnapSack 0/1

We are given a set of weights, their profits, and the maximum weight the knapsack can hold. We need to find the subset of objects that gives maximum profit without exceeding the knapsack's capacity.

We are give P (profit) and W (weight)
 
Initially:
S(0) = {(0, 0)}
 
Addition Rule: S1(i-1) = S(i-1) + (P(i), W(i))
Merging Rule: S(i) = S(i-1) ∪ S1(i-1)
 
"We remove some pairs through the Dominance rule, and also any pair whose weight crosses the maximum weight"
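A Python sketch of the set-merging method, with the dominance rule and the capacity purge applied at each merge:

```python
# Each S is a list of (profit, weight) pairs; a pair is dominated if another
# pair achieves at least as much profit at no greater weight.
def knapsack_sets(p, w, m):
    S = [(0, 0)]
    for pi, wi in zip(p, w):
        # Addition rule: shift every pair by (pi, wi), dropping overweight pairs.
        S1 = [(pp + pi, ww + wi) for (pp, ww) in S if ww + wi <= m]
        # Merging rule + dominance: keep only pairs whose profit strictly
        # increases as weight increases.
        merged = sorted(S + S1, key=lambda t: (t[1], -t[0]))
        S = []
        best = -1
        for pp, ww in merged:
            if pp > best:
                S.append((pp, ww))
                best = pp
    return max(pp for pp, ww in S)

print(knapsack_sets([1, 2, 5], [2, 3, 4], 6))  # 6  (objects 1 and 3)
```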

All Pair Shortest Path

From the given graph we first construct an adjacency matrix.

A(k)[i, j] = min{A(k-1)[i, j], A(k-1)[i, k] + A(k-1)[k,j]}
 
for every A(k) matrix we keep the kth row and kth column unchanged
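The recurrence run in place on an adjacency matrix (Floyd-Warshall) in Python; INF marks a missing edge:

```python
INF = float('inf')

# After iteration k, A[i][j] is the shortest i -> j distance using only
# intermediate vertices from {0, ..., k}.
def all_pairs_shortest(A):
    n = len(A)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                A[i][j] = min(A[i][j], A[i][k] + A[k][j])
    return A

A = [[0, 3, INF, 7],
     [8, 0, 2, INF],
     [5, INF, 0, 1],
     [2, INF, INF, 0]]
D = all_pairs_shortest(A)
print(D[0][2])  # 5: path 0 -> 1 -> 2 with cost 3 + 2
```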

Traveling Sales Person

Starting from any place the person has to visit every place exactly once and reach the starting place. Distance travelled should be minimum.

G(i, S) = min { C[i, k] + G(k, S-{k})}            where k belongs to S
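The recurrence with S encoded as a bitmask (Held-Karp) in Python:

```python
from functools import lru_cache

# g(i, S) = cheapest cost of starting at city i, visiting every city in the
# set S exactly once, and returning to the start city 0.
def tsp(C):
    n = len(C)

    @lru_cache(maxsize=None)
    def g(i, S):
        if S == 0:                       # nothing left to visit: return home
            return C[i][0]
        return min(C[i][k] + g(k, S & ~(1 << k))
                   for k in range(1, n) if S & (1 << k))

    full = (1 << n) - 2                  # all cities except the start, city 0
    return g(0, full)

C = [[0, 10, 15, 20],
     [10, 0, 35, 25],
     [15, 35, 0, 30],
     [20, 25, 30, 0]]
print(tsp(C))  # 80: tour 0 -> 1 -> 3 -> 2 -> 0
```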

Reliability Design

We are given a number of devices with their cost and reliability, and the total amount to be spent. We need to find the design with maximum reliability.

First, 1 unit of each device is mandatory, so subtract the cost of each device from the total amount.
 
Calculate the upper bound (U) for each device, i.e. from the remaining amount, how many units can be bought for a single device.
 
Initially:
S(0) = {(1, 0)}
 
if n = number of units of device i then
reliability of that stage = 1 - (1 - r(i))**n
 
"We remove some cases through the Dominance rule, and also any case whose cost crosses the maximum amount"

Unit-4

Greedy Method

Job Sequencing

Here we are given a number of jobs ‘J’, their profits ‘P’ and deadlines ‘D’. We need to schedule the jobs in such a way that we get maximum profit within the deadlines.
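A greedy Python sketch: take jobs in decreasing order of profit and put each into the latest free unit-time slot at or before its deadline:

```python
# Each job takes one unit of time; slot[t] holds the profit of the job
# scheduled to finish at time t (None = free).
def job_sequencing(profits, deadlines):
    jobs = sorted(zip(profits, deadlines), reverse=True)  # highest profit first
    max_d = max(deadlines)
    slot = [None] * (max_d + 1)
    total = 0
    for p, d in jobs:
        for t in range(d, 0, -1):        # latest free slot <= deadline
            if slot[t] is None:
                slot[t] = p
                total += p
                break
    return total

print(job_sequencing([100, 19, 27, 25, 15], [2, 1, 2, 1, 3]))  # 142
```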

KnapSack

Here there are three ways to solve the problem.
Case 1: We start selecting the objects based on their weight, from lowest to highest, without crossing the maximum weight.
Case 2: We start selecting the objects based on their profit, from highest to lowest, without crossing the maximum weight.
Case 3: We start selecting the objects based on their profit-per-weight ratio, from highest to lowest, without crossing the maximum weight (we may also include a partial object). This gives the best optimal solution.

Algorithm GreedyKnapSack(m, n) {
	// objects are assumed sorted by p[i]/w[i] in decreasing order
	for i = 1 to n do
		x[i] = 0;
	for i = 1 to n do {
		if (w[i] > m) then break;
		x[i] = 1;
		m = m - w[i];
	}
	if (i <= n) then x[i] = m / w[i];
}

Time Complexity:

For all feasible solutions: O(2**n)
For one solution: O(n)
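Case 3 (profit-per-weight, allowing a fraction of the last object) as runnable Python:

```python
# Sort by profit/weight ratio, take whole objects while they fit, then a
# fraction of the first object that does not fit.
def greedy_knapsack(p, w, m):
    items = sorted(zip(p, w), key=lambda t: t[0] / t[1], reverse=True)
    profit = 0.0
    for pi, wi in items:
        if wi <= m:                      # take the whole object
            profit += pi
            m -= wi
        else:                            # take the fraction that still fits
            profit += pi * m / wi
            break
    return profit

print(greedy_knapsack([25, 24, 15], [18, 15, 10], 20))  # 31.5
```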

Kruskal’s Algorithm

We find minimum cost spanning tree of a graph.
First select the minimum-cost edge of the graph; then repeatedly select the next minimum-cost edge that does not form a cycle, until n-1 edges are selected.

Algorithm Kruskal(E, cost, n, t) {
	i = 0, mincost = 0;
	while (i < n-1) do {
		delete mincost edge (u, v) from E
		j = Find(u); k = Find(v)
		if (j != k) then {
			i = i+1
			t[i, 1] = u, t[i, 2] = v;
			mincost = mincost + cost(u, v)
			union(u, v)
		}
	}
	if (i != n - 1) then write "No Spanning Tree"
	else return mincost;
}

Time Complexity

T(n) = O((n + E)log(n))
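A runnable Python sketch using a simple Find to detect cycles (edges given as (cost, u, v) triples):

```python
# Kruskal's algorithm: process edges in increasing cost order, keeping an
# edge only if its endpoints lie in different trees (different roots).
def kruskal(n, edges):
    parent = list(range(n))              # each vertex is its own root

    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i

    mst_cost, picked = 0, 0
    for cost, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                     # edge joins two different trees
            parent[ru] = rv              # union
            mst_cost += cost
            picked += 1
    return mst_cost if picked == n - 1 else None  # None: no spanning tree

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))  # 6: edges (0,1), (1,3), (1,2)
```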

Prim’s Algorithm

We find minimum cost spanning tree of a graph.
First select a minimum-cost edge of the graph; then repeatedly select the next minimum-cost edge connected to the tree built so far, without forming a cycle.

Algorithm Prims(E, cost, n, t) {
	let (k, l) be a mincost edge in E; add it to t
	i = 1
	while (i < n-1) {
		select the next mincost edge connected to the tree
		if it does not form a cycle, add it to t
		i = i + 1
	}
}

Time Complexity

T(n) = O(n**2)
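A runnable Python sketch on a cost adjacency matrix, growing the tree from vertex 0:

```python
INF = float('inf')

# dist[v] tracks the cheapest edge from the tree to non-tree vertex v;
# each round, the cheapest such vertex joins the tree.
def prim(cost):
    n = len(cost)
    in_tree = [False] * n
    dist = cost[0][:]
    in_tree[0] = True
    total = 0
    for _ in range(n - 1):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: dist[v])
        total += dist[u]
        in_tree[u] = True
        for v in range(n):               # relax edges out of the new tree vertex
            if not in_tree[v] and cost[u][v] < dist[v]:
                dist[v] = cost[u][v]
    return total

cost = [[INF, 1, 4, INF],
        [1, INF, 3, 2],
        [4, 3, INF, 5],
        [INF, 2, 5, INF]]
print(prim(cost))  # 6: edges (0,1), (1,3), (1,2)
```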

Single Source Shortest Path (Dijkstra’s)

Used to determine the shortest path from a source vertex v to all remaining vertices in a graph.

Algorithm Dijkstra {
	for i = 1 to n do {
		s[i] = false;
		dist[i] = cost[v, i];
	}
	s[v] = true
	dist[v] = 0
	for i = 2 to n do {
		choose u from s such that dist[u] is min
		s[u] = true
		for (each w adjacent to u with s[w] = false) do {
			if (dist[w] > dist[u] + cost[u, w]) then
				dist[w] = dist[u] + cost[u, w]
		}
	}
	
}

Time Complexity

T(n) = O(n**2) with an adjacency matrix (as above); O((|E| + n)log(n)) with a heap
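A runnable Python version of the pseudocode, with vertex 0 as the source and an adjacency-matrix cost (INF = no edge):

```python
INF = float('inf')

# s[w] marks vertices whose shortest distance is final; each round the
# closest unfinalized vertex u is fixed and its outgoing edges relaxed.
def dijkstra(cost, v=0):
    n = len(cost)
    s = [False] * n
    dist = cost[v][:]
    s[v] = True
    dist[v] = 0
    for _ in range(n - 1):
        u = min((w for w in range(n) if not s[w]), key=lambda w: dist[w])
        s[u] = True
        for w in range(n):
            if not s[w] and dist[u] + cost[u][w] < dist[w]:
                dist[w] = dist[u] + cost[u][w]
    return dist

cost = [[0, 4, 1, INF],
        [4, 0, 2, 5],
        [1, 2, 0, 8],
        [INF, 5, 8, 0]]
print(dijkstra(cost))  # [0, 3, 1, 8]
```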

Unit-5

Problems are classified into two groups:

  1. That can be solved in polynomial time by using deterministic algorithm.
    • Searching for an element from an array
    • Sorting given array
  2. That can be solved in polynomial time by using non-deterministic algorithm
    • Knapsack problem
    • Traveling sales person problem

Deterministic Algorithm: Here every operation is uniquely defined.
Non-Deterministic Algorithm: Every operation may not have a unique result; instead it results in a set of possibilities.

Types of problems:

  1. P - Problem
  2. NP - Problem
    1. NP - Complete
    2. NP - Hard