DAA Module 5


MODULE V

Analysis, Comparison of Divide and Conquer and Dynamic Programming strategies


Greedy Strategy: - The Control Abstraction- the Fractional Knapsack Problem,
Minimal Cost Spanning Tree Computation- Prim’s Algorithm – Kruskal’s Algorithm.
Divide & Conquer

1. The divide-and-conquer paradigm involves three steps at each level of the recursion:
• Divide the problem into a number of subproblems.
• Conquer the subproblems by solving them recursively. If the subproblem sizes are small
enough, however, just solve the subproblems in a straightforward manner.
• Combine the solutions to the subproblems into the solution for the original problem.

2. Divide-and-conquer algorithms call themselves recursively one or more times to deal with
closely related subproblems.

3. D&C may solve the same subproblem repeatedly, so it can do more work on the subproblems and hence consume more time.

4. In D&C the subproblems are independent of each other.

5. Example: Merge Sort, Binary Search
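
As an illustration of the three steps, a minimal merge sort sketch in Python (a sketch only; the function names are illustrative):

def merge_sort(a):
    """Sort a list by divide and conquer."""
    if len(a) <= 1:                 # subproblem small enough: solve directly
        return a
    mid = len(a) // 2               # divide into two halves
    left = merge_sort(a[:mid])      # conquer each half recursively
    right = merge_sort(a[mid:])
    return merge(left, right)       # combine the sorted halves

def merge(left, right):
    """Combine two sorted lists into one sorted list."""
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result

print(merge_sort([5, 2, 8, 1, 9]))  # [1, 2, 5, 8, 9]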

Dynamic Programming

1. The development of a dynamic-programming algorithm can be broken into a sequence of
four steps:
a. Characterize the structure of an optimal solution.
b. Recursively define the value of an optimal solution.
c. Compute the value of an optimal solution in a bottom-up fashion.
d. Construct an optimal solution from computed information.

2. Dynamic programming typically works bottom-up (iteratively) rather than through plain recursion.

3. DP solves each subproblem only once and stores the result in a table.

4. In DP the subproblems are not independent; they overlap.

5. Example: Matrix chain multiplication
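
As an illustration, a minimal bottom-up sketch of matrix chain multiplication in Python, assuming dims holds the matrix dimensions so that matrix i has dimensions dims[i-1] × dims[i] (names are illustrative):

def matrix_chain_order(dims):
    """Minimum scalar multiplications needed to multiply a chain of matrices.
    Subproblem m[i][j] = cheapest cost of computing the product Ai..Aj;
    each subproblem is solved once and stored in the table (bottom-up DP)."""
    n = len(dims) - 1                          # number of matrices
    m = [[0] * (n + 1) for _ in range(n + 1)]  # cost of a single matrix is 0
    for length in range(2, n + 1):             # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(
                m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)           # try every split point
            )
    return m[1][n]

# A1 (10x30), A2 (30x5), A3 (5x60): the optimal cost is 4500
print(matrix_chain_order([10, 30, 5, 60]))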

The Principle of Optimality


To use dynamic programming, the problem must observe the principle of optimality: whatever
the initial state and decision are, the remaining decisions must be optimal with regard to the
state resulting from the first decision. For example, in a shortest path from A to C that passes
through B, the sub-path from B to C must itself be a shortest path.


Greedy Algorithm
A greedy algorithm is an algorithmic strategy that makes the locally optimal choice at each
stage in the hope that this eventually leads to a globally optimal solution. This means that
the algorithm picks the best solution available at the moment without regard for
consequences: it picks the best immediate output but does not consider the big picture, hence
it is considered greedy.
Components of Greedy Algorithm
Greedy algorithms have the following five components −
 A candidate set − A solution is created from this set.
 A selection function − Used to choose the best candidate to be added to the solution.
 A feasibility function − Used to determine whether a candidate can be used to
contribute to the solution.
 An objective function − Used to assign a value to a solution or a partial solution.
 A solution function − Used to indicate whether a complete solution has been
reached.

General method:
– Given n inputs, choose a subset that satisfies some constraints.
– A subset that satisfies the constraints is called a feasible solution.
– A feasible solution that maximises or minimises a given (objective) function is said to be
optimal.
Often it is easy to find a feasible solution but difficult to find the optimal solution.
The greedy method suggests that one can devise an algorithm that works in stages. At each
stage a decision is made as to whether a particular input is in the optimal solution. This is
called the subset paradigm.
Control Abstraction for Greedy Algorithm

Algorithm Greedy(A : set; n : integer)
{
    MakeEmpty(solution);
    for (i = 1; i <= n; i++)
    {
        x = Select(A);
        if Feasible(solution, x) then
            solution = Union(solution, {x});
    }
    return solution;
}
The function Greedy describes the essential way a greedy algorithm will look, once a
particular problem is chosen and the functions Select, Feasible and Union are properly
implemented. The function Select selects an input from A whose value is assigned to x.
Feasible is a Boolean-valued function that determines whether x can be included in the
solution vector. The function Union combines x with the solution and updates the objective
function.
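
A minimal Python rendering of this abstraction, where Select, Feasible and Union are supplied per problem (the parameter names mirror the pseudocode above; the example data is illustrative):

def greedy(candidates, select, feasible, union):
    """Generic greedy skeleton: repeatedly pick the best remaining
    candidate and keep it only if it preserves feasibility."""
    solution = []                           # MakeEmpty(solution)
    remaining = list(candidates)
    while remaining:
        x = select(remaining)               # Select: pick the best candidate
        remaining.remove(x)
        if feasible(solution, x):           # Feasible: may x join the solution?
            solution = union(solution, x)   # Union: add x to the solution
    return solution

# Example use: greedily pick numbers (largest first) keeping the sum <= 10
picked = greedy(
    [7, 5, 4, 2],
    select=max,
    feasible=lambda sol, x: sum(sol) + x <= 10,
    union=lambda sol, x: sol + [x],
)
print(picked)  # [7, 2]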

Fractional Knapsack Problem


Given a set of items, each with a weight and a value, determine a subset of items to include
in a collection so that the total weight is less than or equal to a given limit and the total value
is as large as possible.
The knapsack problem is a combinatorial optimization problem. It appears as a subproblem
in many more complex mathematical models of real-world problems. One general approach
to difficult problems is to identify the most restrictive constraint, ignore the others, solve a
knapsack problem, and somehow adjust the solution to satisfy the ignored constraints.
Fractional Knapsack
In this case, items can be broken into smaller pieces, hence the thief can select fractions of
items.
According to the problem statement,
 There are n items in the store
 Weight of the i-th item is wi > 0
 Profit of the i-th item is pi > 0, and
 Capacity of the knapsack is W
Since items can be broken into smaller pieces, the thief may take only a fraction xi of the
i-th item, where 0 ⩽ xi ⩽ 1.
The i-th item then contributes the weight xi·wi to the total weight in the knapsack and the
profit xi·pi to the total profit.
Hence, the objective of this algorithm is to

maximize Σ (xi·pi), the sum taken over i = 1 to n,

subject to the constraint

Σ (xi·wi) ⩽ W.

It is clear that an optimal solution must fill the knapsack exactly, for otherwise we could add
a fraction of one of the remaining items and increase the overall profit. Thus, an optimal
solution is obtained when

Σ (xi·wi) = W.
In this context, first we need to sort the items according to the value of pi/wi, so
that pi+1/wi+1 ≤ pi/wi. Here, x is an array that stores the fraction taken of each item.

Greedy-fractional-knapsack (w, p, W)
    for i = 1 to n
        do x[i] = 0
    weight = 0
    while weight < W
        do i = best remaining item
           if weight + w[i] ≤ W
               then x[i] = 1
                    weight = weight + w[i]
               else x[i] = (W − weight) / w[i]
                    weight = W
    return x
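
A runnable Python sketch of the same procedure, using a pre-sort by profit density instead of repeatedly picking the best remaining item (a sketch only; variable names are illustrative):

def fractional_knapsack(w, p, W):
    """Greedy fractional knapsack: take items in decreasing order of
    density p[i]/w[i], breaking the last item if it does not fit whole."""
    n = len(w)
    order = sorted(range(n), key=lambda i: p[i] / w[i], reverse=True)
    x = [0.0] * n                   # fraction of each item taken
    weight = 0.0
    for i in order:
        if weight + w[i] <= W:      # item fits entirely
            x[i] = 1.0
            weight += w[i]
        else:                       # take the fraction that fills the sack
            x[i] = (W - weight) / w[i]
            break
    return x

# Data from the example below: weights, values, capacity W = 16
w = [6, 10, 3, 5, 1, 3]
p = [6, 2, 1, 8, 3, 5]
x = fractional_knapsack(w, p, 16)
print(sum(xi * pi for xi, pi in zip(x, p)))  # total value ≈ 22.333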

Analysis
If we keep the items in a max-heap with the largest pi/wi at the root, then:
 creating the heap takes O(n) time
 each iteration of the while-loop takes O(log n) time (since the heap property must be
restored after the removal of the root)
Example (knapsack capacity W = 16)

ITEM  WEIGHT  VALUE
i1    6       6
i2    10      2
i3    3       1
i4    5       8
i5    1       3
i6    3       5

First compute the density (value/weight) of each item:

ITEM  WEIGHT  VALUE  DENSITY
i1    6       6      1.000
i2    10      2      0.200
i3    3       1      0.333
i4    5       8      1.600
i5    1       3      3.000
i6    3       5      1.667

Then sort the items in decreasing order of density:

ITEM  WEIGHT  VALUE  DENSITY
i5    1       3      3.000
i6    3       5      1.667
i4    5       8      1.600
i1    6       6      1.000
i3    3       1      0.333
i2    10      2      0.200

Values after calculation (items taken greedily in density order until the knapsack is full):

ITEM  WEIGHT TAKEN  VALUE GAINED  TOTAL WEIGHT  TOTAL VALUE
i5    1             3.000         1.000         3.000
i6    3             5.000         4.000         8.000
i4    5             8.000         9.000         16.000
i1    6             6.000         15.000        22.000
i3    1             0.333         16.000        22.333

So, total weight in the knapsack = 16 and total value inside it = 22.333.
MINIMUM SPANNING TREE (MST)

In a weighted graph, a minimum spanning tree is a spanning tree whose total weight is no
greater than that of any other spanning tree of the same graph. In real-world situations, this
weight can represent distance, congestion, traffic load or any other value assigned to the edges.

Minimum Spanning-Tree Algorithms

 Prim’s Algorithm
 Kruskal's Algorithm

Prim's algorithm

Prim's algorithm is a greedy algorithm that finds a minimum spanning tree for a connected
weighted undirected graph.
It finds a subset of the edges that forms a tree that includes every vertex, where the total
weight of all the edges in the tree is minimized.

We start from one vertex and keep adding edges with the lowest weight until we reach our
goal.

The steps for implementing Prim's algorithm are as follows:

1. Initialize the minimum spanning tree with a vertex chosen at random.
2. Find all the edges that connect the tree to new vertices, pick the one with minimum
weight and add it to the tree.
3. Keep repeating step 2 until the tree spans every vertex.

The following notation is used in the standard pseudocode for Prim's algorithm:

G – a connected weighted graph
r – the root of the minimum spanning tree
Q – a min-priority queue
v.key – the minimum weight of any edge connecting v to a vertex in the tree
v.π – the parent of v in the tree
The time complexity of Prim's algorithm is O(V^2) with an adjacency-matrix representation.
If the input graph is represented using an adjacency list, the time complexity can be reduced
to O(E log V) with the help of a binary heap.
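
A minimal Python sketch of Prim's algorithm with a binary heap (Python's heapq), assuming the graph is given as an adjacency list; the sample graph and names are illustrative:

import heapq

def prim_mst(graph, root):
    """Prim's algorithm: grow the tree from root, always adding the
    cheapest edge that reaches a new vertex. graph maps each vertex
    to a list of (neighbour, weight) pairs. Runs in O(E log V)."""
    visited = set()
    total = 0
    heap = [(0, root)]                   # (edge weight, vertex)
    while heap and len(visited) < len(graph):
        weight, u = heapq.heappop(heap)
        if u in visited:                 # stale entry: u already in the tree
            continue
        visited.add(u)
        total += weight
        for v, w in graph[u]:            # edges leaving the tree via u
            if v not in visited:
                heapq.heappush(heap, (w, v))
    return total

g = {
    'a': [('b', 4), ('h', 8)],
    'b': [('a', 4), ('c', 8), ('h', 11)],
    'c': [('b', 8), ('h', 2)],
    'h': [('a', 8), ('b', 11), ('c', 2)],
}
print(prim_mst(g, 'a'))  # 14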

Kruskal's Algorithm

Kruskal's algorithm finds the minimum cost spanning tree using the greedy approach. The
algorithm treats the graph as a forest and every vertex as an individual tree. A tree connects
to another one only if the connecting edge has the least cost among all available options and
does not violate the MST properties.
1. Sort all the edges in non-decreasing order of their weight.
2. Pick the smallest edge. Check whether it forms a cycle with the spanning tree formed so far.
If no cycle is formed, include this edge; else, discard it.
3. Repeat step 2 until there are (V − 1) edges in the spanning tree.

Our implementation of Kruskal's algorithm uses a disjoint-set data structure to maintain
several disjoint sets of elements. Each set contains the vertices in one tree of the current
forest. The operation FIND-SET(u) returns a representative element from the set that
contains u. Thus, we can determine whether two vertices u and v belong to the same tree by
testing whether FIND-SET(u) equals FIND-SET(v). To combine trees, Kruskal's algorithm
calls the UNION procedure.
Time Complexity: O(E log E) or O(E log V). Sorting the edges takes O(E log E) time. After
sorting, we iterate through all edges and apply the find-union algorithm. The find and union
operations can take at most O(log V) time. So the overall complexity is O(E log E + E log V)
time. The value of E can be at most O(V^2), so O(log V) and O(log E) are the same
asymptotically. Therefore, the overall time complexity is O(E log E) or O(E log V).
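
A minimal Python sketch of Kruskal's algorithm using a simple disjoint-set with union by rank and path compression (vertex labels and names are illustrative):

def kruskal_mst(n, edges):
    """Kruskal's algorithm. n vertices labelled 0..n-1;
    edges is a list of (weight, u, v) tuples.
    Returns (total weight, list of chosen edges)."""
    parent = list(range(n))
    rank = [0] * n

    def find(u):                          # FIND-SET with path compression
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    def union(u, v):                      # UNION by rank
        ru, rv = find(u), find(v)
        if rank[ru] < rank[rv]:
            ru, rv = rv, ru
        parent[rv] = ru
        if rank[ru] == rank[rv]:
            rank[ru] += 1

    total, mst = 0, []
    for weight, u, v in sorted(edges):    # step 1: sort edges by weight
        if find(u) != find(v):            # step 2: skip cycle-forming edges
            union(u, v)
            total += weight
            mst.append((u, v, weight))
        if len(mst) == n - 1:             # step 3: stop at V - 1 edges
            break
    return total, mst

edges = [(4, 0, 1), (8, 0, 3), (11, 1, 3), (8, 1, 2), (2, 2, 3)]
print(kruskal_mst(4, edges))  # (14, [(2, 3, 2), (0, 1, 4), (0, 3, 8)])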
