Ada Lab Manual 008
B.E. Semester 5
(Computer Engineering)
Faculty Details:
1) Prof. H. K. GEVARIYA
2) Prof. S. A. PATEL
3) Prof. S. R. MODI
L. D. College of Engineering, Ahmedabad
Certificate
Place:
Date:
Institute’s Vision
To contribute for sustainable development of nation through achieving excellence in
technical education and research while facilitating transformation of students into responsible
citizens and competent professionals.
Institute’s Mission
● To impart affordable and quality education in order to meet the needs of industries and
achieve excellence in teaching-learning process.
● To collaborate with other academic & research institutes as well as industries in order to
strengthen education and multidisciplinary research.
● To promote equitable and harmonious growth of students, academicians, staff, society and
industries, thereby becoming a center of excellence in technical education.
Department’s Vision
▪ To Achieve Academic Excellence in Computer Engineering by Providing Value Based
Education.
Department’s Mission
▪ To produce graduates according to the needs of industry, government, society and
scientific community.
▪ To enhance the ability of students to address the real life issues by applying technical
expertise, human values and professional ethics.
▪ To inculcate habit of using free and open source software, latest technology and soft skills
so that they become competent professionals.
▪ To encourage faculty members to upgrade their skills and qualification through training
and higher studies at reputed universities.
● Provide computing solutions of complex problems as per business and societal needs.
● Procure requisite skills to pursue entrepreneurship, research and development, and
imbibe high degree of professionalism in the fields of computing.
● Embrace life-long learning and remain continuously employable.
● Work and excel in a highly competitive, supportive, multicultural and professional
environment while abiding by legal and ethical responsibilities.
▪ Graduates will be able to explore and propose effective solutions to the problems in the
area of Computer Engineering as per the needs of society and industry.
▪ Graduates will be able to apply standard practice and strategies to develop quality software
products using modern techniques, programming skills, tools & an open ended
programming environment and work in a team.
▪ Graduates will manifest the skills of continuous learning in the fast-changing field of
Computer Engineering.
Programme Outcomes (POs)
12. Life-long learning: Recognize the need for, and have the preparation and ability to engage in
independent and life-long learning in the broadest context of technological change.
Analysis and Design of Algorithms (3150703)
Index
(Progressive Assessment Sheet)
Experiment No: 1
Implement a function for each of the following problems and count the number of steps executed/time
taken by each function on various inputs, and write an equation for the growth rate of each function.
Also draw a comparative chart of number of inputs versus steps executed/time taken. In each of the
following functions, N will be passed by the user.
1. To calculate sum of 1 to N numbers using loop.
2. To calculate sum of 1 to N numbers using equation.
3. To calculate sum of 1 to N numbers using recursion.
Date:
Theory:
Implement three functions based on the above steps and calculate the time taken by each
function on various inputs ranging from 100 to 500.
Code 1:
#include <stdio.h>
#include <time.h>
int main() {
int N, sum = 0;
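The listing above is truncated in the manual. A minimal runnable sketch of the loop-based version, using the same clock()-based timing as the other listings (the helper name sumLoop is illustrative), is:

#include <stdio.h>
#include <time.h>

/* Sum of 1 to N using a loop: O(N) steps. */
long long sumLoop(int N) {
    long long sum = 0;
    for (int i = 1; i <= N; i++) {
        sum += i;
    }
    return sum;
}

int main() {
    int N;
    printf("Enter N: ");
    scanf("%d", &N);
    clock_t start = clock();
    long long sum = sumLoop(N);
    clock_t end = clock();
    printf("Sum = %lld\n", sum);
    printf("Time taken: %f s\n", (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}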
Code 2:
#include <stdio.h>
#include <time.h>
int main() {
int N, sum;
clock_t start_time, end_time;
double elapsed_time;
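The equation-based listing is likewise truncated. A minimal sketch using the closed form N*(N+1)/2 (the helper name sumFormula is illustrative) is:

#include <stdio.h>
#include <time.h>

/* Sum of 1 to N using the closed form N*(N+1)/2: O(1) steps. */
long long sumFormula(int N) {
    return (long long)N * (N + 1) / 2;
}

int main() {
    int N;
    printf("Enter N: ");
    scanf("%d", &N);
    clock_t start = clock();
    long long sum = sumFormula(N);
    clock_t end = clock();
    printf("Sum = %lld\n", sum);
    printf("Time taken: %f s\n", (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}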
Code 3:
#include <stdio.h>
#include <time.h>
int main() {
int N, result;
clock_t start_time, end_time;
double elapsed_time;
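The recursive listing is also truncated. A minimal sketch (the helper name sumRecursive is illustrative; note the O(N) recursion depth, which limits how large N can be) is:

#include <stdio.h>
#include <time.h>

/* Sum of 1 to N using recursion: O(N) calls and O(N) stack depth. */
long long sumRecursive(int N) {
    if (N <= 0)
        return 0;
    return N + sumRecursive(N - 1);
}

int main() {
    int N;
    printf("Enter N: ");
    scanf("%d", &N);
    clock_t start = clock();
    long long result = sumRecursive(N);
    clock_t end = clock();
    printf("Sum = %lld\n", result);
    printf("Time taken: %f s\n", (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}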
Observations:
Write observation based on time taken executed by each algorithm.
Result: Complete the below table based on your implementation of functions and time taken by
each function.
Time taken (as recorded)

Inputs    Loop method    Equations    Recursion
100       0.30           0.06         0.22
200       0.47           0.05         0.62
300       0.50           0.04         0.76
Chart:
Conclusion:
In analyzing the methods to calculate the sum of numbers from 1 to N, we find that the loop-based
approach involves iterative addition, resulting in a time complexity of O(N). The equation-based
approach, using the formula N*(N+1)/2, provides a constant time solution with a time complexity
of O(1), making it the most efficient for large values of N. The recursive method, on the other
hand, relies on function calls and stack operations, resulting in a time complexity of O(N) and
potentially leading to stack overflow errors for very large values of N. In conclusion, the
equation-based approach stands out as the most efficient and scalable solution for calculating the
sum of 1 to N numbers.
Quiz:
1. What is the meaning of constant growth rate of an algorithm?
Answer: The term "constant growth rate" typically refers to the behavior of an algorithm as the
input size increases. When an algorithm is said to have a constant growth rate, it means that the
time (or space) complexity of the algorithm remains constant or doesn't significantly change as
the size of the input increases.
2. If one algorithm has a growth rate of n^2 and a second algorithm has a growth rate of n, then
which algorithm executes faster? Why?
Answer: The algorithm with linear time complexity (n) will execute faster for sufficiently large
input sizes. This is because linear time complexity implies that the algorithm's execution time
grows linearly with the size of the input, while quadratic time complexity implies that the
execution time grows quadratically with the input size.
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.
Experiment No: 2
Write user defined functions for the following sorting methods and compare their performance by
steps executed/time taken for execution on various inputs of random nature, ascending order, and
descending order sorted data. Also, draw a comparative chart of number of inputs versus steps
executed/time taken for each case (random, ascending, and descending).
1. Selection Sort
2. Bubble Sort
3. Insertion Sort
4. Merge Sort
5. Quick Sort
Date:
Theory:
Sorting is a fundamental concept in Data Structures and Algorithms (DSA) that involves
arranging elements in a specific order, typically in ascending or descending order. It is essential
for various computational tasks and plays a crucial role in optimizing search and retrieval
operations. Sorting algorithms are classified into two main categories: comparison-based and non-
comparison-based. Comparison-based sorting algorithms, such as Quicksort, Mergesort, and
Heapsort, compare elements using a defined comparison function and rearrange them based on the
outcomes. Non-comparison-based sorting algorithms, like Counting Sort and Radix Sort, exploit
specific properties of the data to achieve linear or near-linear time complexity. The choice of
sorting algorithm depends on factors like data size, data distribution, and desired time complexity.
Efficient sorting is vital for various applications, including database management, search engines,
and data analysis, making it a central topic in DSA.
Implement the program to use the above sorting functions and calculate the time taken by each
function on various inputs ranging from 1000 to 5000.
Selection sort:
#include <stdio.h>
#include <time.h>
int main()
{
    int n, temp, i, j;
    static int a[5000];              /* the exercise uses inputs of up to 5000 elements */
    clock_t start, end;
    double tc;
    printf("Enter total number:");
    scanf("%d", &n);
    for (i = 0; i < n; i++) {
        scanf("%d", &a[i]);
    }
    start = clock();
    for (i = 0; i < n; i++) {
        for (j = i + 1; j < n; j++) {
            if (a[i] > a[j]) {
                temp = a[i];
                a[i] = a[j];
                a[j] = temp;
            }
        }
    }
    end = clock();
    for (i = 0; i < n; i++) {
        printf("%d ", a[i]);
    }
    tc = (double)(end - start) / CLOCKS_PER_SEC;
    printf("\nTime Taken: %lf ", tc);
return 0;
}
Bubble sort:
#include <stdio.h>
#include <time.h>
int main(){
    int n, t, i, j;
    static int a[5000];              /* the exercise uses inputs of up to 5000 elements */
    clock_t start, end;
    double tc;
    printf("Enter the total number:");
    scanf("%d", &n);
    for (i = 0; i < n; i++)
    {
        scanf("%d", &a[i]);
    }
    start = clock();
    for (i = 0; i < n - 1; i++) {
        for (j = 0; j < n - 1; j++) {
            if (a[j] > a[j + 1]) {
                t = a[j];
                a[j] = a[j + 1];
                a[j + 1] = t;
            }
        }
    }
    end = clock();
    for (i = 0; i < n; i++) {
        printf("%d ", a[i]);
    }
    tc = (double)(end - start) / CLOCKS_PER_SEC;
    printf("\nTime Taken: %lf ", tc);
    return 0;
}
#include <stdio.h>
#include <time.h>
int main() {
    int n, temp, i, j, k;
    static int a[5000];              /* the exercise uses inputs of up to 5000 elements */
    clock_t start, end;
double tc;
printf("Enter total number:");
scanf("%d",&n);
start=clock();
for(i=0;i<n;i++)
{
scanf("%d",&a[i]);
for(k=0;k<i+1;k++)
{
for(j=k+1;j<i+1;j++)
{
if(a[k]>a[j])
{
temp=a[k];
a[k]=a[j];
a[j]=temp;
}
}
}
}
end=clock();
for(i=0;i<n;i++)
{
printf("%d ",a[i]);
}
tc=difftime(end, start)/CLOCKS_PER_SEC;
printf("\nTime Taken: %lf ",tc);
return 0;
}
4. Merge Sort Program in C
#include <stdio.h>
#include <time.h>
#define max 10
int a[7]={7,6,5,4,3,2,1};
int b[10];
time_t start,end;
double tc;
while(l1<=m) b[i++]=a[l1++];
while(l2<=h) b[i++]=a[l2++];
for(i=l;i<=h;i++)
{
a[i]=b[i];
}
}
for(i=0;i<7;i++)
{
printf("%d ",a[i]);
}
tc=difftime(end, start)/CLOCKS_PER_SEC;
printf("\nTime Taken: %lf ",tc);
return 0;
}
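The merge sort listing above is incomplete in the manual: the merging and mergeSort function headers, the recursive calls and the timing calls are missing. A self-contained sketch of the same top-down merge sort on the global arrays a[] and b[] (function names are illustrative) is:

#include <stdio.h>
#include <time.h>

int a[7] = {7, 6, 5, 4, 3, 2, 1};
int b[7];                          /* auxiliary array used during merging */

/* Merge the sorted halves a[l..m] and a[m+1..h] back into a[l..h]. */
void merging(int l, int m, int h) {
    int l1 = l, l2 = m + 1, i = l;
    while (l1 <= m && l2 <= h) {
        if (a[l1] <= a[l2])
            b[i++] = a[l1++];
        else
            b[i++] = a[l2++];
    }
    while (l1 <= m) b[i++] = a[l1++];
    while (l2 <= h) b[i++] = a[l2++];
    for (i = l; i <= h; i++)
        a[i] = b[i];
}

/* Recursively split, sort and merge: O(n log n). */
void mergeSort(int l, int h) {
    if (l < h) {
        int m = (l + h) / 2;
        mergeSort(l, m);
        mergeSort(m + 1, h);
        merging(l, m, h);
    }
}

int main() {
    clock_t start, end;
    int i;
    start = clock();
    mergeSort(0, 6);
    end = clock();
    for (i = 0; i < 7; i++)
        printf("%d ", a[i]);
    printf("\nTime Taken: %lf ", (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}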
Quick sort:
#include <stdio.h>
#include <time.h>

/* The function header below is reconstructed; the original listing is truncated in the manual. */
void quicksort(int number[], int first, int last)
{
    int i, j, pivot, temp;
    if (first < last)
    {
        pivot = first;
        i = first;
        j = last;
        while (i < j)
        {
            while (number[i] <= number[pivot] && i < last)
                i++;
            while (number[j] > number[pivot])
                j--;
            if (i < j)
            {
                temp = number[i];
                number[i] = number[j];
                number[j] = temp;
            }
        }
        temp = number[pivot];
        number[pivot] = number[j];
        number[j] = temp;
        quicksort(number, first, j - 1);
        quicksort(number, j + 1, last);
    }
}
int main()
{
    int i, count, number[25];
    clock_t start, end;
    printf("How many elements are u going to enter?: ");
    scanf("%d", &count);
    for (i = 0; i < count; i++)
    {
        scanf("%d", &number[i]);
    }
    start = clock();
    quicksort(number, 0, count - 1);
    end = clock();
    printf("Sorted elements: ");
    for (i = 0; i < count; i++)
    {
        printf("%d ", number[i]);
    }
    printf("\nTime Taken: %lf ", (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}
Result: Complete the below table based on your implementation of functions and time taken by
each function. Also, prepare similar tables for ascending order sorted data and descending
order sorted data.
Chart:
Conclusion:
For large data that is already in ascending order, the simple quadratic sorts (bubble sort with an
early-exit check, and especially insertion sort) behave best, since they perform almost no swaps. For
data in descending order and for random data, merge sort and quick sort are generally fastest because
of their O(n log n) average behaviour; note, however, that quick sort with the first element as pivot
degrades to O(n^2) on already-sorted or reverse-sorted input, so merge sort is the most robust choice
across all three cases.
Quiz:
1. Which sorting function executes faster in case of ascending order sorted data?
Answer: For data already sorted in ascending order, insertion sort (or bubble sort with an early-exit
flag) runs fastest, since it needs only about n comparisons and no swaps.
2. Which sorting function executes faster in case of descending order sorted data?
Answer: For descending (reverse-sorted) data, merge sort executes fastest among the implemented
methods, since it stays O(n log n) regardless of input order; quick sort with the first element as
pivot degrades to O(n^2) on such input.
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.
Experiment No: 3
Compare the performances of linear search and binary search for Best case, Average case and
Worst case inputs.
Date:
Objectives: (a) Identify Best, Worst and Average cases of given problem.
(b) Derive time complexity from steps count for different inputs.
Theory:
A linear search, also known as a sequential search, is a method of finding an element within a list.
It checks each element of the list sequentially until a match is found or the whole list has been
searched. A simple approach to implement a linear search is:
1. Begin with the leftmost element of arr[] and one by one compare x with each element.
2. If x matches with an element, then return the index.
3. If x does not match with any of the elements, then return -1.
Implement functions for the linear search and binary search algorithms. Compare both
algorithms on various inputs ranging from 100 to 500 for each case (Best, Average, and
Worst).
Code :
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
// Linear search
int linearSearch(int arr[], int n, int target) {
for (int i = 0; i < n; i++) {
if (arr[i] == target) {
return i;
}
}
return -1;
}
// Binary search
int binarySearch(int arr[], int n, int target) {
int left = 0, right = n - 1;
while (left <= right) {
int mid = left + (right - left) / 2;
if (arr[mid] == target) {
return mid;
}
if (arr[mid] < target) {
left = mid + 1;
} else {
right = mid - 1;
}
}
return -1;
}
int main() {
    int n = 1000000;                        // Size of the array
    int *arr = malloc(n * sizeof(int));     // heap allocation: an array this large would overflow the stack
    int target = 999999;                    // Element to search
    clock_t start, end;
    for (int i = 0; i < n; i++) {           // sorted data, as required by binary search
        arr[i] = i;
    }
    // Linear search
    start = clock();
    int linearResult = linearSearch(arr, n, target);
    end = clock();
    printf("Linear search: index %d, time %f s\n", linearResult,
           (double)(end - start) / CLOCKS_PER_SEC);
    // Binary search
    start = clock();
    int binaryResult = binarySearch(arr, n, target);
    end = clock();
    printf("Binary search: index %d, time %f s\n", binaryResult,
           (double)(end - start) / CLOCKS_PER_SEC);
    free(arr);
    return 0;
}
Observations:
Linear Search:
Time Taken: Linear search exhibits a linear time complexity, taking time proportional to the size
of the array. In the given test case, it takes a few milliseconds (typically less than 10ms) to find
the target element in an array of one million elements.
Observation: Linear search is efficient for small datasets or when the target is relatively close to
the beginning of the array. However, it becomes less efficient as the dataset size increases, and it
may take a noticeable amount of time for larger arrays.
Binary Search:
Time Taken: Binary search demonstrates a logarithmic time complexity, and it is significantly
faster than linear search. In the given test case, it typically takes a fraction of a millisecond to find
the target element in an array of one million elements.
Observation: Binary search is highly efficient, particularly when dealing with sorted datasets. It
significantly reduces the search time as it halves the search space in each iteration. This makes it
suitable for large datasets and is one of the preferred algorithms for searching in sorted arrays.
In summary, the choice of search algorithm should be based on the characteristics of the dataset.
For small datasets or unsorted data, linear search is simple and acceptable. However, for large
datasets or sorted data, binary search is the more efficient option, as it significantly reduces the
time required to locate an element.
Result: Complete the below table based on your implementation of sequential search algorithm
and steps executed by the function.
Chart:
Conclusion:
In conclusion, the performance evaluation of linear search and binary search demonstrates that
binary search is significantly more efficient for large datasets and sorted data. It offers faster
search times due to its logarithmic time complexity, making it the preferred choice for
applications where speed is crucial, while linear search remains suitable for smaller datasets or
unsorted data.
Quiz:
1. Which element should be searched for the best case of binary search algorithm?
Answer: The element at the middle index of the array, which is found in the very first comparison.
2. Which element should be searched for the worst case of binary search algorithm?
Answer: The worst case of the binary search algorithm occurs when searching for an element that is
not present in the array (or one found only after the maximum number of halvings).
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.
Experiment No: 4
Implement functions to print nth Fibonacci number using iteration and recursive method. Compare
the performance of two methods by time taken on various inputs. Also draw a comparative chart.
(Fibonacci series 1, 1, 2, 3, 5, 8….. Here 8 is the 6th Fibonacci number).
Date:
Objectives: (a) Compare the performances of two different versions of same problem.
(b) Find the time complexity of algorithms.
(C) Understand the polynomial and non-polynomial problems
Theory:
The Fibonacci series is the sequence of numbers (also called Fibonacci numbers), where every
number is the sum of the preceding two numbers, such that the first two terms are '0' and '1'. In
some older versions of the series, the term '0' might be omitted. A Fibonacci series can thus be
given as, 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, . . . It can thus be observed that every term can be
calculated by adding the two terms before it. We are ignoring initial zero in the series.
To represent any (n+1)th term in this series, we can give the expression as F(n) = F(n-1) + F(n-2). We can
thus represent a Fibonacci series as shown in the image below.
Code:
#include <stdio.h>
#include <time.h>

/* Iterative nth Fibonacci number (the function header is reconstructed; the listing is truncated in the manual). */
int iterativeFibonacci(int n) {
    int f0 = 1, f1 = 1, f2, i = 2;
    if (n == 1 || n == 2) {
        return 1;
    }
    while (i < n) {
        f2 = f0 + f1;
        f0 = f1;
        f1 = f2;
        i++;
    }
    return f1;
}

int main() {
    int n;
    printf("Enter the value of n: ");
    scanf("%d", &n);
    clock_t start = clock();
    int result = iterativeFibonacci(n);
    clock_t end = clock();
    printf("The %dth Fibonacci number is: %d\n", n, result);
    printf("Time taken: %f s\n", (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}
Code:
#include <stdio.h>
#include <time.h>

/* Recursive nth Fibonacci number (the function header is reconstructed from the %lld call in main). */
long long recursiveFibonacci(int n) {
    if (n <= 0) {
        printf("Invalid input\n"); // Handle invalid input
        return -1;
    } else if (n == 1 || n == 2) {
        return 1;
    } else {
        return recursiveFibonacci(n - 1) + recursiveFibonacci(n - 2);
    }
}

int main() {
    int n;
    printf("Enter the value of n: ");
    scanf("%d", &n);
    clock_t start = clock();
    long long result = recursiveFibonacci(n);
    clock_t end = clock();
    if (result != -1) {
        printf("The %dth Fibonacci number is: %lld\n", n, result);
        printf("Time taken: %f s\n", (double)(end - start) / CLOCKS_PER_SEC);
    }
    return 0;
}
Implement functions of above two versions of Fibonacci series and compare the time taken
by both the functions on various inputs ranging from 10 to 50.
Observations:
Write observation based on time taken by both algorithms.
Result: Complete the below table based on your implementation of Iterative and Recursive
method and time taken by the function.
Chart:
Conclusion:
Quiz:
3. Can you execute the recursive version of the Fibonacci function for larger inputs?
Answer: Not practically. The plain recursive version has exponential time complexity (roughly
O(2^n)), so for larger inputs (for example n > 45) it becomes far too slow, whereas the iterative
version remains fast.
4. What do you mean by polynomial time algorithms and exponential time algorithms?
Answer: Polynomial time: the running time is bounded by a polynomial in the input size, such as
O(n), O(n^2) or O(n^3), and therefore grows relatively slowly as the input size increases.
Exponential time: the running time grows exponentially with the input size, such as O(2^n).
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.
Experiment No: 5
Implement program to solve problem of making a change using dynamic programming.
Date:
Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving
Theory:
Making Change problem is to find change for a given amount using a minimum number of coins
from a set of denominations. If we are given a set of denominations D = {d0, d1, d2, …, dn} and
if we want to change for some amount N, many combinations are possible. Suppose {d1, d2, d5,
d8}, {d0, d2, d4}, {d0, d5, d7} all are feasible solutions but the solution which selects the
minimum number of coins is considered to be an optimal solution. The aim of making a change is
to find a solution with a minimum number of coins / denominations. Clearly, this is an
optimization problem.
General assumption is that infinite coins are available for each denomination. We can select any
denomination any number of times.
Sort all the denominations and start scanning from smallest to largest denomination. In every
iteration i, if current denomination di is acceptable, then 1 coin is added in solution and total
amount is reduced by amount di. Hence,
C[i, j] = 1 + C[i, j – di]
C[i,j] is the minimum number of coins to make change for the amount j. Below figure shows the
content of matrix C.
If the current denomination is larger than the current problem size j, then we have to skip the
denomination and stick with the previously calculated solution. Hence,
C[i, j] = C[i – 1, j]
If above cases are not applicable then we have to stick with choice which returns minimum
number of coin. Mathematically, we formulate the problem as,
C[i, j] = min {C[i – 1, j] , 1 + C[i, j – di]}
Algorithm MAKE_A_CHANGE(d,N)
// d[1…n] = [d1,d2,…,dn] is array of n denominations
// C[1…n, 0…N] is n x N array to hold the solution of sub problems
// N is the problem size, i.e. amount for which change is required
for i ← 1 to n do
C[i, 0] ← 0
end
for i ← 1 to n do
for j ← 1 to N do
if i == 1 and j < d[i] then
C[i, j] ← ∞
else if i == 1 then
C[i, j] ← 1 + C[1, j – d[1]]
else if j < d[i] then
C[i, j] ← C[i – 1, j]
else
C[i, j] ← min (C[i – 1, j], 1 + C[i, j – d[i]])
end
end
end
return C[n, N]
Implement above algorithm and print the matrix C. Your program should return the
number of coins required and its denominations.
Code:
#include <stdio.h>
#include <time.h>
if (j < d[i]) {
C[i][j] = C[i - 1][j];
} else {
C[i][j] = min(C[i - 1][j], 1 + C[i][j - d[i]]);
}
}
}
int j = N;
int i = n - 1;
while (j > 0) {
if (C[i][j] == C[i - 1][j]) {
i = i - 1;
} else {
printf("Using one %d coin\n", d[i]);
j = j - d[i];
}
}
int main() {
int d[] = {1, 3, 5};
int N = 9;
int n = sizeof(d) / sizeof(d[0]);
int C[n][N + 1];
return 0;
}
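The listing above is truncated in the manual: the min() helper, the table initialisation and the outer loops are missing. A self-contained sketch of the full tabulation for the same denominations {1, 3, 5} and amount 9, following the MAKE_A_CHANGE algorithm above, is:

#include <stdio.h>
#include <limits.h>

static int min(int a, int b) { return (a < b) ? a : b; }

int main() {
    int d[] = {1, 3, 5};                 /* denominations, sorted ascending */
    int n = sizeof(d) / sizeof(d[0]);
    int N = 9;                           /* amount for which change is required */
    int C[n][N + 1];                     /* C[i][j] = min coins for amount j using d[0..i] */

    for (int i = 0; i < n; i++)
        C[i][0] = 0;
    for (int i = 0; i < n; i++) {
        for (int j = 1; j <= N; j++) {
            if (i == 0 && j < d[0])
                C[i][j] = INT_MAX / 2;   /* "infinity": change not possible with d[0] alone */
            else if (i == 0)
                C[i][j] = 1 + C[0][j - d[0]];
            else if (j < d[i])
                C[i][j] = C[i - 1][j];
            else
                C[i][j] = min(C[i - 1][j], 1 + C[i][j - d[i]]);
        }
    }

    /* Print the matrix C. */
    for (int i = 0; i < n; i++) {
        for (int j = 0; j <= N; j++)
            printf("%3d ", C[i][j]);
        printf("\n");
    }
    printf("Minimum coins for %d = %d\n", N, C[n - 1][N]);

    /* Trace back which denominations were used. */
    int i = n - 1, j = N;
    while (j > 0) {
        if (i > 0 && C[i][j] == C[i - 1][j]) {
            i = i - 1;                   /* denomination d[i] not needed here */
        } else {
            printf("Using one %d coin\n", d[i]);
            j = j - d[i];
        }
    }
    return 0;
}

For d = {1, 3, 5} and N = 9 this prints a minimum of 3 coins (for example 3 + 3 + 3).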
Result
Conclusion:
Basic operation in this above algorithm is performed within nested for loops. Hence the time
complexity of this algorithm is O(n*N), where n represents denominations and N represents
number for which we want to find change.
Quiz:
1. What is the time complexity of above algorithm?
Answer: The time complexity of above algorithm is O(n*N).
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.
3. https://codecrucks.com/making-change-problem-using-dynamic-programming/
Experiment No: 6
Implement program of chain matrix multiplication using dynamic programming.
Date:
Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving
Theory:
Given a sequence of matrices A1, A2,...,An and dimensions p0, p1,...,pn where Ai is of dimension
pi−1 × pi, determine the order of multiplication (represented, say, as a binary tree) that minimizes
the number of operations.
This algorithm does not perform the multiplications; it just determines the best order in which to
perform the multiplications
Two matrices are called compatible only if the number of columns in the first matrix and the
number of rows in the second matrix are the same. Matrix multiplication is possible only if they
are compatible. Let A and B be two compatible matrices of dimensions p x q and q x r
Suppose dimension of three matrices are:
A1 = 5 x 4
A2 = 4 x 6
A3 = 6 x 2
Matrix multiplication is associative, so the chain can be parenthesized in different orders. For the
dimensions above, (A1 A2) A3 costs 5 × 4 × 6 + 5 × 6 × 2 = 120 + 60 = 180 scalar multiplications,
while A1 (A2 A3) costs 4 × 6 × 2 + 5 × 4 × 2 = 48 + 40 = 88.
The answer of both multiplication sequences would be the same, but the numbers of
multiplications are different. This leads to the question, what order should be selected for a chain
of matrices to minimize the number of multiplications?
Let us denote the number of alternative parenthesizations of a sequence of n matrices by p(n).
When n = 1, there is only one matrix and therefore only one way to parenthesize the matrix. When
n ≥ 2, a fully parenthesized matrix product is the product of two fully parenthesized matrix
sub-products, and the split between the two subproducts may occur between the k and (k +
1)st matrices for any k = 1, 2, 3…, n – 1. Thus we obtain the recurrence.
The solution to the recurrence is the sequence of Catalan numbers, which grows as Ω(4^n / n^(3/2)),
i.e. roughly Ω(2^n). Thus, the number of solutions is exponential in n, and a brute-force attempt to
find the optimal order is infeasible.
Any parenthesizations of the product Ai Ai + 1 … Aj must split the product between Ak and Ak+1 for
some integer k in the range i ≤ k < j. That is for some value of k, we first compute the matrices
Ai….k and Ak + 1…j and then multiply them together to produce the final product Ai…j The cost of
computing these parenthesizations is the cost of computing Ai….k , plus the cost of computing Ak +
1…j plus the cost of multiplying them together.
We can define m[i, j] recursively as follows. If i == j, the problem is trivial; the chain consists of
only one matrix Ai…i = Ai. No scalar multiplications are required. Thus m[i, i] = 0 for i = 1, 2, …, n.
To compute m[i, j] when i < j, we take advantage of the structure of an optimal solution of the first
step. Let us assume that the optimal parenthesizations split the product Ai Ai + 1…Aj between
Ak and Ak + 1, where i ≤ k < j. Then m[i, j] is equal to the minimum cost for computing the
subproducts Ai…k and Ak + 1…j plus the cost of multiplying these two matrices together.
Code:
#include <stdio.h>
#include <time.h>
if (q < m[i][j]) {
m[i][j] = q;
s[i][j] = k;
}
}
}
}
int main() {
int arr[] = {5, 4, 6, 2, 7,10,30};
int size = sizeof(arr) / sizeof(arr[0]);
clock_t start_time = clock();
printf("Minimum number of multiplications is %d\n", MatrixChainOrder(arr, size));
clock_t end_time = clock();
return 0;
}
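The MatrixChainOrder function header and its surrounding loops are missing from the listing above. A self-contained sketch of the standard O(N^3) tabulation, using the same dimension array as main() (the timing calls are omitted for brevity), is:

#include <stdio.h>
#include <limits.h>

/* p[] holds the chain dimensions: matrix Ai is p[i-1] x p[i], for i = 1..n-1. */
int MatrixChainOrder(int p[], int n) {
    int m[n][n];   /* m[i][j] = minimum scalar multiplications for Ai..Aj */
    int s[n][n];   /* s[i][j] = index k at which the optimal split occurs  */
    for (int i = 1; i < n; i++)
        m[i][i] = 0;
    for (int len = 2; len < n; len++) {          /* chain length */
        for (int i = 1; i < n - len + 1; i++) {
            int j = i + len - 1;
            m[i][j] = INT_MAX;
            for (int k = i; k < j; k++) {
                int q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j];
                if (q < m[i][j]) {
                    m[i][j] = q;
                    s[i][j] = k;
                }
            }
        }
    }
    return m[1][n - 1];
}

int main() {
    int arr[] = {5, 4, 6, 2, 7, 10, 30};
    int size = sizeof(arr) / sizeof(arr[0]);
    printf("Minimum number of multiplications is %d\n", MatrixChainOrder(arr, size));
    return 0;
}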
Observations:
Write observation based on whether this algorithm returns optimal number of multiplications or
not on various inputs.
Yes, the above algorithm returns optimal number of multiplications on various inputs
Result
Conclusion: The above algorithm returns the optimal number of multiplications for various inputs,
and its time complexity is O(N^3), where N is the number of matrices in the chain.
Quiz:
1. What is the time complexity of above algorithm?
Answer: The time complexity of the above algorithm is O(N^3), where N is the number of matrices in the chain.
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.
Experiment No: 7
Implement program to solve LCS (Longest Common Subsequence) problem using dynamic
programing.
Date:
Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving
Theory:
The Longest Common Subsequence (LCS) problem is a classic computer science problem that
involves finding the longest subsequence that is common to two given sequences.
A subsequence is a sequence that can be derived from another sequence by deleting some or no
elements without changing the order of the remaining elements. For example, given the sequence
"ABCDE", "ACE" is a subsequence of "ABCDE", but "AEC" is not a subsequence.
Given two sequences X and Y, the LCS problem involves finding the longest common
subsequence (LCS) of X and Y. The LCS need not be contiguous in the original sequences, but it
must be in the same order. For example, given the sequences "ABCDGH" and "AEDFHR", the
LCS is "ADH" with length 3.
Naïve Method:
Let X be a sequence of length m and Y a sequence of length n. Check for every subsequence
of X whether it is a subsequence of Y, and return the longest common subsequence found. There
are 2^m subsequences of X. Testing whether a given sequence is a subsequence of Y takes O(n)
time. Thus, the naïve algorithm would take O(n · 2^m) time.
Step 3 − Once the table is filled, backtrack from the last value in the table. Backtracking here is
done by tracing the path where the counter incremented first.
Step 4 − The longest common subsequence is obtained by noting the elements along the traced path.
Consider the example, we have two strings X=BDCB and Y=BACDB to find the longest common
subsequence. Following table shows the construction of LCS table.
Once the values are filled, the path is traced back from the last value in the table at T[5, 4].
Algorithm is as below:
if X[i] == Y[j]
C[i, j] := C[i - 1, j - 1] + 1
B[i, j] := ‘D’
else
if C[i - 1, j] ≥ C[i, j - 1]
C[i, j] := C[i - 1, j]
B[i, j] := ‘U’
else
C[i, j] := C[i, j - 1]
B[i, j] := ‘L’
return C and B
Implement program to solve LCS (Longest Common Subsequence) problem using dynamic
programing.
Code:
#include <stdio.h>
#include <string.h>
#include <time.h>
return 0;
}
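Only the skeleton of the LCS program survives above. A self-contained sketch of the tabulation described in the theory section, with a trace-back that prints one LCS (the example strings are illustrative), is:

#include <stdio.h>
#include <string.h>

int main() {
    char X[] = "ABCDGH";
    char Y[] = "AEDFHR";
    int m = strlen(X), n = strlen(Y);
    int C[m + 1][n + 1];                 /* C[i][j] = LCS length of X[0..i-1] and Y[0..j-1] */

    for (int i = 0; i <= m; i++) {
        for (int j = 0; j <= n; j++) {
            if (i == 0 || j == 0)
                C[i][j] = 0;
            else if (X[i - 1] == Y[j - 1])
                C[i][j] = C[i - 1][j - 1] + 1;
            else
                C[i][j] = (C[i - 1][j] >= C[i][j - 1]) ? C[i - 1][j] : C[i][j - 1];
        }
    }
    printf("Length of LCS = %d\n", C[m][n]);

    /* Trace back from C[m][n] to recover one LCS. */
    int len = C[m][n];
    char lcs[len + 1];
    lcs[len] = '\0';
    int i = m, j = n;
    while (i > 0 && j > 0) {
        if (X[i - 1] == Y[j - 1]) {
            lcs[--len] = X[i - 1];
            i--; j--;
        } else if (C[i - 1][j] >= C[i][j - 1]) {
            i--;
        } else {
            j--;
        }
    }
    printf("LCS = %s\n", lcs);           /* prints "ADH" for these strings */
    return 0;
}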
Observations:
Write observation based on whether this algorithm returns optimal answer or not on various
inputs.
Ans : This method generates optimal answer for various inputs.
Result
Conclusion:
The tabulation method to find LCS generates optimal answer for various types of inputs
Quiz:
1. What is the time complexity of above algorithm?
Answer: The time complexity of above algorithm is O(m*n)
3. Does Dynamic programming approach to find LCS perform well compare to naïve approach?
Answer: Yes; the dynamic-programming approach runs in O(m · n) time, whereas the naïve approach takes O(n · 2^m) time.
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.
Experiment No: 8
Implement program to solve Knapsack problem using dynamic programming.
Date:
Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving
Theory:
The knapsack problem is useful in solving resource allocation problem. Let X = < x1, x2, x3, . . . . .
, xn> be the set of n items. Sets W = <w1, w2, w3, . . . , wn> and V = < v1, v2, v3, ....... , vn> are
weight and value associated with each item in X. Knapsack capacity is M unit.
The knapsack problem is to find the set of items which maximizes the profit such that collective
weight of selected items does not cross the knapsack capacity. Select items from X and fill the
knapsack such that it would maximize the profit.
Knapsack problem has two variations. 0/1 knapsack, that does not allow breaking of items. Either
add an entire item or reject it. It is also known as a binary knapsack. Fractional knapsack allows
breaking of items. Profit will be earned proportionally.
Following are the steps to implement binary knapsack using dynamic programming.
The above algorithm will just tell us the maximum value we can earn with dynamic programming.
It does not speak anything about which items should be selected. We can find the items that give
optimum result using the following algorithm.
Algorithm TRACE_KNAPSACK(w, v, M)
// w is array of weight of n items
// v is array of value of n items
// M is the knapsack capacity
SW ← { }
SP ← { }
i←n
j←M
while ( j> 0 ) do
if (V[i, j] == V[i – 1, j]) then
i←i–1
else
V[i, j] ← V[i, j] – vi
j ← j – w[i]
SW ← SW +w[i]
SP ← SP +v[i]
end
end
int j = W;
int i = n;
    while (i > 0 && j > 0) {
        if (k[i][j] == k[i - 1][j]) {
            i = i - 1;
        } else {
            printf("In the knapsack, we have one item of %d weight and value %d\n", w[i - 1], v[i - 1]);
            j = j - w[i - 1];
            i = i - 1;      /* each item can be taken at most once in 0/1 knapsack */
        }
    }
return k[n][W];
}
int main() {
int n = 5;
int w[] = {1, 2, 3, 4, 5};
int v[] = {5, 10, 17, 25, 30};
int W = 10;
return 0;
}
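The table-building part of the knapsack listing is missing in the manual. A self-contained sketch of the 0/1 knapsack tabulation with the same item data, including a trace-back that moves to the previous row after taking an item, is:

#include <stdio.h>

static int max(int a, int b) { return (a > b) ? a : b; }

int knapsack(int W, int w[], int v[], int n) {
    int k[n + 1][W + 1];              /* k[i][j] = best value using first i items with capacity j */
    for (int i = 0; i <= n; i++) {
        for (int j = 0; j <= W; j++) {
            if (i == 0 || j == 0)
                k[i][j] = 0;
            else if (w[i - 1] <= j)
                k[i][j] = max(v[i - 1] + k[i - 1][j - w[i - 1]], k[i - 1][j]);
            else
                k[i][j] = k[i - 1][j];
        }
    }

    /* Trace back the selected items. */
    int i = n, j = W;
    while (i > 0 && j > 0) {
        if (k[i][j] == k[i - 1][j]) {
            i = i - 1;                /* item i not taken */
        } else {
            printf("In the knapsack, we have one item of %d weight and value %d\n",
                   w[i - 1], v[i - 1]);
            j = j - w[i - 1];
            i = i - 1;                /* each item can be taken at most once */
        }
    }
    return k[n][W];
}

int main() {
    int n = 5;
    int w[] = {1, 2, 3, 4, 5};
    int v[] = {5, 10, 17, 25, 30};
    int W = 10;
    printf("Maximum value = %d\n", knapsack(W, w, v, n));   /* 60 for this data */
    return 0;
}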
Observations:
Write observation based on whether this algorithm returns the optimal answer or not on various
inputs. The above algorithm solves the 0/1 knapsack problem, in which each item is either selected
whole or rejected. Because the dynamic-programming table considers every item/capacity
combination, it returns the optimal (maximum) total value for every valid input.
Result
Conclusion:
The above algorithm gives the maximum value that can be carried in the knapsack, together with the
selected items, in O(n · W) time. The fractional variant of the problem, in which items may be
broken, can be solved even more simply with a greedy approach (Fractional Knapsack Problem,
Experiment 9).
Quiz:
1. What is the time complexity of above binary knapsack algorithm?
Answer: The time complexity of binary knapsack algorithm is O(N*W) where N denotes the
number of items available and W denotes the capacity of knapsack.
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.
Experiment No: 9
Implement program for solution of fractional Knapsack problem using greedy design technique.
Date:
Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving
Theory:
Knapsack problem is as stated below:
Given a set of items, each having different weight and value or profit associated with it. Find the
set of items such that the total weight is less than or equal to a capacity of the knapsack and the
total value earned is as large as possible.
Brute-force approach: The brute-force approach tries all the possible solutions with all the
different fractions but it is a time-consuming approach.
Greedy approach: In Greedy approach, we calculate the ratio of profit/weight, and accordingly, we
will select the item. The item with the highest ratio would be selected first.
Following are the steps to implement fractional knapsack using greedy design strategy.
Implement the program based on above logic for the solution of fractional knapsack
problem.
Code:
#include <stdio.h>
#include <stdlib.h>
int main() {
int n, capacity;
printf("Enter the number of items: ");
scanf("%d", &n);
struct Item items[n];
fractionalKnapsack(items, n, capacity);
return 0;
}
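The struct Item definition and the fractionalKnapsack function are missing from the listing above. A self-contained greedy sketch that sorts by value-to-weight ratio and then fills the knapsack, using a fixed illustrative item set instead of user input, is:

#include <stdio.h>
#include <stdlib.h>

struct Item {
    int value;
    int weight;
};

/* Sort items by decreasing value/weight ratio. */
static int compareItems(const void *a, const void *b) {
    const struct Item *x = (const struct Item *)a;
    const struct Item *y = (const struct Item *)b;
    double rx = (double)x->value / x->weight;
    double ry = (double)y->value / y->weight;
    if (rx < ry) return 1;
    if (rx > ry) return -1;
    return 0;
}

void fractionalKnapsack(struct Item items[], int n, int capacity) {
    qsort(items, n, sizeof(struct Item), compareItems);
    double totalValue = 0.0;
    int remaining = capacity;
    for (int i = 0; i < n && remaining > 0; i++) {
        if (items[i].weight <= remaining) {
            remaining -= items[i].weight;            /* take the whole item */
            totalValue += items[i].value;
        } else {
            totalValue += items[i].value * ((double)remaining / items[i].weight);
            remaining = 0;                           /* take only the fraction that fits */
        }
    }
    printf("Maximum value in the knapsack = %.2f\n", totalValue);
}

int main() {
    struct Item items[] = {{60, 10}, {100, 20}, {120, 30}};
    int n = sizeof(items) / sizeof(items[0]);
    int capacity = 50;
    fractionalKnapsack(items, n, capacity);          /* expected output: 240.00 */
    return 0;
}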
Observations:
The greedy strategy returns an optimal answer for the fractional knapsack problem: once the items
are sorted by value-to-weight ratio, taking as much as possible of the highest-ratio item first is
provably optimal, regardless of the order in which the items were originally supplied.
Result
Conclusion:
In conclusion, the greedy algorithm is both simple and optimal for the fractional knapsack problem
because items can be taken partially; sorting by value-to-weight ratio dominates its running time.
(For the 0/1 variant, where items cannot be broken, the greedy strategy is not guaranteed to be
optimal and dynamic programming is used instead.)
Quiz:
1. What is the time complexity of above knapsack algorithm?
Answer: The time complexity of the provided fractional knapsack algorithm is O(n log n),
primarily due to the sorting step, where 'n' represents the number of items.
3. What is the time complexity solving knapsack problem using brute-force method?
Answer: The time complexity for solving the knapsack problem using a brute-force method is
exponential, specifically O(2^n), where 'n' represents the number of items.
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.
Experiment No: 10
Implement program for solution of Making Change problem using greedy design technique.
Date:
Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving
Theory:
Making Change problem is to find change for a given amount using a minimum number of coins
from a set of denominations. If we are given a set of denominations D = {d0, d1, d2, …, dn} and if
we want to change for some amount N, many combinations are possible. {d1, d2, d5, d8}, {d0, d2,
d4}, {d0, d5, d7} can be considered as all feasible solutions if sum of their denomination is N. The
aim of making a change is to find a solution with a minimum number of coins / denominations.
Following are the steps to solve coin change problem using greedy design technique
Implement the program based on above steps for the solution of make a change using greedy
problem.
Code:
#include <stdio.h>
if (coinCounts[i] > 0) {
printf("%d x %d ", denominations[i], coinCounts[i]);
}
}
printf("\n");
}
int main() {
int denominations[] = {25, 10, 5, 1}; // Example denominations in descending order
int n = sizeof(denominations) / sizeof(denominations[0]);
int amount;
makeChange(denominations, n, amount);
return 0;
}
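The makeChange function header and its greedy loop are missing from the listing above. A self-contained sketch, assuming denominations are supplied in descending order as in main() (the amount is a fixed illustrative value instead of user input), is:

#include <stdio.h>

/* Greedy change-making: repeatedly take as many of the largest denomination as fit.
   Optimal for canonical systems such as {25, 10, 5, 1}, but not for arbitrary sets. */
void makeChange(int denominations[], int n, int amount) {
    int coinCounts[n];
    printf("Change for %d: ", amount);
    for (int i = 0; i < n; i++) {
        coinCounts[i] = amount / denominations[i];
        amount = amount % denominations[i];
        if (coinCounts[i] > 0) {
            printf("%d x %d ", denominations[i], coinCounts[i]);
        }
    }
    printf("\n");
}

int main() {
    int denominations[] = {25, 10, 5, 1};    /* example denominations in descending order */
    int n = sizeof(denominations) / sizeof(denominations[0]);
    int amount = 93;                         /* illustrative amount */
    makeChange(denominations, n, amount);    /* prints: 25 x 3 10 x 1 5 x 1 1 x 3 */
    return 0;
}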
Observations:
The greedy algorithm for the coin-change problem returns an optimal answer for canonical coin
systems such as {25, 10, 5, 1}, where repeatedly taking the largest denomination that still fits is
always safe. For arbitrary denomination sets it can return a suboptimal number of coins; for
example, with denominations {1, 3, 4} and amount 6 the greedy choice gives 4 + 1 + 1 (three coins)
whereas 3 + 3 (two coins) is optimal. The dynamic-programming solution of Experiment 5 is optimal
for every denomination set, so the effectiveness of the greedy method depends on the specific
characteristics of the denominations.
Result
Conclusion: The greedy algorithm for the coin-change problem is fast and returns optimal solutions
for canonical denomination systems, but it is not universally optimal: for some denomination sets it
produces more coins than necessary. Its performance is therefore closely tied to the specific
characteristics of the denominations, making it a valuable approach in many practical scenarios but
not a general replacement for the dynamic-programming method.
Quiz:
1. What is the time complexity of the above Making Change algorithm?
Answer: With the division-based implementation shown above, the greedy loop runs once per
denomination, so the time complexity is O(n), where n is the number of denominations (plus
O(n log n) if the denominations must first be sorted).
4. What is the difference between the unbounded coin change problem and the limited coin
change problem?
Answer:The difference between the unbounded coin change problem and the limited coin change
problem is that in the unbounded version, you can use an unlimited number of each coin
denomination to make change, while in the limited version, you are constrained by a finite supply
of each coin denomination.
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.
Experiment No: 11
Implement a program for Kruskal's algorithm to find a minimum spanning tree.
Date:
Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving
Objectives: (a) Understand how to use Kruskal's algorithm to find the minimum spanning tree.
(b) Solve the optimization based problem.
(c) Find the time complexity of the algorithm.
Theory:
In graph theory, a minimum spanning tree (MST) of an undirected, weighted graph is a tree that
connects all the vertices of the graph with the minimum possible total edge weight. In other
words, an MST is a subset of the edges of the graph that form a tree and have the smallest sum of
weights.
#include <stdio.h>
#include <stdlib.h>
// Initialize subsets
for (int i = 0; i < V; i++) {
subsets[i].parent = i;
subsets[i].rank = 0;
}
// If including this edge does not create a cycle, add it to the result and increment the index
if (x != y) {
result[e++] = next_edge;
unionSets(subsets, x, y);
}
}
free(subsets);
}
int main() {
int V = 4; // Number of vertices
int E = 5; // Number of edges
struct Edge edges[] = {
{0, 1, 10},
{0, 2, 6},
{0, 3, 5},
{1, 3, 15},
{2, 3, 4}
};
kruskalMST(edges, V, E);
return 0;
}
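The Edge and Subset structures, the find/union helpers and the edge-sorting step are missing from the listing above. A self-contained sketch of Kruskal's algorithm with path compression and union by rank, on the same example graph, is:

#include <stdio.h>
#include <stdlib.h>

struct Edge { int src, dest, weight; };
struct Subset { int parent, rank; };

/* Find the root of the set containing i, with path compression. */
static int find(struct Subset subsets[], int i) {
    if (subsets[i].parent != i)
        subsets[i].parent = find(subsets, subsets[i].parent);
    return subsets[i].parent;
}

/* Union by rank of the sets rooted at x and y. */
static void unionSets(struct Subset subsets[], int x, int y) {
    int rx = find(subsets, x), ry = find(subsets, y);
    if (subsets[rx].rank < subsets[ry].rank)
        subsets[rx].parent = ry;
    else if (subsets[rx].rank > subsets[ry].rank)
        subsets[ry].parent = rx;
    else {
        subsets[ry].parent = rx;
        subsets[rx].rank++;
    }
}

static int compareEdges(const void *a, const void *b) {
    return ((const struct Edge *)a)->weight - ((const struct Edge *)b)->weight;
}

void kruskalMST(struct Edge edges[], int V, int E) {
    struct Edge result[V - 1];
    struct Subset *subsets = malloc(V * sizeof(struct Subset));
    for (int v = 0; v < V; v++) {
        subsets[v].parent = v;
        subsets[v].rank = 0;
    }
    qsort(edges, E, sizeof(struct Edge), compareEdges);   /* cheapest edges first */

    int e = 0;                                            /* edges accepted so far */
    for (int i = 0; i < E && e < V - 1; i++) {
        struct Edge next_edge = edges[i];
        int x = find(subsets, next_edge.src);
        int y = find(subsets, next_edge.dest);
        if (x != y) {                                     /* no cycle: accept the edge */
            result[e++] = next_edge;
            unionSets(subsets, x, y);
        }
    }
    printf("Edges in the MST:\n");
    int total = 0;
    for (int i = 0; i < e; i++) {
        printf("%d -- %d  (weight %d)\n", result[i].src, result[i].dest, result[i].weight);
        total += result[i].weight;
    }
    printf("Total weight = %d\n", total);
    free(subsets);
}

int main() {
    int V = 4, E = 5;
    struct Edge edges[] = {
        {0, 1, 10}, {0, 2, 6}, {0, 3, 5}, {1, 3, 15}, {2, 3, 4}
    };
    kruskalMST(edges, V, E);      /* MST: 2-3, 0-3, 0-1 with total weight 19 */
    return 0;
}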
Observations:
Kruskal's algorithm returns a minimum spanning tree for every connected, undirected, weighted
graph; negative edge weights are handled correctly, since the cut property does not require
non-negative weights. When several edges have equal weight, different runs may pick different
edges and therefore different spanning trees, but all of them have the same minimum total weight.
On a disconnected graph the algorithm yields a minimum spanning forest.
Result
Conclusion:
Kruskal's algorithm reliably returns a spanning tree of minimum total weight for connected,
undirected, weighted graphs, including graphs with negative edge weights. Ties in edge weights may
lead to different but equally optimal spanning trees, and on disconnected graphs the result is a
minimum spanning forest rather than a single tree. Its running time is dominated by sorting the
edges, giving O(E log E).
Quiz:
1. What is the time complexity of krushkal’s algorithm?
Answer: The time complexity of Kruskal's algorithm is O(E log E), where 'E' represents the
number of edges in the graph.
3. What data structure is typically used to keep track of the connected components in Kruskal's
algorithm?
Answer: A disjoint-set data structure (typically implemented as a union-find data structure) is
used to keep track of the connected components in Kruskal's algorithm.
4. When does Kruskal's algorithm stop adding edges to the minimum spanning tree?
Answer: Kruskal's algorithm stops adding edges to the minimum spanning tree when the number
of edges added reaches (V - 1), where 'V' is the number of vertices in the graph, ensuring the
formation of a spanning tree.
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.
Experiment No: 12
Implement a program for Prim's algorithm to find a minimum spanning tree.
Date:
Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving
Objectives: (a) Understand how to use Prim's algorithm to find the minimum spanning tree.
(b) Solve the optimization based problem.
(c) Find the time complexity of the algorithm.
Theory:
In graph theory, a minimum spanning tree (MST) of an undirected, weighted graph is a tree that
connects all the vertices of the graph with the minimum possible total edge weight. In other
words, an MST is a subset of the edges of the graph that form a tree and have the smallest sum of
weights.
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
return min_index;
}
key[0] = 0;
parent[0] = -1;
int main() {
int V = 5; // Number of vertices
int graph[V][V] = {
{0, 2, 0, 6, 0},
{2, 0, 3, 8, 5},
{0, 3, 0, 0, 7},
{6, 8, 0, 0, 9},
{0, 5, 7, 9, 0}
};
primMST(graph, V);
return 0;
}
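The minKey helper and the body of primMST are missing from the listing above. A self-contained adjacency-matrix sketch of Prim's algorithm on the same example graph is given below; note that in C a matrix with a variable dimension cannot take an initializer, so the vertex count is a #define here:

#include <stdio.h>
#include <stdbool.h>
#include <limits.h>

#define V 5        /* number of vertices (compile-time constant so the matrix can be initialised) */

/* Index of the cheapest vertex not yet included in the MST. */
static int minKey(int key[], bool mstSet[]) {
    int min = INT_MAX, min_index = -1;
    for (int v = 0; v < V; v++) {
        if (!mstSet[v] && key[v] < min) {
            min = key[v];
            min_index = v;
        }
    }
    return min_index;
}

void primMST(int graph[V][V]) {
    int parent[V];     /* parent[i] = vertex that connects i to the tree */
    int key[V];        /* cheapest known edge into each vertex */
    bool mstSet[V];    /* true once a vertex is in the MST */

    for (int i = 0; i < V; i++) {
        key[i] = INT_MAX;
        mstSet[i] = false;
    }
    key[0] = 0;        /* start from vertex 0 */
    parent[0] = -1;

    for (int count = 0; count < V - 1; count++) {
        int u = minKey(key, mstSet);
        mstSet[u] = true;
        for (int v = 0; v < V; v++) {
            /* graph[u][v] == 0 means "no edge" in this representation */
            if (graph[u][v] && !mstSet[v] && graph[u][v] < key[v]) {
                parent[v] = u;
                key[v] = graph[u][v];
            }
        }
    }
    printf("Edge   Weight\n");
    for (int i = 1; i < V; i++)
        printf("%d - %d    %d\n", parent[i], i, graph[i][parent[i]]);
}

int main() {
    int graph[V][V] = {
        {0, 2, 0, 6, 0},
        {2, 0, 3, 8, 5},
        {0, 3, 0, 0, 7},
        {6, 8, 0, 0, 9},
        {0, 5, 7, 9, 0}
    };
    primMST(graph);    /* MST edges: 0-1, 1-2, 0-3, 1-4 with total weight 16 */
    return 0;
}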
Observations:
The provided Prim's algorithm consistently returns the minimum spanning tree (MST) when
applied to various inputs with non-negative edge weights.
Result
Conclusion: Prim's algorithm returns a minimum spanning tree for any connected, undirected,
weighted graph; unlike Dijkstra's shortest-path algorithm, it does not require non-negative edge
weights. It constructs the MST by repeatedly adding the cheapest edge that connects a new vertex to
the tree. With the adjacency-matrix implementation above its time complexity is O(V^2); with a
binary heap and adjacency lists it improves to O(E log V).
Quiz:
3. When does Prim's algorithm stop adding edges to the minimum spanning tree?
Answer: Prim's algorithm stops adding edges to the minimum spanning tree when all vertices are
included in the tree.
4. What data structure is typically used to keep track of the vertices in Prim's algorithm?
Answer: In Prim's algorithm, a data structure typically used to keep track of the vertices is a
priority queue or a binary heap.
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.
Experiment No: 13
Implement DFS and BFS graph traversal techniques and write their time complexities.
Date:
Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving
Theory:
Depth First Search is a graph traversal algorithm that explores as far as possible along each branch
before backtracking. It is used to search for a node or a path in a graph, and is implemented
recursively or iteratively.
The algorithm starts at a specified node and visits all the nodes in the graph by exploring each
branch as far as possible before backtracking to the previous node. When a node is visited, it is
marked as visited to prevent loops.
Implement DFS and BFS graph traversal techniques and write their time complexities.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
struct node {
int vertex;
struct node* next;
};
struct Graph {
int numVertices;
int* visited;
struct node** adjLists;
};
// Depth-first traversal (the function header and adjacency-list loop are reconstructed;
// the original listing is truncated in the manual)
void DFS(struct Graph* graph, int vertex) {
    struct node* temp = graph->adjLists[vertex];
    graph->visited[vertex] = 1;
    printf("Visited %d\n", vertex);
    while (temp != NULL) {
        int connectedVertex = temp->vertex;
        if (graph->visited[connectedVertex] == 0) {
            DFS(graph, connectedVertex);
        }
        temp = temp->next;
    }
}
// Create a node
struct node* createNode(int v) {
struct node* newNode = malloc(sizeof(struct node));
newNode->vertex = v;
newNode->next = NULL;
return newNode;
}
// Create a graph
struct Graph* createGraph(int vertices) {
struct Graph* graph = malloc(sizeof(struct Graph));
graph->numVertices = vertices;
    graph->adjLists = malloc(vertices * sizeof(struct node*));   /* allocations reconstructed */
    graph->visited = malloc(vertices * sizeof(int));
int i;
for (i = 0; i < vertices; i++) {
graph->adjLists[i] = NULL;
graph->visited[i] = 0;
}
return graph;
}
// Add an edge
void addEdge(struct Graph* graph, int src, int dest) {
// Add an edge from src to dest
struct node* newNode = createNode(dest);
newNode->next = graph->adjLists[src];
graph->adjLists[src] = newNode;
}
int main() {
    clock_t start, end;
    struct Graph* graph = createGraph(4);
    addEdge(graph, 0, 1);
    addEdge(graph, 0, 2);
    addEdge(graph, 1, 2);
    addEdge(graph, 2, 3);
    printGraph(graph);   /* printGraph is defined in the full listing; it is omitted in this excerpt */
    start = clock();
    DFS(graph, 2);
    end = clock();
    printf("Time taken: %f s\n", (double)(end - start) / CLOCKS_PER_SEC);
return 0;
}
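For the breadth-first traversal required by this experiment, a queue-based sketch over the same adjacency-list Graph structure is given below (a simple array-based queue and the same example graph are assumed):

#include <stdio.h>
#include <stdlib.h>

struct node {
    int vertex;
    struct node* next;
};

struct Graph {
    int numVertices;
    int* visited;
    struct node** adjLists;
};

struct Graph* createGraph(int vertices) {
    struct Graph* graph = malloc(sizeof(struct Graph));
    graph->numVertices = vertices;
    graph->adjLists = malloc(vertices * sizeof(struct node*));
    graph->visited = calloc(vertices, sizeof(int));
    for (int i = 0; i < vertices; i++)
        graph->adjLists[i] = NULL;
    return graph;
}

void addEdge(struct Graph* graph, int src, int dest) {
    struct node* newNode = malloc(sizeof(struct node));
    newNode->vertex = dest;
    newNode->next = graph->adjLists[src];
    graph->adjLists[src] = newNode;
}

/* BFS: visit vertices level by level using a simple array-based queue. */
void BFS(struct Graph* graph, int start) {
    int* queue = malloc(graph->numVertices * sizeof(int));
    int front = 0, rear = 0;
    graph->visited[start] = 1;
    queue[rear++] = start;
    while (front < rear) {
        int current = queue[front++];
        printf("Visited %d\n", current);
        for (struct node* temp = graph->adjLists[current]; temp != NULL; temp = temp->next) {
            if (graph->visited[temp->vertex] == 0) {
                graph->visited[temp->vertex] = 1;
                queue[rear++] = temp->vertex;
            }
        }
    }
    free(queue);
}

int main() {
    struct Graph* graph = createGraph(4);
    addEdge(graph, 0, 1);
    addEdge(graph, 0, 2);
    addEdge(graph, 1, 2);
    addEdge(graph, 2, 3);
    BFS(graph, 2);       /* same start vertex as the DFS example */
    return 0;
}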
Observations:
In BFS, the algorithm traverses the node closest to the starting node first, while in DFS, it
explores as deeply as possible along each branch before backtracking.
Result
BFS
DFS
Conclusion: DFS and BFS are essential graph traversal algorithms, with BFS exploring level by
level for shortest paths and DFS diving deep before backtracking. Both have linear time
complexities, making them versatile for graph-related tasks. The choice depends on the problem's
nature.
Quiz:
1. What data structure is typically used in the iterative implementation of DFS and BFS?
Answer: In the iterative implementation of DFS and BFS, a stack is typically used for DFS, and a
queue is used for BFS.
2. What is the time complexity of DFS on a graph with V vertices and E edges?
Answer: The time complexity of DFS on a graph with V vertices and E edges is O(V + E), where
'V' represents the number of vertices, and 'E' is the number of edges.
3. What is the time complexity of BFS on a graph with V vertices and E edges?
Answer: The time complexity of BFS on a graph with V vertices and E edges is also O(V + E).
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.
Experiment No: 14
Implement Rabin-Karp string matching algorithm.
Date:
Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving
Theory:
It is a string searching algorithm that is named after its authors Richard M. Karp and Michael O.
Rabin. This algorithm is used to find all the occurrences of a given pattern ‘P’ in a given string
‘S’ in O(Ns + Np) time in the average case, where ‘Ns’ and ‘Np’ are the lengths of ‘S’ and ‘P’,
respectively.
We can see that “xyz” is occurring in “cxyzghxyzvjkxyz” at three positions. So, we have to print
that pattern ‘P’ is occurring in string ‘S’ at indices 1, 6, and 12.
The Naive Pattern Searching (brute force) algorithm slides the pattern over the text one position at a
time and checks for a match at each position. If a match is found, it still slides by 1 to check for
subsequent matches. This approach has a time complexity of O(Np × (Ns − Np + 1)).
The Rabin-Karp algorithm starts by computing, at each index of the text, the hash value of the
string starting at that particular index with the same length as the pattern. If the hash value of that
equals to the hash value of the given pattern, then it does a full match at that particular index.
Rabin Karp algorithm first computes the hash value of pattern P and first Np characters from S. If
hash values are same, we check the equality of actual strings. If the pattern is found, then it is
called hit. Otherwise, it is called a spurious hit. If hash values are not same, no need to compare
actual strings.
1. Calculate the hash value of the pattern: The hash value of the pattern is calculated using a hash
function, which takes the pattern as input and produces a hash value as output.
2. Calculate the hash values of all the possible substrings of the same length in the text: The hash
values of all the possible substrings of the same length as the pattern are calculated using the
same hash function.
3. Compare the hash value of the pattern with the hash values of all the possible substrings: If a
match is found, the algorithm checks the characters of the pattern and the substring to verify
that they are indeed equal.
4. Move on to the next possible substring: If the characters do not match, the algorithm moves on
to the next possible substring and repeats the process until all possible substrings have been
compared.
Implement the Rabin-Karp algorithm based on above steps and give different input text and
pattern to check its correctness. Also, find the time complexity of your implemented
algorithm.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <time.h>
// Function to calculate the hash value of a string
unsigned long long calculateHash(char *str, int len) {
unsigned long long hash = 0;
for (int i = 0; i < len; i++) {
hash = (hash * 256 + str[i]) % 101; // Use a prime number as modulo for better distribution
}
return hash;
}
// Function to implement the Rabin-Karp algorithm
void rabinKarp(char *text, char *pattern) {
int n = strlen(text);
int m = strlen(pattern);
int prime = 101; // Prime number for hashing
unsigned long long patternHash = calculateHash(pattern, m);
unsigned long long textHash = calculateHash(text, m);
for (int i = 0; i <= n - m; i++) {
if (patternHash == textHash) {
// If hash matches, check character by character
int j;
for (j = 0; j < m; j++) {
if (text[i + j] != pattern[j])
break;
}
if (j == m) {
// Pattern found at position i
printf("Pattern found at index %d\n", i);
}
}
        if (i < n - m) {
            // Rolling hash: drop the leading character and append the next one.
            // h = 256^(m-1) mod prime keeps the update consistent with calculateHash();
            // adding `prime` before subtracting avoids wrap-around of the unsigned value.
            unsigned long long h = 1;
            for (int k = 0; k < m - 1; k++)
                h = (h * 256) % prime;
            textHash = (256 * (textHash + prime - (text[i] * h) % prime) + text[i + m]) % prime;
        }
}
}
int main() {
char text1[] = "abracadabra";
char pattern1[] = "abra";
char text2[] = "hello, this is a test text for Rabin-Karp algorithm";
char pattern2[] = "test";
char text3[] = "algorithm";
char pattern3[] = "algo";
clock_t start, end;
double cpu_time_used;
start = clock();
rabinKarp(text1, pattern1);
end = clock();
cpu_time_used = ((double)(end - start)) / CLOCKS_PER_SEC * 1000.0;
printf("Time taken for Pattern 1: %.2f ms\n", cpu_time_used);
start = clock();
rabinKarp(text2, pattern2);
end = clock();
cpu_time_used = ((double)(end - start)) / CLOCKS_PER_SEC * 1000.0;
printf("Time taken for Pattern 2: %.2f ms\n", cpu_time_used);
start = clock();
rabinKarp(text3, pattern3);
end = clock();
cpu_time_used = ((double)(end - start)) / CLOCKS_PER_SEC * 1000.0;
printf("Time taken for Pattern 3: %.2f ms\n", cpu_time_used);
return 0;
}
Observations:
The Rabin-Karp algorithm can effectively find a pattern in a given text. Its time complexity in the
worst case is O((n - m + 1) * m), where 'n' is the length of the text, and 'm' is the length of the
pattern. In the average case, it has a better time complexity due to its hashing approach.
Result
Conclusion: In conclusion, the Rabin-Karp algorithm is a practical method for pattern matching
within text. It efficiently computes hash values for substrings and significantly improves search
times. Its worst-case time complexity is dependent on the text and pattern lengths, making it a
valuable tool for text searching applications.
Quiz:
1. What is the Rabin-Karp algorithm used for?
Answer: The Rabin-Karp algorithm is used for pattern matching within text.
2. What is the time complexity of the Rabin-Karp algorithm in the average case?
Answer: The time complexity of the Rabin-Karp algorithm in the average case is O(n + m),
where 'n' is the length of the text, and 'm' is the length of the pattern.
3. What is the main advantage of the Rabin-Karp algorithm over the brute force algorithm for
string matching?
Answer: The main advantage of the Rabin-Karp algorithm over the brute force algorithm for
string matching is its ability to efficiently match patterns using hash values, resulting in faster
search times.
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.