Ada Lab Manual 008


L. D. College of Engineering, Ahmedabad

A Laboratory Manual for

Analysis and Design of Algorithms


(3150703)

B.E. Semester 5
(Computer Engineering)

Faculty Details:
1) Prof. H. K. GEVARIYA
2) Prof. S. A. PATEL
3) Prof. S. R. MODI
L. D. College of Engineering, Ahmedabad

Certificate

This is to certify that Mr./Ms.


Enrollment No. of B.E. Semester
Computer Engineering of this Institute (GTU Code: ) has satisfactorily
completed the Practical / Tutorial work for the subject Analysis and Design
of Algorithms (3150703) for the academic year 2023-2024.

Place:
Date:

Name and Sign of Faculty member

Head of the Department

Institute’s Vision
To contribute for sustainable development of nation through achieving excellence in
technical education and research while facilitating transformation of students into responsible
citizens and competent professionals.

Institute’s Mission
● To impart affordable and quality education in order to meet the needs of industries and
achieve excellence in teaching-learning process.

● To create a conducive research ambience that drives innovation and nurtures research-oriented
scholars and outstanding professionals.

● To collaborate with other academic & research institutes as well as industries in order to
strengthen education and multidisciplinary research.

● To promote equitable and harmonious growth of students, academicians, staff, society and
industries, thereby becoming a center of excellence in technical education.

● To practice and encourage high standards of professional ethics, transparency and
accountability.

Department’s Vision
▪ To Achieve Academic Excellence in Computer Engineering by Providing Value Based
Education.

Department’s Mission
▪ To produce graduates according to the needs of industry, government, society and
scientific community.

▪ To develop partnership with industries, research and development organizations and
government sectors for continuous improvement of faculties and students.

▪ To motivate students for participating in reputed conferences, workshops, seminars and
technical events to make them technocrats and entrepreneurs.

▪ To enhance the ability of students to address the real life issues by applying technical
expertise, human values and professional ethics.

▪ To inculcate habit of using free and open source software, latest technology and soft skills
so that they become competent professionals.

▪ To encourage faculty members to upgrade their skills and qualification through training
and higher studies at reputed universities.

Program Educational Objectives (PEOs)

● Provide computing solutions of complex problems as per business and societal needs.
● Procure requisite skills to pursue entrepreneurship, research and development, and
imbibe high degree of professionalism in the fields of computing.
● Embrace life-long learning and remain continuously employable.
● Work and excel in a highly competitive, supportive, multicultural and professional
environment while abiding by legal and ethical responsibilities.

Program Specific Outcomes (PSOs)

▪ Graduates will be able to explore and propose effective solutions to the problems in the
area of Computer Engineering as per the needs of society and industry.
▪ Graduates will be able to apply standard practice and strategies to develop quality software
products using modern techniques, programming skills, tools & an open ended
programming environment and work in a team.
▪ Graduates will manifest the skills of continuous learning in the fast changing field of
Computer Engineering

Programme Outcomes (POs)

1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering
fundamentals, and an engineering specialization to the solution of complex engineering
problems.
2. Problem analysis: Identify, formulate, review research literature, and analyze complex
engineering problems reaching substantiated conclusions using first principles of
mathematics, natural sciences, and engineering sciences.
3. Design/development of solutions: Design solutions for complex engineering problems and
design system components or processes that meet the specified needs with appropriate
consideration for the public health and safety, and the cultural, societal, and environmental
considerations.
4. Conduct investigations of complex problems: Use research-based knowledge and research
methods including design of experiments, analysis and interpretation of data, and synthesis
of the information to provide valid conclusions.
5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools including prediction and modeling to complex engineering activities
with an understanding of the limitations.
6. The engineer and society: Apply reasoning informed by the contextual knowledge to assess
societal, health, safety, legal and cultural issues and the consequent responsibilities relevant
to the professional engineering practice.
7. Environment and sustainability: Understand the impact of the professional engineering
solutions in societal and environmental contexts, and demonstrate the knowledge of, and
need for sustainable development.
8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and
norms of the engineering practice.
9. Individual and team work: Function effectively as an individual, and as a member or leader
in diverse teams, and in multidisciplinary settings.
10. Communication: Communicate effectively on complex engineering activities with the
engineering community and with society at large, such as, being able to comprehend and
write effective reports and design documentation, make effective presentations, and give and
receive clear instructions.
11. Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one’s own work, as a member and
leader in a team, to manage projects and in multidisciplinary environments.

12. Life-long learning: Recognize the need for, and have the preparation and ability to engage in
independent and life-long learning in the broadest context of technological change.

Analysis and Design of Algorithms (3150703)

Practical – Course Outcome matrix

Course Outcomes (COs):

1. Analyze the asymptotic performance of algorithms.
2. Derive and solve recurrences describing the performance of divide-and-conquer algorithms.
3. Find optimal solutions by applying various methods.
4. Apply pattern matching algorithms to find a particular pattern.
5. Differentiate polynomial and non-polynomial problems.
6. Explain the major graph algorithms and their analyses. Employ graphs to model engineering
problems, when appropriate.
Sr. No. (mapped COs)  Objective(s) of Experiment

1. (CO1)  Implement a function for each of the following problems and count the number of steps
   executed/time taken by each function on various inputs, and write the time complexity of each
   function. Also draw a comparative chart of number of inputs versus steps executed/time taken.
   In each of the following functions, N will be passed by the user.
   1. To calculate the sum of 1 to N numbers using a loop.
   2. To calculate the sum of 1 to N numbers using an equation.
   3. To calculate the sum of 1 to N numbers using recursion.

2. (CO1, CO2)  Write user defined functions for the following sorting methods and compare their
   performance by steps executed/time taken for execution on various inputs of random nature,
   ascending order and descending order sorted data. Also draw a comparative chart of number of
   inputs versus steps executed/time taken for each case (random, ascending, and descending).
   1. Selection Sort  2. Bubble Sort  3. Insertion Sort  4. Merge Sort  5. Quick Sort

3. (CO1, CO2)  Compare the performance of linear search and binary search for Best case, Average
   case and Worst case inputs.

4. (CO1, CO5)  Implement functions to print the nth Fibonacci number using iterative and
   recursive methods. Compare the performance of the two methods by counting the number of steps
   executed on various inputs. Also draw a comparative chart. (Fibonacci series 1, 1, 2, 3, 5,
   8…; here 8 is the 6th Fibonacci number.)

5. (CO3)  Implement program to solve the problem of making a change using dynamic programming.

6. (CO3)  Implement program of chain matrix multiplication using dynamic programming.

7. (CO3)  Implement program to solve the LCS problem using dynamic programming.

8. (CO3)  Implement program to solve the Knapsack problem using dynamic programming.

9. (CO3)  Implement program for solution of the fractional Knapsack problem using the greedy
   design technique.

10. (CO3)  Implement program for solution of the Making Change problem using the greedy design
    technique.

11. (CO3, CO6)  Implement program for Kruskal's algorithm to find a minimum spanning tree.

12. (CO3, CO6)  Implement program for Prim's algorithm to find a minimum spanning tree.

13. (CO6)  Implement DFS and BFS graph traversal techniques and write their time complexities.

14. (CO4)  Implement the Rabin-Karp string matching algorithm.


Index
(Progressive Assessment Sheet)

Sr. No.  Objective(s) of Experiment
(Columns to be filled in for each experiment: Page No. | Date of performance | Date of
submission | Assessment Marks | Sign. of Teacher with date)

1.  Implement a function for each of the following problems and count the number of steps
    executed/time taken by each function on various inputs, and write the time complexity of
    each function. Also draw a comparative chart of number of inputs versus steps executed/time
    taken. In each of the following functions, N will be passed by the user.
    1. To calculate the sum of 1 to N numbers using a loop.
    2. To calculate the sum of 1 to N numbers using an equation.
    3. To calculate the sum of 1 to N numbers using recursion.
2.  Write user defined functions for the following sorting methods and compare their performance
    by steps executed/time taken for execution on various inputs of random nature, ascending
    order and descending order sorted data. Also draw a comparative chart of number of inputs
    versus steps executed/time taken for each case (random, ascending, and descending).
    1. Selection Sort  2. Bubble Sort  3. Insertion Sort  4. Merge Sort  5. Quick Sort
3.  Compare the performance of linear search and binary search for Best case, Average case and
    Worst case inputs.
4.  Implement functions to print the nth Fibonacci number using iterative and recursive methods.
    Compare the performance of the two methods by counting the number of steps executed on
    various inputs. Also draw a comparative chart. (Fibonacci series 1, 1, 2, 3, 5, 8…; here 8
    is the 6th Fibonacci number.)
5.  Implement program to solve the problem of making a change using dynamic programming.
6.  Implement program of chain matrix multiplication using dynamic programming.
7.  Implement program to solve the LCS problem using dynamic programming.
8.  Implement program to solve the Knapsack problem using dynamic programming.
9.  Implement program for solution of the fractional Knapsack problem using the greedy design
    technique.
10. Implement program for solution of the Making Change problem using the greedy design
    technique.
11. Implement program for Kruskal's algorithm to find a minimum spanning tree.
12. Implement program for Prim's algorithm to find a minimum spanning tree.
13. Implement DFS and BFS graph traversal techniques and write their time complexities.
14. Implement the Rabin-Karp string matching algorithm.

Total

220283107008 GOHIL MAYUR

Experiment No: 1
Implement a function for each of following problems and count the number of steps executed/time
taken by each function on various inputs and write equation for the growth rate of each function.
Also draw a comparative chart of number of inputs versus steps executed/time taken. In each of
the following functions, N will be passed by the user.
1. To calculate sum of 1 to N numbers using loop.
2. To calculate sum of 1 to N numbers using equation.
3. To calculate sum of 1 to N numbers using recursion.

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Performance


analysis, and Mathematical skills

Relevant CO: CO1

Objectives: (a) Compare performance of various algorithms


(b) Judge best algorithm in terms of growth rate or steps executed

Equipment/Instruments: Computer System, Any C language editor

Theory:

1. Below are the steps to calculate sum of 1 to N numbers using loop

1. Take an input value for N.


2. Initialize a variable sum to zero.
3. Start a loop that iterates from i=1 to i=N.
4. In each iteration, add the value of i to the sum variable.
5. After the loop completes, output the value of sum.

2. Below are the steps to calculate sum of 1 to N numbers using equation

1. Take an input value for N.


2. Calculate sum as N*(N+1)/2.
3. Output the value of sum.
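For example (an illustrative check, not part of the original steps): N = 100 gives
sum = 100 · 101 / 2 = 5050, computed in a single step regardless of how large N is.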

3. Below are the steps to calculate sum of 1 to N numbers using recursion

1. Take an input value for N.


2. Define a recursive function sum(n) that takes an integer argument n and returns the
sum of 1 to n.
3. In the sum(n) function, check if n equals 1. If it does, return 1 (the base case).
4. Otherwise, return the sum of n and the result of calling sum(n-1).
5. Output the result.


Implement three functions based on the above steps and calculate the time taken by each
function on various inputs ranging from 100 to 500.

Code 1:

#include <stdio.h>
#include <time.h>

int main() {
int N, sum = 0;

// Input the value of N


printf("Enter a positive integer N: ");
scanf("%d", &N);

// Check if N is a positive integer


if (N <= 0) {
printf("Please enter a positive integer.\n");
return 1; // Exit with an error code
}

// Record the start time


clock_t start_time = clock();

// Calculate the sum of numbers from 1 to N


for (int i = 1; i <= N; i++) {
sum += i;
}

// Record the end time


clock_t end_time = clock();

// Calculate the elapsed time in seconds with precision


double elapsed_time = ((double)(end_time - start_time)) / CLOCKS_PER_SEC;

// Output the result and running time with precision


printf("The sum of numbers from 1 to %d is: %d\n", N, sum);
printf("Time taken by the program: %.6f seconds\n", elapsed_time);

return 0; // Exit successfully


}

Code 2:
#include <stdio.h>
#include <time.h>

int main() {
int N, sum;
clock_t start_time, end_time;
double elapsed_time;

// Input the value of N


printf("Enter a positive integer N: ");


scanf("%d", &N);

// Check if N is a positive integer


if (N <= 0) {
printf("Please enter a positive integer.\n");
return 1; // Exit with an error code
}

// Record the start time


start_time = clock();

// Calculate the sum using the equation N*(N+1)/2


sum = N * (N + 1) / 2;

// Record the end time


end_time = clock();

// Calculate the elapsed time in seconds with precision


elapsed_time = (double)(end_time - start_time) / CLOCKS_PER_SEC;

// Output the result and running time with precision


printf("The sum of numbers from 1 to %d is: %d\n", N, sum);
printf("Time taken by the program: %.6f seconds\n", elapsed_time);

return 0; // Exit successfully


}

Code 3:

#include <stdio.h>
#include <time.h>

// Recursive function to calculate the sum of 1 to n


int sum_recursive(int n) {
if (n == 1) {
return 1;
} else {
return n + sum_recursive(n - 1);
}
}

int main() {
int N, result;
clock_t start_time, end_time;
double elapsed_time;

// Input the value of N


printf("Enter a positive integer N: ");
scanf("%d", &N);

// Check if N is a positive integer


if (N <= 0) {


printf("Please enter a positive integer.\n");


return 1; // Exit with an error code
}

// Record the start time


start_time = clock();

// Calculate the sum using recursion


result = sum_recursive(N);

// Record the end time


end_time = clock();

// Calculate the elapsed time in seconds with precision


elapsed_time = (double)(end_time - start_time) / CLOCKS_PER_SEC;

// Output the result and running time with precision


printf("The sum of numbers from 1 to %d is: %d\n", N, result);
printf("Time taken by the program: %.6f seconds\n", elapsed_time);

return 0; // Exit successfully


}

Observations:
Write observations based on the time taken by each algorithm.

Result: Complete the below table based on your implementation of functions and time taken by
each function.

Time taken

Inputs    Loop method    Equation    Recursion
100       0.30           0.06        0.22
200       0.47           0.05        0.62
300       0.50           0.04        0.76
400       0.62           0.07        0.86
500       0.83           0.06        0.87

Equation →   O(n)        O(1)        O(n)


Chart:

Conclusion:
In analyzing the methods to calculate the sum of numbers from 1 to N, we find that the loop-based
approach involves iterative addition, resulting in a time complexity of O(N). The equation-based
approach, using the formula N*(N+1)/2, provides a constant time solution with a time complexity
of O(1), making it the most efficient for large values of N. The recursive method, on the other
hand, relies on function calls and stack operations, resulting in a time complexity of O(N) and
potentially leading to stack overflow errors for very large values of N. In conclusion, the
equation-based approach stands out as the most efficient and scalable solution for calculating the
sum of 1 to N numbers.

Quiz:
1. What is the meaning of constant growth rate of an algorithm?
Answer: The term "constant growth rate" typically refers to the behavior of an algorithm as the
input size increases. When an algorithm is said to have a constant growth rate, it means that the
time (or space) complexity of the algorithm remains constant or doesn't significantly change as
the size of the input increases.

2. If one algorithm has a growth rate of n^2 and a second algorithm has a growth rate of n, then
which algorithm executes faster? Why?
Answer: The algorithm with linear time complexity (n) will execute faster for sufficiently large
input sizes. This is because linear time complexity implies that the algorithm's execution time
grows linearly with the size of the input, while quadratic time complexity implies that the
execution time grows quadratically with the input size.

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms”by E. Horowitz et al.

References used by the students: “Fundamentals of Algorithms”by E. Horowitz et al.


Experiment No: 2
Write user defined functions for the following sorting methods and compare their performance by
steps executed/time taken for execution on various inputs of random nature, ascending order and
descending order sorted data. Also, draw a comparative chart of number of inputs versus steps
executed/time taken for each case (random, ascending, and descending).
1. Selection Sort
2. Bubble Sort
3. Insertion Sort
4. Merge Sort
5. Quick Sort
Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Performance


analysis, and Mathematical skills

Relevant CO: CO1, CO2

Objectives: (a) Compare performance of various algorithms.


(b) Judge best sorting algorithm on sorted and random inputs in terms of growth
rate/time complexity.
(c) Derive time complexity from steps count on various inputs.

Equipment/Instruments: Computer System, Any C language editor

Theory:

Sorting is a fundamental concept in Data Structures and Algorithms (DSA) that involves
arranging elements in a specific order, typically in ascending or descending order. It is essential
for various computational tasks and plays a crucial role in optimizing search and retrieval
operations. Sorting algorithms are classified into two main categories: comparison-based and non-
comparison-based. Comparison-based sorting algorithms, such as Quicksort, Mergesort, and
Heapsort, compare elements using a defined comparison function and rearrange them based on the
outcomes. Non-comparison-based sorting algorithms, like Counting Sort and Radix Sort, exploit
specific properties of the data to achieve linear or near-linear time complexity. The choice of
sorting algorithm depends on factors like data size, data distribution, and desired time complexity.
Efficient sorting is vital for various applications, including database management, search engines,
and data analysis, making it a central topic in DSA.


Implement the above sorting functions and calculate the time taken by each function on various
inputs ranging from 1000 to 5000 elements.
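One possible way to drive these measurements (a sketch only: the fill_input() helper, the loop
over input sizes, and the use of bubble_sort() as the function under test are not part of the
original listings; any of the five sorts below can be substituted) is to generate the three
input patterns programmatically instead of typing thousands of numbers:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define MAX 5000

/* Fill a[0..n-1]: pattern 0 = random, 1 = ascending (sorted), 2 = descending (reverse sorted). */
static void fill_input(int a[], int n, int pattern)
{
    for (int i = 0; i < n; i++) {
        if (pattern == 0)
            a[i] = rand() % 10000;
        else if (pattern == 1)
            a[i] = i;
        else
            a[i] = n - i;
    }
}

/* Bubble sort is used here only as the function under test; substitute any of the five sorts. */
static void bubble_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - 1 - i; j++)
            if (a[j] > a[j + 1]) {
                int t = a[j];
                a[j] = a[j + 1];
                a[j + 1] = t;
            }
}

int main(void)
{
    static int a[MAX];
    srand((unsigned)time(NULL));

    for (int pattern = 0; pattern <= 2; pattern++) {
        for (int n = 1000; n <= MAX; n += 1000) {
            fill_input(a, n, pattern);
            clock_t start = clock();
            bubble_sort(a, n);
            clock_t end = clock();
            printf("pattern %d, n = %d : %.6f seconds\n",
                   pattern, n, (double)(end - start) / CLOCKS_PER_SEC);
        }
    }
    return 0;
}

Each (pattern, n) combination prints one timing row, which can be copied into the corresponding
table in the Result section.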

1. Selection Sort Program in C

#include <stdio.h>
#include <time.h>

#define MAX 5000   /* inputs in this exercise range up to 5000 elements */

int main()
{
    int n, temp, i, j, a[MAX];
    clock_t start, end;
    double tc;

    printf("Enter total number:");
    scanf("%d", &n);
    for (i = 0; i < n; i++) {
        scanf("%d", &a[i]);
    }

    start = clock();
    /* selection-style sort: bring the smallest remaining element to position i by swapping */
    for (i = 0; i < n; i++) {
        for (j = i + 1; j < n; j++) {
            if (a[i] > a[j]) {
                temp = a[i];
                a[i] = a[j];
                a[j] = temp;
            }
        }
    }
    end = clock();

    for (i = 0; i < n; i++) {
        printf("%d ", a[i]);
    }
    tc = (double)(end - start) / CLOCKS_PER_SEC;
    printf("\nTime Taken: %lf\n", tc);
    return 0;
}

2. Bubble Sort Program in C


#include <stdio.h>
#include <time.h>

#define MAX 5000

int main()
{
    int n, t, i, j, a[MAX];
    clock_t start, end;
    double tc;

    printf("Enter the total number:");
    scanf("%d", &n);
    for (i = 0; i < n; i++) {
        scanf("%d", &a[i]);
    }

    start = clock();
    /* bubble sort: repeatedly swap adjacent out-of-order pairs */
    for (i = 0; i < n - 1; i++) {
        for (j = 0; j < n - 1; j++) {
            if (a[j] > a[j + 1]) {
                t = a[j];
                a[j] = a[j + 1];
                a[j + 1] = t;
            }
        }
    }
    end = clock();

    for (i = 0; i < n; i++) {
        printf("%d ", a[i]);
    }
    tc = (double)(end - start) / CLOCKS_PER_SEC;
    printf("\nTime Taken: %lf\n", tc);
    return 0;
}

3. Insertion Sort Program in C

#include <stdio.h>
#include <time.h>

#define MAX 5000

int main()
{
    int n, key, i, j, a[MAX];
    clock_t start, end;
    double tc;

    printf("Enter total number:");
    scanf("%d", &n);
    for (i = 0; i < n; i++) {
        scanf("%d", &a[i]);
    }

    start = clock();
    /* insertion sort: insert a[i] into the already sorted prefix a[0..i-1] */
    for (i = 1; i < n; i++) {
        key = a[i];
        j = i - 1;
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;
    }
    end = clock();

    for (i = 0; i < n; i++) {
        printf("%d ", a[i]);
    }
    tc = (double)(end - start) / CLOCKS_PER_SEC;
    printf("\nTime Taken: %lf\n", tc);
    return 0;
}
4. Merge Sort Program in C

#include <stdio.h>
#include <time.h>

#define MAX 10

int a[7] = {7, 6, 5, 4, 3, 2, 1};
int b[MAX];                       /* auxiliary array used while merging */
clock_t start, end;
double tc;

/* merge the sorted halves a[l..m] and a[m+1..h] */
void merging(int l, int m, int h)
{
    int l1, l2, i;

    for (l1 = l, l2 = m + 1, i = l; l1 <= m && l2 <= h; i++) {
        if (a[l1] < a[l2])
            b[i] = a[l1++];
        else
            b[i] = a[l2++];
    }
    while (l1 <= m) b[i++] = a[l1++];
    while (l2 <= h) b[i++] = a[l2++];
    for (i = l; i <= h; i++)
        a[i] = b[i];
}

/* recursively split, sort and merge a[l..h] */
void sort(int l, int h)
{
    if (l < h) {
        int m = (l + h) / 2;
        sort(l, m);
        sort(m + 1, h);
        merging(l, m, h);
    }
}

int main()
{
    int i;

    start = clock();
    sort(0, 6);
    end = clock();

    for (i = 0; i < 7; i++) {
        printf("%d ", a[i]);
    }
    tc = (double)(end - start) / CLOCKS_PER_SEC;
    printf("\nTime Taken: %lf\n", tc);
    return 0;
}


5. Quick Sort Program in C

#include <stdio.h>
#include <time.h>

/* partition around number[first] as pivot and recursively sort both sides */
void quicksort(int number[25], int first, int last)
{
    int i, j, pivot, temp;

    if (first < last) {
        pivot = first;
        i = first;
        j = last;

        while (i < j) {
            while (number[i] <= number[pivot] && i < last)
                i++;
            while (number[j] > number[pivot])
                j--;
            if (i < j) {
                temp = number[i];
                number[i] = number[j];
                number[j] = temp;
            }
        }
        temp = number[pivot];
        number[pivot] = number[j];
        number[j] = temp;
        quicksort(number, first, j - 1);
        quicksort(number, j + 1, last);
    }
}

int main()
{
    int i, count, number[25];
    clock_t start, end;
    double tc;                       /* was missing from the original listing */

    printf("How many elements are u going to enter?: ");
    scanf("%d", &count);

    printf("Enter %d elements: ", count);
    for (i = 0; i < count; i++)
        scanf("%d", &number[i]);

    start = clock();
    quicksort(number, 0, count - 1);
    end = clock();

    printf("Order of Sorted elements: ");
    for (i = 0; i < count; i++)
        printf(" %d", number[i]);

    tc = (double)(end - start) / CLOCKS_PER_SEC;
    printf("\nTime Taken: %lf\n", tc);
    return 0;
}
Observations:
Selection Sort: Inefficient for large datasets due to its quadratic time complexity (O(n^2)).
Bubble Sort: Inefficient for large datasets; also quadratic in time (O(n^2)).
Insertion Sort: Suitable for small datasets or nearly sorted data but less efficient for larger
datasets (O(n^2)).
Merge Sort: Efficient and consistent, with a time complexity of O(n log n) for various dataset
sizes.
Quick Sort: Highly efficient for most datasets, with an average time complexity of O(n log n),
though it can degrade in rare worst-case scenarios to O(n^2).

Result: Complete the below table based on your implementation of functions and time taken by
each function. Also, prepare similar tables for ascending order sorted data and descending
order sorted data.

Time taken (Random Data)

Inputs    Selection    Bubble    Insertion    Merge    Quick
1000      1.56         2.30      0.81         0.17     0.08
2000      4.15         6.98      2.23         2.29     0.18
3000      13.32        11.54     5.68         0.43     0.29
4000      20.91        34.54     9.37         0.70     0.35
5000      37.56        55.40     14.79        0.78     0.44

Time Complexity →   O(N^2)   O(N^2)   O(N^2)   O(N log N)   O(N log N)

Time taken (Ascending order sorted data)

Inputs    Selection    Bubble    Insertion    Merge    Quick
1000      1.04         1.32      0.00         0.09     1.44
2000      5.81         3.11      0.01         0.20     3.76
3000      6.29         6.68      0.01         0.21     4.66
4000      20.39        16.65     0.01         0.39     8.49
5000      27.87        18.71     0.02         0.36     8.74

Time Complexity →   O(N^2)   O(N^2)   O(N)   O(N log N)   O(N^2)

Time taken (Descending order sorted data)

Inputs    Selection    Bubble    Insertion    Merge    Quick
1000      1.27         1.80      1.41         0.09     2.71
2000      3.01         6.34      4.61         0.19     8.23
3000      6.55         14.29     12.81        0.32     19.61
4000      17.48        28.18     22.12        0.39     33.74
5000      30.59        49.61     27.09        0.50     67.25

Time Complexity →   O(N^2)   O(N^2)   O(N^2)   O(N log N)   O(N^2)

Chart:

Sample Chart is as below:


Conclusion:

Based on the measured times above: for random data, quick sort (closely followed by merge sort)
is the fastest; for ascending (already sorted) data, insertion sort is by far the fastest; and
for descending (reverse sorted) data, merge sort performs best while quick sort degrades towards
its worst case. Overall, the O(N log N) algorithms scale far better than the O(N^2) algorithms
(selection, bubble, insertion) as the input size grows.


Quiz:
1. Which sorting function execute faster in case of ascending order sorted data?
Answer: On ascending (already sorted) data, insertion sort executes fastest, since it needs only
one comparison per element (its O(N) best case), as the measured times above show.

2. Which sorting function execute faster in case of descending order sorted data?
Answer: On descending (reverse sorted) data, merge sort executes fastest, since it remains
O(N log N) regardless of the input order.

3. Which sorting function execute faster in case of random data?


Answer: On random data, quick sort executes fastest on average (O(N log N)), with merge sort
close behind.

4. On what kind of data, the best case of Bubble sort occurs?


Answer: Ascending order sorted data will be the best case of bubble sort.

5. On what kind of data, the worst case of Bubble sort occurs?


Answer: Descending (reverse sorted) data is the worst case for bubble sort, since every pass
must swap every adjacent out-of-order pair.

6. On what kind of data, the best case of Quick sort occurs?


Answer: The best case of quick sort occurs when each pivot splits the array into two nearly
equal halves, e.g., random or well-balanced data.

7. On what kind of data, the worst case of Quick sort occurs?


Answer: Already sorted data (ascending or descending) is the worst case for quick sort when the
first element is chosen as the pivot, because the partitions are maximally unbalanced.

8. Which sorting algorithms are in-place sorting algorithms?


Answer: Bubble sort, selection sort, insertion sort and quick sort are in-place sorting algorithms

9. Which sorting algorithms are stable sorting algorithms?


Answer: Bubble sort, insertion sort and merge sort are stable sorting algorithms; selection sort
and quick sort (as commonly implemented) are not stable.

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms”by E. Horowitz et al.

References used by the students: “Fundamentals of Algorithms”by E. Horowitz et al.


Experiment No: 3
Compare the performances of linear search and binary search for Best case, Average case and
Worst case inputs.

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Performance


analysis, and Mathematical skills

Relevant CO: CO1, CO2

Objectives: (a) Identify Best, Worst and Average cases of given problem.
(b) Derive time complexity from steps count for different inputs.

Equipment/Instruments: Computer System, Any C language editor

Theory:

Steps to implement linear search are as below:

A linear search, also known as a sequential search, is a method of finding an element within a list.
It checks each element of the list sequentially until a match is found or the whole list has been
searched. A simple approach to implement a linear search is:

1. Begin with the leftmost element of arr[] and one by one compare x with each element.
2. If x matches with an element, then return the index.
3. If x does not match with any of the elements, then return -1.

Steps to implement binary search are as below:

1. Take an input sorted array A of n elements and a target value T.


2. Define variables start and end to represent the start and end indices of the search range,
initially set to 0 and n-1 respectively.
3. Repeat the following steps while start <= end:
a. Calculate the midpoint index mid as (start + end) / 2.
b. If the value of the midpoint element A[mid] is equal to the target value T, return
the value of mid.
c. If the value of the midpoint element A[mid] is greater than the target value T, set
end to mid-1.
d. If the value of the midpoint element A[mid] is less than the target value T, set start
to mid+1.
4. If the target value T is not found in the array, return -1.
5. Output the value returned in Step 3, representing the position of the target value T in the array.
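For example (an illustrative trace, not part of the original manual): searching for T = 23 in
the sorted array A = [4, 8, 15, 16, 23, 42] with n = 6 gives start = 0, end = 5, mid = 2,
A[2] = 15 < 23, so start = 3; then mid = 4 and A[4] = 23 = T, so the algorithm returns index 4
after only two comparisons.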


Implement function of linear search and binary search algorithm. Compare both the
algorithm on various inputs ranging from 100 to 500 for each case (Best, Average, and
Worst).

Code :
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

// Linear search
int linearSearch(int arr[], int n, int target) {
for (int i = 0; i < n; i++) {
if (arr[i] == target) {
return i;
}
}
return -1;
}

// Binary search
int binarySearch(int arr[], int n, int target) {
int left = 0, right = n - 1;
while (left <= right) {
int mid = left + (right - left) / 2;
if (arr[mid] == target) {
return mid;
}
if (arr[mid] < target) {
left = mid + 1;
} else {
right = mid - 1;
}
}
return -1;
}

int main() {
int n = 1000000;                      // Size of the array
int *arr = malloc(n * sizeof(int));   // heap allocation; a 4 MB local array may overflow the stack
if (arr == NULL) {
    return 1;                         // allocation failed
}
int target = 999999;                  // Element to search (worst case for linear search)

// Populate the array with numbers from 0 to n-1


for (int i = 0; i < n; i++) {
arr[i] = i;
}

clock_t start, end;


double cpu_time_used;

// Linear search
start = clock();
int linearResult = linearSearch(arr, n, target);
end = clock();


cpu_time_used = ((double)(end - start)) / CLOCKS_PER_SEC * 1000.0;


printf("Linear Search Result: Index %d, Time: %.2f ms\n", linearResult, cpu_time_used);

// Binary search (requires a sorted array)


start = clock();
int binaryResult = binarySearch(arr, n, target);
end = clock();
cpu_time_used = ((double)(end - start)) / CLOCKS_PER_SEC * 1000.0;
printf("Binary Search Result: Index %d, Time: %.2f ms\n", binaryResult, cpu_time_used);

free(arr);   // release the heap buffer
return 0;
}

Observations:
Linear Search:

Time Taken: Linear search exhibits a linear time complexity, taking time proportional to the size
of the array. In the given test case, it takes a few milliseconds (typically less than 10ms) to find
the target element in an array of one million elements.
Observation: Linear search is efficient for small datasets or when the target is relatively close to
the beginning of the array. However, it becomes less efficient as the dataset size increases, and it
may take a noticeable amount of time for larger arrays.

Binary Search:

Time Taken: Binary search demonstrates a logarithmic time complexity, and it is significantly
faster than linear search. In the given test case, it typically takes a fraction of a millisecond to find
the target element in an array of one million elements.
Observation: Binary search is highly efficient, particularly when dealing with sorted datasets. It
significantly reduces the search time as it halves the search space in each iteration. This makes it
suitable for large datasets and is one of the preferred algorithms for searching in sorted arrays.
In summary, the choice of search algorithm should be based on the characteristics of the dataset.
For small datasets or unsorted data, linear search is simple and acceptable. However, for large
datasets or sorted data, binary search is the more efficient option, as it significantly reduces the
time required to locate an element.

Result: Complete the below table based on your implementation of sequential search algorithm
and steps executed by the function.

Time taken (Best Case)

Inputs    Linear Search    Binary Search
100       0.0100           0.0035
200       0.0100           0.0050
300       0.0100           0.0054
400       0.0100           0.0061
500       0.0100           0.0057

Time Complexity →   O(1)   O(1)


Chart:

Conclusion:
In conclusion, the performance evaluation of linear search and binary search demonstrates that
binary search is significantly more efficient for large datasets and sorted data. It offers faster
search times due to its logarithmic time complexity, making it the preferred choice for
applications where speed is crucial, while linear search remains suitable for smaller datasets or
unsorted data.

Quiz:
1. Which element should be searched for the best case of binary search algorithm?
Answer: The element at the middle index of the array, since it is found on the first comparison.

2. Which element should be searched for the worst case of binary search algorithm?
Answer: The worst case of the binary search algorithm occurs when searching for an element that
is not present in the array, so the search range must shrink all the way to empty.

3. Which algorithm executes faster in worst case?


Answer: Binary search executes faster in the worst case compared to linear search.

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.

References used by the students:


1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein

Experiment No: 4
Implement functions to print nth Fibonacci number using iteration and recursive method. Compare
the performance of two methods by time taken on various inputs. Also draw a comparative chart.
(Fibonacci series 1, 1, 2, 3, 5, 8….. Here 8 is the 6th Fibonacci number).

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Performance


analysis

Relevant CO: CO1, CO5

Objectives: (a) Compare the performances of two different versions of same problem.
(b) Find the time complexity of algorithms.
(C) Understand the polynomial and non-polynomial problems

Equipment/Instruments: Computer System, Any C language editor

Theory:

The Fibonacci series is the sequence of numbers (also called Fibonacci numbers), where every
number is the sum of the preceding two numbers, such that the first two terms are '0' and '1'. In
some older versions of the series, the term '0' might be omitted. A Fibonacci series can thus be
given as, 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, . . . It can thus be observed that every term can be
calculated by adding the two terms before it. We are ignoring initial zero in the series.

To represent any (n+1)th term in this series, we can give the expression as Fn = Fn-1 + Fn-2,
with F1 = F2 = 1 (since the initial zero is ignored here).

Iterative version to print nth Fibonacci number is as below:


Input: An integer n, where n >= 1.
Output: The nth Fibonacci number.
Steps:
Initialize variables f0 = 1, f1 = 1, and i = 2.
If n is 1 or 2 then
Print 1
While i < n, do
a. Set f2 = f0 + f1.
b. Set f0 = f1.
c. Set f1 = f2.
d. Increment i by 1.
Print f1.

Code:

#include <stdio.h>
#include <time.h>


long long int iterativeFibonacci(int n) {


long long int f0 = 1, f1 = 1, f2;
int i = 2;

if (n == 1 || n == 2) {
return 1;
}

while (i < n) {
f2 = f0 + f1;
f0 = f1;
f1 = f2;
i++;
}

return f1;
}

int main() {
int n;
printf("Enter the value of n: ");
scanf("%d", &n);

clock_t start_time = clock();


long long int result = iterativeFibonacci(n);
clock_t end_time = clock();

double elapsed_time = ((double)(end_time - start_time)) / CLOCKS_PER_SEC * 1000.0; // convert to milliseconds

printf("The %dth Fibonacci number is: %lld\n", n, result);


printf("Time taken by the program: %.6f milliseconds\n", elapsed_time);

return 0;
}

Recursive version to print nth Fibonacci number is as below:

Input: An integer n, where n >= 1.


Output: The nth Fibonacci number.
If n is 1 or 2 then
return 1.
else recursively compute next number using the (n-1) th and (n-2) th Fibonacci numbers, and return
their sum.
Print the result.

Code:

#include <stdio.h>
#include <time.h>

// Recursive function to calculate the nth Fibonacci number


long long int recursiveFibonacci(int n) {


if (n <= 0) {
printf("Invalid input\n"); // Handle invalid input
return -1;
} else if (n == 1 || n == 2) {
return 1;
} else {
return recursiveFibonacci(n - 1) + recursiveFibonacci(n - 2);
}
}

int main() {
int n;
printf("Enter the value of n: ");
scanf("%d", &n);

clock_t start_time = clock();


long long int result = recursiveFibonacci(n);
clock_t end_time = clock();

double elapsed_time = ((double)(end_time - start_time)) / CLOCKS_PER_SEC * 1000.0; // convert to milliseconds

if (result != -1) {
printf("The %dth Fibonacci number is: %lld\n", n, result);
}

printf("Time taken by the program: %.6f milliseconds\n", elapsed_time);

return 0;
}

Implement functions of above two versions of Fibonacci series and compare the time taken
by both the functions on various inputs ranging from 10 to 50.

Observations:
Write observation based on time taken by both algorithms.

Result: Complete the below table based on your implementation of Iterative and Recursive
method and time taken by the function.

Time taken

Inputs    Iterative Fibonacci    Recursive Fibonacci
10        0.0123                 0.0020
20        0.0150                 0.0820
30        0.0178                 7.8110
40        0.0152                 834.19
50        0.0183                 12140.7

Time Complexity →   O(n)   O(2^n)


Chart:

Conclusion:
The iterative method computes the nth Fibonacci number in O(n) time and its measured time stays
almost constant over the tested inputs, whereas the recursive method grows exponentially
(O(2^n)) and becomes impractical beyond roughly n = 40-50, as the table above shows.

Quiz:

1. What is the time complexity of iterative version of Fibonacci function?


Answer: The time complexity of the iterative version of the Fibonacci function is O(n).

2. What is the time complexity of recursive version of Fibonacci function?


Answer: The time complexity of the recursive version of the Fibonacci function is exponential,
specifically O(2^n).

3. Can you execute recursive version of Fibonacci function for more inputs?
Answer: No; for larger inputs the recursive version becomes far too slow because its running
time grows exponentially, so it is not practical beyond small values of n.

4. What do you mean by polynomial time algorithms and exponential time algorithms?
Answer: Polynomial time: The running time grows relatively slowly as the input size increases.
Exponential time: These are algorithms where the time it takes to solve a problem grows
exponentially with the input size

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms”by E. Horowitz et al.

References used by the students: “Fundamentals of Algorithms”by E. Horowitz et al.


Experiment No: 5
Implement program to solve problem of making a change using dynamic programming.

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving

Relevant CO: CO3

Objectives: (a) Understand Dynamic programming algorithm design method.


(b) Solve the optimization based problem.
(c) Find the time complexity of the algorithm.

Equipment/Instruments: Computer System, Any C language editor

Theory:

Making Change problem is to find change for a given amount using a minimum number of coins
from a set of denominations. If we are given a set of denominations D = {d0, d1, d2, …, dn} and
if we want to change for some amount N, many combinations are possible. Suppose {d1, d2, d5,
d8}, {d0, d2, d4}, {d0, d5, d7} all are feasible solutions but the solution which selects the
minimum number of coins is considered to be an optimal solution. The aim of making a change is
to find a solution with a minimum number of coins / denominations. Clearly, this is an
optimization problem.

General assumption is that infinite coins are available for each denomination. We can select any
denomination any number of times.

Solution steps are as follow:

Sort all the denominations and start scanning from smallest to largest denomination. In every
iteration i, if current denomination di is acceptable, then 1 coin is added in solution and total
amount is reduced by amount di. Hence,
C[i, j] = 1 + C[i, j – di]

C[i,j] is the minimum number of coins to make change for the amount j. Below figure shows the
content of matrix C.

Figure: Content of matrix C
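The figure itself is not reproduced in this copy. As an illustration, assuming the denominations
d = {1, 3, 5} used in the program below and the amount N = 9, the matrix C works out to:

                 j = 0   1   2   3   4   5   6   7   8   9
  d1 = 1             0   1   2   3   4   5   6   7   8   9
  d2 = 3             0   1   2   1   2   3   2   3   4   3
  d3 = 5             0   1   2   1   2   1   2   3   2   3

The answer C[3, 9] = 3: three coins are needed (for example 3 + 3 + 3, or 5 + 3 + 1).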

If the current denomination is larger than the current problem size, then we have to skip the
denomination and stick with the previously calculated solution. Hence,


C[i, j] = C[i – 1, j]

If above cases are not applicable then we have to stick with choice which returns minimum
number of coin. Mathematically, we formulate the problem as,
C[i, j] = min {C[i – 1, j] , 1 + C[i, j – di]}

Steps to solve making change problem are as below:

Algorithm MAKE_A_CHANGE(d,N)
// d[1…n] = [d1,d2,…,dn] is array of n denominations
// C[1…n, 0…N] is n x N array to hold the solution of sub problems
// N is the problem size, i.e. amount for which change is required

for i ← 1 to n do
    C[i, 0] ← 0
end
for i ← 1 to n do
    for j ← 1 to N do
        if i == 1 and j < d[i] then
            C[i, j] ← ∞
        else if i == 1 then
            C[i, j] ← 1 + C[1, j – d[1]]
        else if j < d[i] then
            C[i, j] ← C[i – 1, j]
        else
            C[i, j] ← min (C[i – 1, j], 1 + C[i, j – d[i]])
        end
    end
end
return C[n, N]

Implement above algorithm and print the matrix C. Your program should return the
number of coins required and its denominations.

Code:

#include <stdio.h>
#include <time.h>

int min(int a, int b) {


return a < b ? a : b;
}

int min_no_coin(int n, int d[], int N, int C[][N + 1]) {


for (int j = 0; j < n; j++) {
C[j][0] = 0;
}
for (int j = 0; j <= N; j++) {
C[0][j] = j;
}
for (int i = 1; i < n; i++) {
for (int j = 1; j <= N; j++) {


if (j < d[i]) {
C[i][j] = C[i - 1][j];
} else {
C[i][j] = min(C[i - 1][j], 1 + C[i][j - d[i]]);
}
}
}

int j = N;
int i = n - 1;
while (j > 0) {
if (C[i][j] == C[i - 1][j]) {
i = i - 1;
} else {
printf("Using one %d coin\n", d[i]);
j = j - d[i];
}
}

return C[n - 1][N];


}

int main() {
int d[] = {1, 3, 5};
int N = 9;
int n = sizeof(d) / sizeof(d[0]);
int C[n][N + 1];

clock_t start_time = clock();


printf("Minimum coins required for amount %d are %d\n", N, min_no_coin(n, d, N, C));
clock_t end_time = clock();

double elapsed_time = ((double)(end_time - start_time)) / CLOCKS_PER_SEC * 1000.0; // convert to milliseconds
printf("Time taken by the program: %.6f milliseconds\n", elapsed_time);

return 0;
}

Observations: This algorithm returns the optimal answer on various inputs.

Result


Conclusion:
The basic operation of the above algorithm is performed within nested for loops. Hence the time
complexity of this algorithm is O(n·N), where n is the number of denominations and N is the
amount for which we want to make change.

Quiz:
1. What is the time complexity of above algorithm?
Answer: The time complexity of above algorithm is O(n*N).

2. Does above algorithm always return optimal answer?


Answer: Yes, above algorithm always return optimal answer.

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms”by E. Horowitz et al.
3. https://codecrucks.com/making-change-problem-using-dynamic-programming/

References used by the students:


1. https://codecrucks.com/making-change-problem-using-dynamic-programming/


Experiment No: 6
Implement program of chain matrix multiplication using dynamic programming.

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving

Relevant CO: CO3

Objectives: (a) Understand Dynamic programming algorithm design method.


(b) Solve the optimization based problem.
(c) Find the time complexity of the algorithm.

Equipment/Instruments: Computer System, Any C language editor

Theory:

Given a sequence of matrices A1, A2,...,An and dimensions p0, p1,...,pn where Ai is of dimension
pi−1 × pi, determine the order of multiplication (represented, say, as a binary tree) that minimizes
the number of operations.
This algorithm does not perform the multiplications; it just determines the best order in which to
perform the multiplications

Two matrices are called compatible only if the number of columns in the first matrix and the
number of rows in the second matrix are the same. Matrix multiplication is possible only if they
are compatible. Let A and B be two compatible matrices of dimensions p x q and q x r
Suppose dimension of three matrices are:

A1 = 5 x 4
A2 = 4 x 6
A3 = 6 x 2
Matrix multiplication is associative. So

(A1 A2) A3 = {(5 x 4) x (4 x 6)} x (6 x 2)


= (5 x 4 x 6) + (5 x 6 x 2)

= 180

A1 (A2 A3) = (5 x 4) x {(4 x 6) x (6 x 2)}


= (5 x 4 x 2) + (4 x 6 x 2)

= 88

The answer of both multiplication sequences would be the same, but the numbers of
multiplications are different. This leads to the question, what order should be selected for a chain
of matrices to minimize the number of multiplications?
Let us denote the number of alternative parenthesizations of a sequence of n matrices by p(n).
When n = 1, there is only one matrix and therefore only one way to parenthesize the matrix. When
n ≥ 2, a fully parenthesized matrix product is the product of two fully parenthesized matrix


sub-products, and the split between the two subproducts may occur between the kth and (k + 1)st
matrices, for any k = 1, 2, 3, …, n – 1. Thus we obtain the recurrence.
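The recurrence itself is not reproduced in this copy; it is the standard count of alternative
parenthesizations:

  P(n) = 1,                                      if n = 1
  P(n) = Σ (k = 1 to n – 1) P(k) · P(n – k),     if n ≥ 2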

The solution to this recurrence is the sequence of Catalan numbers, which grows as
Ω(4^n / n^(3/2)), i.e., exponentially in n. Thus, the number of possible parenthesizations is
exponential in n, and a brute force attempt is infeasible to find the solution.

Any parenthesizations of the product Ai Ai + 1 … Aj must split the product between Ak and Ak+1 for
some integer k in the range i ≤ k < j. That is for some value of k, we first compute the matrices
Ai….k and Ak + 1…j and then multiply them together to produce the final product Ai…j The cost of
computing these parenthesizations is the cost of computing Ai….k , plus the cost of computing Ak +
1…j plus the cost of multiplying them together.

We can define m[i, j] recursively as follows. If i == j, the problem is trivial; the chain consists of
only one matrix Ai…i = A. No scalar multiplications are required. Thus m[i, i] = 0 for i = 1, 2 …n.
To compute m[i, j] when i < j, we take advantage of the structure of an optimal solution of the first
step. Let us assume that the optimal parenthesizations split the product Ai Ai + 1…Aj between
Ak and Ak + 1, where i ≤ k < j. Then m[i, j] is equal to the minimum cost for computing the
subproducts Ai…k and Ak + 1…j plus the cost of multiplying these two matrices together.
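The formula referred to here is also missing from this copy; the standard recurrence, consistent
with the pseudocode below, is:

  m[i, j] = 0,                                                             if i = j
  m[i, j] = min over i ≤ k < j of { m[i, k] + m[k + 1, j] + d[i – 1] · d[k] · d[j] },   if i < j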

Where d = {d0, d1, d2, …, dn} is the vector of matrix dimensions.


m[i, j] = Least number of multiplications required to multiply matrix sequence Ai….Aj .
Steps to solve chain matrix multiplication problem are as below:
Algorithm MATRIX_CHAIN_ORDER(p)
// p is sequence of n matrices
n ← length(p) - 1
for i ← 1 to n do
m[i,i] ← 0
end
for l ← 2 to n do
for i ← 1 to n – l + 1 do
j←i+l-1
m[i, j] ← ∞
for k ← i to j – 1 do
q ← m[i, k] + m[k + 1, j] + d[i – 1] * d[k] * d[j]
if q < m[i, j] then
m[i,j ] ← q
s[i, j] ← k
end
end
end
end
return m and s


Implement the above algorithm and print the matrices m and s.

Code:

#include <stdio.h>
#include <time.h>
#include <limits.h>   /* for INT_MAX, used as the "not yet computed" sentinel */

int MatrixChainOrder(int p[], int n) {


int m[n][n];
int s[n][n];

for (int i = 0; i < n; i++) {


for (int j = 0; j < n; j++) {
m[i][j] = 0;
}
}

for (int i = 1; i < n; i++) {


m[i][i] = 0;
}

for (int L = 2; L < n; L++) {


for (int i = 1; i < n - L + 1; i++) {
int j = i + L - 1;
m[i][j] = INT_MAX;   /* the original listing used 9999, which can be too small for larger dimensions */

for (int k = i; k <= j - 1; k++) {


int q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j];

if (q < m[i][j]) {
m[i][j] = q;
s[i][j] = k;
}
}
}
}

for (int i = 0; i < n; i++) {


for (int j = 0; j < n; j++) {
printf("%d\t\t", m[i][j]);
}
printf("\n");
}

return m[1][n - 1];


}

int main() {
int arr[] = {5, 4, 6, 2, 7,10,30};
int size = sizeof(arr) / sizeof(arr[0]);
clock_t start_time = clock();
printf("Minimum number of multiplications is %d\n", MatrixChainOrder(arr, size));
clock_t end_time = clock();


double elapsed_time = ((double)(end_time - start_time)) / CLOCKS_PER_SEC * 1000.0; // convert to milliseconds
printf("Time taken by the program: %.6f milliseconds\n", elapsed_time);

return 0;
}

Observations:
Write observation based on whether this algorithm returns optimal number of multiplications or
not on various inputs.
Yes, the above algorithm returns optimal number of multiplications on various inputs

Result

Conclusion: The above algorithm returns the optimal number of multiplications for various
inputs, and its time complexity is O(n^3).

Quiz:
1. What is the time complexity of above algorithm?
Answer: The time complexity of the above algorithm is O(n^3).

2. Does above algorithm always return optimal answer?


Answer: Yes, the provided algorithm for the matrix multiplication problem, implemented using
dynamic programming will always return the optimal answer.


Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms”by E. Horowitz et al.

References used by the students: "Introduction to Algorithms" by Thomas H. Cormen, Charles


E. Leiserson, Ronald L. Rivest, and Clifford Stein


Experiment No: 7
Implement program to solve LCS (Longest Common Subsequence) problem using dynamic
programing.

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving

Relevant CO: CO3

Objectives: (a) Understand Dynamic programming algorithm design method.


(b) Solve the optimization based problem.
(c) Find the time complexity of the algorithm.

Equipment/Instruments: Computer System, Any C language editor

Theory:

The Longest Common Subsequence (LCS) problem is a classic computer science problem that
involves finding the longest subsequence that is common to two given sequences.

A subsequence is a sequence that can be derived from another sequence by deleting some or no
elements without changing the order of the remaining elements. For example, given the sequence
"ABCDE", "ACE" is a subsequence of "ABCDE", but "AEC" is not a subsequence.

Given two sequences X and Y, the LCS problem involves finding the longest common
subsequence (LCS) of X and Y. The LCS need not be contiguous in the original sequences, but it
must be in the same order. For example, given the sequences "ABCDGH" and "AEDFHR", the
LCS is "ADH" with length 3.

Naïve Method:

Let X be a sequence of length m and Y a sequence of length n. Check every subsequence of X to
see whether it is a subsequence of Y, and return the longest common subsequence found. There
are 2^m subsequences of X. Testing whether a sequence is a subsequence of Y takes O(n) time.
Thus, the naïve algorithm would take O(n · 2^m) time.

longest common subsequence (LCS) using Dynamic Programming:


Let X = <x1, x2, x3, ..., xm> and Y = <y1, y2, y3, ..., yn> be the sequences. To compute the
length of the LCS, the following algorithm is used.
Step 1 − Construct an empty table of size (m + 1) × (n + 1), where m = size of sequence X and
n = size of sequence Y. The rows in the table represent the elements in sequence X and the
columns represent the elements in sequence Y.
Step 2 − The zeroth rows and columns must be filled with zeroes. And the remaining values are
filled in based on different cases, by maintaining a counter value.
● Case 1 − If the counter encounters common element in both X and Y sequences, increment
the counter by 1.
● Case 2 − If the counter does not encounter common elements in X and Y sequences at T[i,
j], find the maximum value between T[i-1, j] and T[i, j-1] to fill it in T[i, j].


Step 3 − Once the table is filled, backtrack from the last value in the table. Backtracking here is
done by tracing the path where the counter incremented first.
Step 4 − The longest common subsequence is obtained by noting the elements on the traced path.
Consider an example: we have two strings X = BDCB and Y = BACDB, and we want to find their
longest common subsequence. The following table shows the construction of the LCS table.

Once the values are filled, the path is traced back from the last value in the table at T[5, 4].
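The LCS table itself is not reproduced in this copy. Assuming rows indexed by X = BDCB and
columns by Y = BACDB (one common orientation; the original text refers to the final cell as
T[5, 4]), the length table works out to:

              j=0   B   A   C   D   B
      i=0       0   0   0   0   0   0
      B         0   1   1   1   1   1
      D         0   1   1   1   2   2
      C         0   1   1   2   2   2
      B         0   1   1   2   2   3

The bottom-right entry gives the LCS length 3; tracing the path back yields, for example, BCB
(BDB is another valid LCS of the same length).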

Algorithm is as below:

Algorithm: LCS-Length-Table-Formulation (X, Y)

m := length(X)
n := length(Y)
for i = 1 to m do
    C[i, 0] := 0
for j = 1 to n do
    C[0, j] := 0
for i = 1 to m do
    for j = 1 to n do
        if xi = yj
            C[i, j] := C[i - 1, j - 1] + 1
            B[i, j] := ‘D’
        else if C[i - 1, j] ≥ C[i, j - 1]
            C[i, j] := C[i - 1, j]
            B[i, j] := ‘U’
        else
            C[i, j] := C[i, j - 1]
            B[i, j] := ‘L’
return C and B

Algorithm: Print-LCS (B, X, i, j)


if i = 0 or j = 0
return
if B[i, j] = ‘D’
Print-LCS(B, X, i-1, j-1)
Print(xi)
else if B[i, j] = ‘U’
Print-LCS(B, X, i-1, j)
else
Print-LCS(B, X, i, j-1)

Implement program to solve LCS (Longest Common Subsequence) problem using dynamic
programing.

Code:
#include <stdio.h>
#include <string.h>
#include <time.h>

int max(int a, int b) {


return (a > b) ? a : b;
}

void LCS(char* str1, char* str2) {


int m = strlen(str1);
int n = strlen(str2);
int dp[m + 1][n + 1];

for (int i = 0; i <= m; i++) {


for (int j = 0; j <= n; j++) {
if (i == 0 || j == 0) {
dp[i][j] = 0;
} else if (str1[i - 1] == str2[j - 1]) {
dp[i][j] = dp[i - 1][j - 1] + 1;
} else {
dp[i][j] = max(dp[i - 1][j], dp[i][j - 1]);
}
}
}

printf("The length of the LCS is %d\n", dp[m][n]);


int index = dp[m][n];


char lcs[index + 1];


lcs[index] = '\0';
int i = m, j = n;

while (i > 0 && j > 0) {


if (str1[i - 1] == str2[j - 1]) {
lcs[index - 1] = str1[i - 1];
i--;
j--;
index--;
} else if (dp[i - 1][j] > dp[i][j - 1]) {
i--;
} else {
j--;
}
}
printf("Longest Common Subsequence: %s\n", lcs);
}
int main() {
char str1[100], str2[100];

printf("Enter the first string: ");


scanf("%s", str1);

printf("Enter the second string: ");


scanf("%s", str2);

clock_t start_time = clock();


LCS(str1, str2);
clock_t end_time = clock();

double elapsed_time = ((double)(end_time - start_time)) / CLOCKS_PER_SEC * 1000.0; // convert to milliseconds
printf("Time taken by the program: %.6f milliseconds\n", elapsed_time);

return 0;
}

Observations:
Write observation based on whether this algorithm returns optimal answer or not on various
inputs.
Ans : This method generates optimal answer for various inputs.


Result

Conclusion:
The tabulation method to find LCS generates optimal answer for various types of inputs

Quiz:
1. What is the time complexity of above algorithm?
Answer: The time complexity of above algorithm is O(m*n)

2. Does above algorithm always return optimal answer?


Answer: Yes, above algorithm always return optimal answer

3. Does Dynamic programming approach to find LCS perform well compare to naïve approach?
Answer: Yes

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms”by E. Horowitz et al.

References used by the students:


● "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L.
Rivest, and Clifford Stein
● “Fundamentals of Algorithms”by E. Horowitz et al.


Experiment No: 8
Implement program to solve Knapsack problem using dynamic programming.

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving

Relevant CO: CO3

Objectives: (a) Understand Dynamic programming algorithm design method.


(b) Solve the optimization based problem.
(c) Find the time complexity of the algorithm.

Equipment/Instruments: Computer System, Any C language editor

Theory:

Knapsack problem is as stated below:


Given a set of items, each having different weight and value or profit associated with it. Find the
set of items such that the total weight is less than or equal to a capacity of the knapsack and the
total value earned is as large as possible.

The knapsack problem is useful in solving resource allocation problem. Let X = < x1, x2, x3, . . . . .
, xn> be the set of n items. Sets W = <w1, w2, w3, . . . , wn> and V = < v1, v2, v3, ....... , vn> are
weight and value associated with each item in X. Knapsack capacity is M unit.
The knapsack problem is to find the set of items which maximizes the profit such that collective
weight of selected items does not cross the knapsack capacity. Select items from X and fill the
knapsack such that it would maximize the profit.
The knapsack problem has two variations: the 0/1 (binary) knapsack, which does not allow breaking items, so an item is either added whole or rejected, and the fractional knapsack, which allows items to be broken, with profit earned proportionally.
Following are the steps to implement binary knapsack using dynamic programming.

Algorithm DP_BINARY_KNAPSACK (V, W, M)


// Description: Solve binary knapsack problem using dynamic programming
// Input: Set of items X, set of weight W, profit of items V and knapsack capacity M
// Output: Array V, which holds the solution of problem
for i ← 1 to n do
V[i, 0] ← 0
end
for i ← 1 to M do
V[0, i] ← 0
end
for i ← 1 to n do
for j ← 0 to M do
if w[i] ≤ j then
V[i, j] ← max{V[i-1, j], v[i] + V[i – 1, j – w[i]]}
else


V[i, j] ← V[i – 1, j] // w[i]>j


end
end
end

The above algorithm only tells us the maximum value that can be earned. It does not indicate which items should be selected. The items that give the optimal result can be recovered using the following algorithm.

Algorithm TRACE_KNAPSACK(w, v, M)
// w is array of weight of n items
// v is array of value of n items
// M is the knapsack capacity
SW ← { }
SP ← { }
i←n
j←M
while (i > 0 and j > 0) do
if (V[i, j] == V[i – 1, j]) then
i ← i – 1
else
j ← j – w[i]
SW ← SW + w[i]
SP ← SP + v[i]
i ← i – 1 // each item can be selected at most once
end
end


Implement program to solve Knapsack problem using dynamic programming.


Code:
#include <stdio.h>
#include <time.h>

int max(int a, int b) {


return a > b ? a : b;
}

int knap(int n, int W, int w[], int v[]) {


int k[n + 1][W + 1];

for (int i = 0; i <= n; i++) {


for (int j = 0; j <= W; j++) {
if (i == 0 || j == 0) {
k[i][j] = 0;
} else if (w[i - 1] <= j) {
k[i][j] = max(k[i - 1][j], v[i - 1] + k[i - 1][j - w[i - 1]]);
} else {
k[i][j] = k[i - 1][j];
}
}
}

int j = W;
int i = n;
while (i > 0 && j > 0) {
if (k[i][j] == k[i - 1][j]) {
i = i - 1;
} else {
printf("In the knapsack, we have one item of %d weight and value %d\n", w[i - 1], v[i - 1]);
j = j - w[i - 1];
i = i - 1; /* each item can be taken at most once in the 0/1 knapsack */
}
}

return k[n][W];
}

int main() {
int n = 5;
int w[] = {1, 2, 3, 4, 5};
int v[] = {5, 10, 17, 25, 30};
int W = 10;

clock_t start_time = clock();


int result = knap(n, W, w, v);
clock_t end_time = clock();

double elapsed_time = ((double)(end_time - start_time)) / CLOCKS_PER_SEC * 1000.0; // Convert to milliseconds

printf("The maximum value in the knapsack is %d\n", result);


printf("Time taken by the program: %.6f milliseconds\n", elapsed_time);

return 0;
}
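As a hand check of the instance hard-coded in main() above (weights {1, 2, 3, 4, 5}, values {5, 10, 17, 25, 30}, capacity 10): the optimal selection consists of the items of weight 5, 4 and 1, giving a total value of 30 + 25 + 5 = 60, so the program should report a maximum value of 60 and list those three items.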

Observations:
Write observation based on whether this algorithm returns optimal answer or not on various
inputs. The above algorithm solves the 0/1 knapsack problem, in which each item is either selected whole or rejected. Because the table considers every item/capacity combination, it returns the optimal (maximum) total value for all inputs.

Result

Conclusion:
The above dynamic programming algorithm gives the maximum value that can be carried in the knapsack and always returns an optimal selection. The fractional variant of the problem, in which items may be broken, can instead be solved efficiently with a greedy approach (see the next experiment).

Quiz:
1. What is the time complexity of above binary knapsack algorithm?
Answer: The time complexity of binary knapsack algorithm is O(N*W) where N denotes the
number of items available and W denotes the capacity of knapsack.

2. Does above algorithm always return optimal answer?


Answer: Yes, the dynamic programming algorithm always returns the optimal answer for the 0/1 knapsack problem.

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.

References used by the students: "Introduction to Algorithms" by Thomas H. Cormen, Charles


E. Leiserson, Ronald L. Rivest, and Clifford Stein


Experiment No: 9
Implement program for solution of fractional Knapsack problem using greedy design technique.

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving

Relevant CO: CO3

Objectives: (a) Understand greedy algorithm design method.


(b) Solve the optimization based problem.
(c) Find the time complexity of the algorithm.

Equipment/Instruments: Computer System, Any C language editor

Theory:
Knapsack problem is as stated below:
Given a set of items, each with a weight and an associated value (profit), find the subset of items whose total weight does not exceed the knapsack capacity and whose total value is as large as possible.
Brute-force approach: The brute-force approach tries all the possible solutions with all the
different fractions but it is a time-consuming approach.
Greedy approach: In Greedy approach, we calculate the ratio of profit/weight, and accordingly, we
will select the item. The item with the highest ratio would be selected first.
Following are the steps to implement fractional knapsack using greedy design strategy.

1. Compute the value-to-weight ratio for each item in the knapsack.


2. Sort the items in decreasing order of value-to-weight ratio.
3. Initialize the total weight and total value to 0.
4. For each item in the sorted list:
a. If the entire item can be added to the knapsack without exceeding the weight
capacity, add it and update the total weight and total value.
b. If the item cannot be added entirely, add a fraction of the item that fits into the
knapsack and update the total weight and total value accordingly.
c. If the knapsack is full, stop the algorithm.
5. Return the total value and the set of items in the knapsack.


Implement the program based on above logic for the solution of fractional knapsack
problem.
Code:
#include <stdio.h>
#include <stdlib.h>

// Structure to represent items


struct Item {
int weight;
int value;
double valueToWeightRatio;
};

// Function to compare items based on their value-to-weight ratio for sorting


int compareItems(const void *a, const void *b) {
    const struct Item *itemA = (const struct Item *)a;
    const struct Item *itemB = (const struct Item *)b;
    // Return -1/0/+1 explicitly: casting the double difference to int would
    // truncate differences smaller than 1 to 0 and break the sort order.
    if (itemB->valueToWeightRatio > itemA->valueToWeightRatio) return 1;
    if (itemB->valueToWeightRatio < itemA->valueToWeightRatio) return -1;
    return 0;
}

// Function to solve the fractional knapsack problem


void fractionalKnapsack(struct Item items[], int n, int capacity) {
// Calculate the value-to-weight ratio for each item
for (int i = 0; i < n; i++) {
items[i].valueToWeightRatio = (double)items[i].value / items[i].weight;
}

// Sort the items based on their value-to-weight ratio in decreasing order


qsort(items, n, sizeof(struct Item), compareItems);

int currentWeight = 0; // Current weight in the knapsack


double totalValue = 0.0; // Total value obtained

for (int i = 0; i < n; i++) {


// If adding the entire item doesn't exceed the capacity, add it
if (currentWeight + items[i].weight <= capacity) {
currentWeight += items[i].weight;
totalValue += items[i].value;
} else {
// Otherwise, add a fraction of the item
double fraction = (double)(capacity - currentWeight) / items[i].weight;
totalValue += items[i].value * fraction;
break; // Knapsack is full
}
}


printf("Total value obtained: %.2lf\n", totalValue);


}

int main() {
int n, capacity;
printf("Enter the number of items: ");
scanf("%d", &n);
struct Item items[n];

for (int i = 0; i < n; i++) {


printf("Enter weight and value for item %d: ", i + 1);
scanf("%d %d", &items[i].weight, &items[i].value);
}

printf("Enter the knapsack capacity: ");


scanf("%d", &capacity);

fractionalKnapsack(items, n, capacity);

return 0;
}
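A small worked example (an illustrative instance, not built into the program): for three items with (weight, value) = (10, 60), (20, 100), (30, 120) and capacity 50, the ratios are 6, 5 and 4; the greedy method takes the first two items whole (weight 30, value 160) and 2/3 of the third item (value 80), so the program should report a total value of 240.00, which is optimal.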

Observations:
The greedy fractional knapsack algorithm returns an optimal answer for every input: the function sorts the items by value-to-weight ratio itself, and because items may be taken fractionally, filling the knapsack in that order always maximizes the total value.

Result


Conclusion:
The greedy strategy is provably optimal for the fractional knapsack problem. Since the program sorts the items by value-to-weight ratio before filling the knapsack, it yields the maximum obtainable value for any input; the same greedy strategy would not be optimal for the 0/1 variant.

Quiz:
1. What is the time complexity of above knapsack algorithm?
Answer: The time complexity of the provided fractional knapsack algorithm is O(n log n),
primarily due to the sorting step, where 'n' represents the number of items.

2. Does above algorithm always return optimal answer?


Answer: Yes. The greedy algorithm always returns an optimal answer for the fractional knapsack problem; the items are sorted by value-to-weight ratio inside the function, so the order of the input does not matter.

3. What is the time complexity solving knapsack problem using brute-force method?
Answer: The time complexity for solving the knapsack problem using a brute-force method is
exponential, specifically O(2^n), where 'n' represents the number of items.

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.

References used by the students: “Fundamentals of Algorithms” by E. Horowitz et al.


Experiment No: 10
Implement program for solution of Making Change problem using greedy design technique.

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving

Relevant CO: CO3

Objectives: (a) Understand greedy algorithm design method.


(b) Solve the optimization based problem.
(c) Find the time complexity of the algorithm.

Equipment/Instruments: Computer System, Any C language editor

Theory:

Making Change problem is to find change for a given amount using a minimum number of coins
from a set of denominations. If we are given a set of denominations D = {d0, d1, d2, …, dn} and if
we want to change for some amount N, many combinations are possible. {d1, d2, d5, d8}, {d0, d2,
d4}, {d0, d5, d7} can be considered as all feasible solutions if sum of their denomination is N. The
aim of making a change is to find a solution with a minimum number of coins / denominations.
Following are the steps to solve coin change problem using greedy design technique

1. Initialize a list of coin denominations in descending order.


2. Initialize a list of coin counts, where each count is initially 0.
3. While the remaining amount is greater than 0:
a. For each coin denomination in the list:
i. If the denomination is less than or equal to the remaining amount, add one
coin to the count and subtract the denomination from the remaining
amount.
ii. If the denomination is greater than the remaining amount, move on to the
next denomination.
4. Return the list of coin counts.
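For example, with the denominations {25, 10, 5, 1} used in the program below and the amount 67, the steps above select 2 coins of 25, 1 of 10, 1 of 5 and 2 of 1, i.e. 6 coins in total, which is the minimum possible for this coin system.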


Implement the program based on the above steps for the solution of the Making Change problem using the greedy approach.

Code:

#include <stdio.h>

void makeChange(int denominations[], int n, int amount) {


int coinCounts[n];

for (int i = 0; i < n; i++) {


coinCounts[i] = 0; // Initialize coin counts to 0
}

printf("Selected denominations: ");

for (int i = 0; i < n; i++) {


while (amount >= denominations[i]) {
amount -= denominations[i];
coinCounts[i]++;
}

if (coinCounts[i] > 0) {
printf("%d x %d ", denominations[i], coinCounts[i]);
}
}

printf("\n");
}

int main() {
int denominations[] = {25, 10, 5, 1}; // Example denominations in descending order
int n = sizeof(denominations) / sizeof(denominations[0]);
int amount;

printf("Enter the amount for making change: ");


scanf("%d", &amount);

makeChange(denominations, n, amount);

return 0;
}

Observations:
The greedy algorithm for the coin change problem returns the minimum number of coins for canonical coin systems such as {25, 10, 5, 1}, which is used in the program above. For arbitrary denomination sets, however, the greedy choice can be suboptimal, and the implementation also assumes that the denominations are supplied in descending order; a counterexample is given below.
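A standard counterexample (these denominations are not the ones hard-coded in the program): with denominations {4, 3, 1} and amount 6, the greedy method picks 4 + 1 + 1, using 3 coins, whereas the optimal answer is 3 + 3, using only 2 coins.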

Result

Conclusion: The greedy algorithm for making change is simple and fast, and it returns the minimum number of coins for canonical coin systems such as {25, 10, 5, 1}. For arbitrary denomination sets it may return more coins than necessary, so a dynamic programming formulation is required when a guaranteed minimum is needed. Its suitability therefore depends on the characteristics of the denomination set.

Quiz:
1. What is the time complexity of the above Making Change algorithm?
Answer: The outer loop runs once per denomination and the inner loop runs once per coin returned, so the time complexity is O(n + A) in the worst case, where n is the number of denominations and A is the amount (the worst case arises when the change consists entirely of the smallest coin).

2. Does above algorithm always return optimal answer?


Answer: No. The algorithm assumes the denominations are given in descending order, and even then the greedy choice is only guaranteed to be optimal for canonical coin systems; for arbitrary denominations it can return more coins than the minimum.

3. What are some variations of the Making Change problem?


Answer: Some variations of the Making Change problem include the Coin Change problem,
where the goal is to find the total number of ways to make change for a given amount, and the
Minimum Coin Change problem, which aims to find the minimum number of coins needed to
make change.

4. What is the difference between the unbounded coin change problem and the limited coin
change problem?
Answer:The difference between the unbounded coin change problem and the limited coin change
problem is that in the unbounded version, you can use an unlimited number of each coin
denomination to make change, while in the limited version, you are constrained by a finite supply
of each coin denomination.

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.

References used by the students: “Fundamentals of Algorithms” by E. Horowitz et al.


Experiment No: 11

Implement program for Kruskal's algorithm to find minimum spanning tree.

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving

Relevant CO: CO3, CO6

Objectives: (a) Understand how to use Kruskal's algorithm to find the minimum spanning tree.
(b) Solve the optimization based problem.
(c) Find the time complexity of the algorithm.

Equipment/Instruments: Computer System, Any C language editor

Theory:

In graph theory, a minimum spanning tree (MST) of an undirected, weighted graph is a tree that
connects all the vertices of the graph with the minimum possible total edge weight. In other
words, an MST is a subset of the edges of the graph that form a tree and have the smallest sum of
weights.

1. Sort all the edges in non-decreasing order of their weight.


2. Initialize an empty set of edges for the minimum spanning tree.
3. For each edge in the sorted order, add the edge to the minimum spanning tree if it does not
create a cycle in the tree. To check if adding the edge creates a cycle, you can use the
Union-Find algorithm or a similar method to keep track of the connected components of the
graph.
4. Continue adding edges until there are V-1 edges in the minimum spanning tree, where V is the
number of vertices in the graph.
5. Return the set of edges in the minimum spanning tree.
6. Implement the program based on the above steps to find the minimum spanning tree.

Kruskal’s Algorithm is as follow:


Implement program for Kruskal's algorithm to find minimum spanning tree.

#include <stdio.h>
#include <stdlib.h>

// Define a structure to represent an edge


struct Edge {
int src, dest, weight;
};

// Define a structure to represent a subset for the Union-Find algorithm


struct Subset {
int parent, rank;
};

// Function to find the set of an element 'i'


int find(struct Subset subsets[], int i) {
if (subsets[i].parent != i)
subsets[i].parent = find(subsets, subsets[i].parent);
return subsets[i].parent;
}

// Function to perform union of two sets


void unionSets(struct Subset subsets[], int x, int y) {
int xroot = find(subsets, x);
int yroot = find(subsets, y);

if (subsets[xroot].rank < subsets[yroot].rank)


subsets[xroot].parent = yroot;
else if (subsets[xroot].rank > subsets[yroot].rank)
subsets[yroot].parent = xroot;
else {
subsets[yroot].parent = xroot;
subsets[xroot].rank++;
}
}

// Comparator function to sort edges based on their weights


int compareEdges(const void *a, const void *b) {
return ((struct Edge*)a)->weight - ((struct Edge*)b)->weight;
}

// Kruskal's algorithm to find Minimum Spanning Tree


void kruskalMST(struct Edge edges[], int V, int E) {
// Initialize an array of subsets for Union-Find
struct Subset* subsets = (struct Subset*)malloc(V * sizeof(struct Subset));

// Initialize subsets
for (int i = 0; i < V; i++) {
subsets[i].parent = i;
subsets[i].rank = 0;
}


// Sort edges in non-decreasing order of their weight


qsort(edges, E, sizeof(struct Edge), compareEdges);

struct Edge result[V];


int e = 0; // An index variable to keep track of the edges added to the MST
int i = 0; // An index variable to iterate through sorted edges

// Build the MST


while (e < V - 1 && i < E) {
struct Edge next_edge = edges[i++];
int x = find(subsets, next_edge.src);
int y = find(subsets, next_edge.dest);

// If including this edge does not create a cycle, add it to the result and increment the index
if (x != y) {
result[e++] = next_edge;
unionSets(subsets, x, y);
}
}

// Print the edges of the MST


printf("Edges in the Minimum Spanning Tree:\n");
for (i = 0; i < e; i++) {
printf("%d -- %d (weight %d)\n", result[i].src, result[i].dest, result[i].weight);
}

free(subsets);
}

int main() {
int V = 4; // Number of vertices
int E = 5; // Number of edges
struct Edge edges[] = {
{0, 1, 10},
{0, 2, 6},
{0, 3, 5},
{1, 3, 15},
{2, 3, 4}
};

kruskalMST(edges, V, E);

return 0;
}
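As a hand trace of the graph hard-coded in main() above: sorting the edges gives (2,3,4), (0,3,5), (0,2,6), (0,1,10), (1,3,15). The algorithm accepts 2-3 (weight 4) and 0-3 (weight 5), rejects 0-2 (weight 6) because it would close a cycle, and then accepts 0-1 (weight 10). The program should therefore print these three edges, forming an MST of total weight 19.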

Observations:
Kruskal's algorithm consistently returns a minimum spanning tree of a connected weighted graph. Negative edge weights are handled correctly, since the algorithm only compares weights. When several edges have equal weight the MST need not be unique, so different tie-breaking orders can produce different spanning trees, but each of them has the same minimum total weight.


Result

Conclusion:
Kruskal's algorithm reliably returns a minimum spanning tree for any connected, weighted, undirected graph, including graphs with negative edge weights, because only the relative order of the edge weights matters. When edge weights are tied, several minimum spanning trees may exist and the algorithm may return any one of them, but the total weight is always minimal. For a disconnected graph the algorithm produces a minimum spanning forest instead of a single tree.

Quiz:
1. What is the time complexity of Kruskal’s algorithm?
Answer: The time complexity of Kruskal's algorithm is O(E log E), where 'E' represents the
number of edges in the graph.

2. Does above Kruskal’s algorithm always return optimal answer?


Answer: Yes. Kruskal's algorithm always returns a minimum spanning tree, even when edge weights are negative. When edge weights are tied, the particular tree returned may vary with the tie-breaking order, but its total weight is still optimal.

3. What data structure is typically used to keep track of the connected components in Kruskal's
algorithm?
Answer: A disjoint-set data structure (typically implemented as a union-find data structure) is
used to keep track of the connected components in Kruskal's algorithm.

4. When does Kruskal's algorithm stop adding edges to the minimum spanning tree?
Answer: Kruskal's algorithm stops adding edges to the minimum spanning tree when the number
of edges added reaches (V - 1), where 'V' is the number of vertices in the graph, ensuring the
formation of a spanning tree.

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.

References used by the students: "Introduction to Algorithms" by Thomas H. Cormen, Charles


E. Leiserson, Ronald L. Rivest, and Clifford Stein


Experiment No: 12
Implement a program for Prim's algorithm to find a minimum spanning tree.

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving

Relevant CO: CO3, CO6

Objectives: (a) Understand how to use Prim's algorithm to find the minimum spanning tree.
(b) Solve the optimization based problem.
(c) Find the time complexity of the algorithm.

Equipment/Instruments: Computer System, Any C language editor

Theory:

In graph theory, a minimum spanning tree (MST) of an undirected, weighted graph is a tree that
connects all the vertices of the graph with the minimum possible total edge weight. In other
words, an MST is a subset of the edges of the graph that form a tree and have the smallest sum of
weights.

Prim’s Algorithm is as follow:


Implement program for Prim's algorithm to find minimum spanning tree.

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define INF 9999999

int minKey(int key[], bool mstSet[], int V) {


int min = INF;
int min_index = -1; // will be assigned for every connected graph

for (int v = 0; v < V; v++) {


if (!mstSet[v] && key[v] < min) {
min = key[v];
min_index = v;
}
}

return min_index;
}

void printMST(int V, int parent[], int graph[V][V]) { // V must precede the VLA parameter so it can size graph


printf("Edge \tWeight\n");
for (int i = 1; i < V; i++) {
printf("%d - %d \t%d\n", parent[i], i, graph[i][parent[i]]);
}
}

void primMST(int V, int graph[V][V]) {


int parent[V];
int key[V];
bool mstSet[V];

for (int i = 0; i < V; i++) {


key[i] = INF;
mstSet[i] = false;
}

key[0] = 0;
parent[0] = -1;

for (int count = 0; count < V - 1; count++) {


int u = minKey(key, mstSet, V);
mstSet[u] = true;

for (int v = 0; v < V; v++) {


if (graph[u][v] && !mstSet[v] && graph[u][v] < key[v]) {
parent[v] = u;
key[v] = graph[u][v];
}
}
}


printMST(V, parent, graph);


}

int main() {
int V = 5; // Number of vertices
int graph[5][5] = { // fixed size: a variable-length array cannot be initialised
{0, 2, 0, 6, 0},
{2, 0, 3, 8, 5},
{0, 3, 0, 0, 7},
{6, 8, 0, 0, 9},
{0, 5, 7, 9, 0}
};

primMST(V, graph);

return 0;
}
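Tracing the 5-vertex graph hard-coded in main() above, starting from vertex 0: the algorithm picks edges 0-1 (weight 2), 1-2 (weight 3), 1-4 (weight 5) and 0-3 (weight 6), so printMST should list these four edges, giving a minimum spanning tree of total weight 16.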

Observations:
The provided Prim's algorithm consistently returns the minimum spanning tree (MST) when
applied to various inputs with non-negative edge weights.

Result

Conclusion: Prim's algorithm consistently returns a minimum spanning tree of a connected weighted graph, constructing it by repeatedly adding the cheapest edge that connects a new vertex to the tree. Unlike shortest-path algorithms, it does not require non-negative edge weights; the only limitation of this particular implementation is that a 0 entry in the adjacency matrix is interpreted as the absence of an edge.

Quiz:

1. What is the time complexity of Prim’s algorithm?


Answer: The time complexity of Prim's algorithm is O(V^2), where 'V' is the number of vertices.
In its optimized form using a binary heap, the time complexity is O(E + V log V), where 'E' is the
number of edges.


2. Does above Prim’s algorithm always return optimal answer?


Answer: Prim's algorithm consistently returns the optimal answer when applied to graphs with
non-negative edge weights.

3. When does Prim's algorithm stop adding edges to the minimum spanning tree?
Answer: Prim's algorithm stops adding edges to the minimum spanning tree when all vertices are
included in the tree.

4. What data structure is typically used to keep track of the vertices in Prim's algorithm?
Answer: In Prim's algorithm, a data structure typically used to keep track of the vertices is a
priority queue or a binary heap.

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.

References used by the students: “Fundamentals of Algorithms” by E. Horowitz et al.


Experiment No: 13
Implement DFS and BFS graph traversal techniques and write its time complexities.

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving

Relevant CO: CO6

Objectives: (a) Understand Graph traversal techniques.


(b) Visit all nodes of the graph.
(c) Find the time complexity of the algorithm.

Equipment/Instruments: Computer System, Any C language editor

Theory:

Depth First Search is a graph traversal algorithm that explores as far as possible along each branch
before backtracking. It is used to search for a node or a path in a graph, and is implemented
recursively or iteratively.
The algorithm starts at a specified node and visits all the nodes in the graph by exploring each
branch as far as possible before backtracking to the previous node. When a node is visited, it is
marked as visited to prevent loops.

Consider the following steps for the implementation of DFS algorithm:


1. Create an empty stack and push the starting node onto it.
2. Mark the starting node as visited.
3. While the stack is not empty, pop a node from the stack and mark it as visited.
4. For each adjacent node to the current node, if the adjacent node has not been visited, mark it
as visited and push it onto the stack.
5. After processing all the adjacent nodes, you can do something with the current node, such as
printing it or storing it.
6. Repeat steps 3 to 5 until the stack is empty.

Consider the following steps for the implementation of BFS algorithm:

1. Create a queue Q and a set visited.


2. Add the starting node to the queue Q and mark it as visited.
3. While the queue is not empty:
a. Dequeue a node from the queue Q and process it.
b. For each adjacent node of the dequeued node:
i. If the adjacent node has not been visited, mark it as visited and enqueue it into
the queue Q.


Implement DFS and BFS graph traversal techniques and write its time complexities.

BFS graph traversal


#include <stdio.h>
#include <stdlib.h>
#include <time.h>

struct node {
int vertex;
struct node* next;
};

struct node* createNode(int v);

struct Graph {
int numVertices;
int* visited;
struct node** adjLists;
};

// BFS algorithm with execution time measurement

void BFS(struct Graph* graph, int startVertex) {
    // Simple array-based queue; each vertex is enqueued at most once,
    // so numVertices slots are always enough
    int* queue = malloc(graph->numVertices * sizeof(int));
    int front = 0, rear = 0;

    graph->visited[startVertex] = 1;
    queue[rear++] = startVertex;

    while (front < rear) {
        int currentVertex = queue[front++];
        printf("Visited %d\n", currentVertex);

        // Enqueue every unvisited neighbour of the current vertex
        struct node* temp = graph->adjLists[currentVertex];
        while (temp != NULL) {
            int adjVertex = temp->vertex;
            if (graph->visited[adjVertex] == 0) {
                graph->visited[adjVertex] = 1;
                queue[rear++] = adjVertex;
            }
            temp = temp->next;
        }
    }
    free(queue);
}

// Create a node
struct node* createNode(int v) {
struct node* newNode = malloc(sizeof(struct node));
newNode->vertex = v;
newNode->next = NULL;
return newNode;
}

// Create a graph
struct Graph* createGraph(int vertices) {
struct Graph* graph = malloc(sizeof(struct Graph));
graph->numVertices = vertices;

graph->adjLists = malloc(vertices * sizeof(struct node*));


graph->visited = malloc(vertices * sizeof(int));


int i;
for (i = 0; i < vertices; i++) {
graph->adjLists[i] = NULL;
graph->visited[i] = 0;
}
return graph;
}

// Add an edge
void addEdge(struct Graph* graph, int src, int dest) {
// Add an edge from src to dest
struct node* newNode = createNode(dest);
newNode->next = graph->adjLists[src];
graph->adjLists[src] = newNode;

// Add an edge from dest to src


newNode = createNode(src);
newNode->next = graph->adjLists[dest];
graph->adjLists[dest] = newNode;
}

// Print the graph


void printGraph(struct Graph* graph) {
int v;
for (v = 0; v < graph->numVertices; v++) {
struct node* temp = graph->adjLists[v];
printf("\nAdjacency list of vertex %d: ", v);
while (temp) {
printf("%d -> ", temp->vertex);
temp = temp->next;
}
printf("NULL\n");
}
}

int main() {
struct Graph* graph = createGraph(4);
addEdge(graph, 0, 1);
addEdge(graph, 0, 2);
addEdge(graph, 1, 2);
addEdge(graph, 2, 3);

printGraph(graph);

clock_t start, end;


double cpu_time_used;

start = clock();
BFS(graph, 2);
end = clock();

cpu_time_used = ((double)(end - start)) / CLOCKS_PER_SEC * 1000.0;


printf("DFS Execution Time: %.2f ms\n", cpu_time_used);

return 0;
}

DFS graph traversal

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

struct node {
int vertex;
struct node* next;
};

struct node* createNode(int v);

struct Graph {
int numVertices;
int* visited;
struct node** adjLists;
};

// DFS algorithm with execution time measurement


void DFS(struct Graph* graph, int vertex) {
struct node* adjList = graph->adjLists[vertex];
struct node* temp = adjList;

graph->visited[vertex] = 1;
printf("Visited %d\n", vertex);

while (temp != NULL) {


int connectedVertex = temp->vertex;

if (graph->visited[connectedVertex] == 0) {
DFS(graph, connectedVertex);
}
temp = temp->next;
}
}

// Create a node
struct node* createNode(int v) {
struct node* newNode = malloc(sizeof(struct node));
newNode->vertex = v;
newNode->next = NULL;
return newNode;
}

// Create a graph


struct Graph* createGraph(int vertices) {


struct Graph* graph = malloc(sizeof(struct Graph));
graph->numVertices = vertices;

graph->adjLists = malloc(vertices * sizeof(struct node*));


graph->visited = malloc(vertices * sizeof(int));

int i;
for (i = 0; i < vertices; i++) {
graph->adjLists[i] = NULL;
graph->visited[i] = 0;
}
return graph;
}

// Add an edge
void addEdge(struct Graph* graph, int src, int dest) {
// Add an edge from src to dest
struct node* newNode = createNode(dest);
newNode->next = graph->adjLists[src];
graph->adjLists[src] = newNode;

// Add an edge from dest to src


newNode = createNode(src);
newNode->next = graph->adjLists[dest];
graph->adjLists[dest] = newNode;
}

// Print the graph


void printGraph(struct Graph* graph) {
int v;
for (v = 0; v < graph->numVertices; v++) {
struct node* temp = graph->adjLists[v];
printf("\nAdjacency list of vertex %d: ", v);
while (temp) {
printf("%d -> ", temp->vertex);
temp = temp->next;
}
printf("NULL\n");
}
}

int main() {
struct Graph* graph = createGraph(4);
addEdge(graph, 0, 1);
addEdge(graph, 0, 2);
addEdge(graph, 1, 2);
addEdge(graph, 2, 3);

printGraph(graph);

clock_t start, end;


double cpu_time_used;


start = clock();
DFS(graph, 2);
end = clock();

cpu_time_used = ((double)(end - start)) / CLOCKS_PER_SEC * 1000.0;


printf("DFS Execution Time: %.2f ms\n", cpu_time_used);

return 0;
}

Observations:
In BFS, the algorithm traverses the node closest to the starting node first, while in DFS, it
explores as deeply as possible along each branch before backtracking.

Result
BFS

DFS


Conclusion: DFS and BFS are essential graph traversal algorithms, with BFS exploring level by
level for shortest paths and DFS diving deep before backtracking. Both have linear time
complexities, making them versatile for graph-related tasks. The choice depends on the problem's
nature.

Quiz:

1. What data structure is typically used in the iterative implementation of DFS and BFS?
Answer: In the iterative implementation of DFS and BFS, a stack is typically used for DFS, and a
queue is used for BFS.

2. What is the time complexity of DFS on a graph with V vertices and E edges?
Answer: The time complexity of DFS on a graph with V vertices and E edges is O(V + E), where
'V' represents the number of vertices, and 'E' is the number of edges.

3. What is the time complexity of BFS on a graph with V vertices and E edges?
Answer: The time complexity of BFS on a graph with V vertices and E edges is also O(V + E).

4. In which order are nodes visited in a typical implementation of BFS?


Answer: In a typical implementation of BFS, nodes are visited in a breadth-first order, meaning
nodes at the same level of the graph are visited before moving to the next level.

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.

References used by the students: “Fundamentals of Algorithms” by E. Horowitz et al.


Experiment No: 14
Implement Rabin-Karp string matching algorithm.

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving

Relevant CO: CO4

Objectives: (a) Find all occurrences of a pattern in a given text.


(b) Improve the performance of the brute force algorithm.
(c) Find a pattern in a given text with less time complexity in the average case.

Equipment/Instruments: Computer System, Any C language editor

Theory:
It is a string searching algorithm named after its authors Richard M. Karp and Michael O. Rabin. This algorithm is used to find all the occurrences of a given pattern ‘P’ in a given string ‘S’ in O(Ns + Np) time in the average case, where ‘Ns’ and ‘Np’ are the lengths of ‘S’ and ‘P’, respectively.

Let’s take an example to make it more clear.


Assume the given string S = “cxyzghxyzvjkxyz” and pattern P = “xyz” and we have to find all the
occurrences of ‘P’ in ‘S’.

We can see that “xyz” is occurring in “cxyzghxyzvjkxyz” at three positions. So, we have to print
that pattern ‘P’ is occurring in string ‘S’ at indices 1, 6, and 12.

The naive pattern searching (brute force) algorithm slides the pattern over the text one position at a time and checks for a match at each position; after a match it again slides by 1 to look for subsequent matches. This approach has a time complexity of O(Np * (Ns - Np + 1)).

The Rabin-Karp algorithm starts by computing, at each index of the text, the hash value of the
string starting at that particular index with the same length as the pattern. If the hash value of that
equals to the hash value of the given pattern, then it does a full match at that particular index.

Rabin Karp algorithm first computes the hash value of pattern P and first Np characters from S. If
hash values are same, we check the equality of actual strings. If the pattern is found, then it is
called hit. Otherwise, it is called a spurious hit. If hash values are not same, no need to compare
actual strings.

Steps of Rabin-Karp algorithm are as below:

1. Calculate the hash value of the pattern: The hash value of the pattern is calculated using a hash
function, which takes the pattern as input and produces a hash value as output.


2. Calculate the hash values of all the possible substrings of the same length in the text: The hash
values of all the possible substrings of the same length as the pattern are calculated using the
same hash function.
3. Compare the hash value of the pattern with the hash values of all the possible substrings: If a
match is found, the algorithm checks the characters of the pattern and the substring to verify
that they are indeed equal.
4. Move on to the next possible substring: If the characters do not match, the algorithm moves on
to the next possible substring and repeats the process until all possible substrings have been
compared.
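A small worked example of the hashing used in the code below (base 256, modulus 101; the ASCII codes of a, b, c are 97, 98, 99): for the two-character window "ab", the hash is (97*256 + 98) mod 101 = 84. Sliding the window to "bc" removes 'a' and appends 'c': with h = 256^(2-1) mod 101 = 54, the new hash is (256*(84 + 101 - (97*54) mod 101) + 99) mod 101 = 38, which agrees with the directly computed hash of "bc", (98*256 + 99) mod 101 = 38.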

Implement the Rabin-Karp algorithm based on above steps and give different input text and
pattern to check its correctness. Also, find the time complexity of your implemented
algorithm.

#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <time.h>
// Function to calculate the hash value of a string
unsigned long long calculateHash(char *str, int len) {
unsigned long long hash = 0;
for (int i = 0; i < len; i++) {
hash = (hash * 256 + str[i]) % 101; // Use a prime number as modulo for better distribution
}
return hash;
}
// Function to implement the Rabin-Karp algorithm
void rabinKarp(char *text, char *pattern) {
int n = strlen(text);
int m = strlen(pattern);
int prime = 101; // Prime number for hashing (must match the modulus used in calculateHash)
unsigned long long patternHash = calculateHash(pattern, m);
unsigned long long textHash = calculateHash(text, m);
// Precompute 256^(m-1) mod prime; it is needed to remove the leading character in the rolling hash
unsigned long long h = 1;
for (int i = 0; i < m - 1; i++)
    h = (h * 256) % prime;
for (int i = 0; i <= n - m; i++) {
if (patternHash == textHash) {
// If hash matches, check character by character
int j;
for (j = 0; j < m; j++) {
if (text[i + j] != pattern[j])
break;
}
if (j == m) {
// Pattern found at position i
printf("Pattern found at index %d\n", i);
}
}
if (i < n - m) {
    // Rolling hash: remove the leading character, shift, and add the next character (kept mod prime)
    textHash = (256 * (textHash + prime - (text[i] * h) % prime) + text[i + m]) % prime;
}


}
}
int main() {
char text1[] = "abracadabra";
char pattern1[] = "abra";
char text2[] = "hello, this is a test text for Rabin-Karp algorithm";
char pattern2[] = "test";
char text3[] = "algorithm";
char pattern3[] = "algo";
clock_t start, end;
double cpu_time_used;
start = clock();
rabinKarp(text1, pattern1);
end = clock();
cpu_time_used = ((double)(end - start)) / CLOCKS_PER_SEC * 1000.0;
printf("Time taken for Pattern 1: %.2f ms\n", cpu_time_used);
start = clock();
rabinKarp(text2, pattern2);
end = clock();
cpu_time_used = ((double)(end - start)) / CLOCKS_PER_SEC * 1000.0;
printf("Time taken for Pattern 2: %.2f ms\n", cpu_time_used);
start = clock();
rabinKarp(text3, pattern3);
end = clock();
cpu_time_used = ((double)(end - start)) / CLOCKS_PER_SEC * 1000.0;
printf("Time taken for Pattern 3: %.2f ms\n", cpu_time_used);
return 0;
}
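For reference, the expected matches for the three hard-coded test cases in main() are: "abra" at indices 0 and 7 of "abracadabra", "test" at index 17 of the second text, and "algo" at index 0 of "algorithm".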

Observations:
The Rabin-Karp algorithm can effectively find a pattern in a given text. Its time complexity in the
worst case is O((n - m + 1) * m), where 'n' is the length of the text, and 'm' is the length of the
pattern. In the average case, it has a better time complexity due to its hashing approach.

Result


Conclusion: In conclusion, the Rabin-Karp algorithm is a practical method for pattern matching
within text. It efficiently computes hash values for substrings and significantly improves search
times. Its worst-case time complexity is dependent on the text and pattern lengths, making it a
valuable tool for text searching applications.

Quiz:
1. What is the Rabin-Karp algorithm used for?
Answer: The Rabin-Karp algorithm is used for pattern matching within text.

2. What is the time complexity of the Rabin-Karp algorithm in the average case?
Answer: The time complexity of the Rabin-Karp algorithm in the average case is O(n + m),
where 'n' is the length of the text, and 'm' is the length of the pattern.

3. What is the main advantage of the Rabin-Karp algorithm over the brute force algorithm for
string matching?
Answer: The main advantage of the Rabin-Karp algorithm over the brute force algorithm for
string matching is its ability to efficiently match patterns using hash values, resulting in faster
search times.

4. What is the worst-case time complexity of the Rabin-Karp algorithm?


Answer: The worst-case time complexity of the Rabin-Karp algorithm is O((n - m + 1) * m),
where 'n' is the length of the text, and 'm' is the length of the pattern.

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. “Fundamentals of Algorithms” by E. Horowitz et al.

References used by the students: "Introduction to Algorithms" by Thomas H. Cormen, Charles


E. Leiserson, Ronald L. Rivest, and Clifford Stein
