PCV Lab Codes


LAB-1

PRE-LAB:
1. What is computer vision?
2. Give some industrial applications of computer vision?
3. Why should we deal with images in computer vision?
4. What is Bayer color filter array?
IN LAB:
1. Write a program that loads an image in unchanged, color, and
grayscale modes and displays the images until you press 'a', using OpenCV.
Also write code to save the three images using OpenCV.
2. Harry is applying to university, and in the process he needs to
upload three versions of a certificate, resized under the conditions
given below. So, help Harry to do his task.
● Resize to specific width (450) and height (550)
● Resize only height (450)
● Downscale with resize()
3. Implement code to blend two images, taking 60% of image1
and 40% of image2.
POST LAB:
1. Danil wants to create an interactive color-filter chatbot that
performs the following operations and then asks him whether or not
to save the produced image.
1 Hue
2 Saturation
3 HSV Image
4 Value
5 Green Channel
6 Doubled image
IN LAB ANSWERS:
1)
import cv2

img = cv2.imread("lena.jpg", 0)    # grayscale
img1 = cv2.imread("lena.jpg", -1)  # unchanged
img2 = cv2.imread("lena.jpg", 1)   # color
print("press 'a' to close the windows")
cv2.imshow("Hello World1", img)
cv2.imshow("Hello World2", img1)
cv2.imshow("Hello World3", img2)
# keep the windows open until 'a' is pressed
while cv2.waitKey(0) & 0xFF != ord('a'):
    pass
cv2.destroyAllWindows()

a = input("enter a base name for saving the images: ")

cv2.imwrite(a + "1" + ".jpg", img)
cv2.imwrite(a + "2" + ".png", img1)
cv2.imwrite(a + "3" + ".png", img2)

2.1)
import cv2

img = cv2.imread('lena.jpg')
width = 450   # the task asks for width 450
height = 550  # and height 550
dim = (width, height)
resized = cv2.resize(img, dim, interpolation=cv2.INTER_AREA)
cv2.imshow("Resized image", resized)
cv2.waitKey(0)
cv2.destroyAllWindows()

2.2)
import cv2

img = cv2.imread('lena.jpg')
width = img.shape[1]  # keep original width
height = 450
dim = (width, height)
resized = cv2.resize(img, dim, interpolation=cv2.INTER_AREA)
cv2.imshow("Resized image", resized)
cv2.waitKey(0)
cv2.destroyAllWindows()

2.3)
import cv2

img = cv2.imread('lena.jpg')
scale_percent = 60  # downscale to 60% of the original size
width = int(img.shape[1] * scale_percent / 100)
height = int(img.shape[0] * scale_percent / 100)
dim = (width, height)
resized = cv2.resize(img, dim, interpolation=cv2.INTER_AREA)
cv2.imshow("Resized image", resized)
cv2.waitKey(0)
cv2.destroyAllWindows()

3)
import cv2

# the two images must have the same size for addWeighted
img1 = cv2.imread('ml.png')
img2 = cv2.imread('opencv_logo.jpg')
dst = cv2.addWeighted(img1, 0.6, img2, 0.4, 0)
cv2.imshow('dst', dst)
cv2.waitKey(0)
cv2.destroyAllWindows()

POST LAB:
1)
# Color filter, color space
# Hue: 0 - 180, Saturation: 0 - 255, Value: 0 - 255
import cv2

img = cv2.imread('lena.jpg')
B, G, R = cv2.split(img)
img_HSV = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)  # was COLOR_BGR2RGB, which is not HSV
print("1", "Hue")
print("2", "Saturation")
print("3", "HSV Image")
print("4", "Value")
print("5", "Green Channel")
print("6", "Doubled image")
while True:
    b = int(input('enter the value: '))
    if b == 1:
        name, res = 'Hue Channel', img_HSV[:, :, 0]
    elif b == 2:
        name, res = 'Saturation', img_HSV[:, :, 1]
    elif b == 3:
        name, res = 'HSV Image', img_HSV
    elif b == 4:
        name, res = 'Value', img_HSV[:, :, 2]
    elif b == 5:
        name, res = 'Green Channel', G
    elif b == 6:
        name, res = 'Doubled image', cv2.pyrUp(img)  # pyrUp doubles width and height
    else:
        print("invalid choice")
        continue
    cv2.imshow(name, res)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
    # the task asks whether to save the produced image
    if input("Do you want to save the produced image? (y/n) ") in ('y', 'Y'):
        cv2.imwrite(name.replace(' ', '_') + '.jpg', res)
    if input("Do you want to continue? (y/n) ") in ('n', 'N'):
        break

**********************************************************
Alternate answer for In-Lab 3 (blending):
import cv2
img1 = cv2.imread('C:\\Users\\SATISH VARMA\\Desktop\\m1.jpg')
img2 = cv2.imread('C:\\Users\\SATISH VARMA\\Desktop\\m2.jpg')
dst = cv2.addWeighted(img1, 0.6, img2, 0.4, 0)  # 60% of image1, 40% of image2
cv2.imshow('dst', dst)
cv2.waitKey(0)
cv2.destroyAllWindows()
LAB-2
Color Image Processing Using OpenCV

PRE-LAB:
1. How is an RGB image represented? How many channels are
there in an RGB image? What are the ranges of pixel
intensities for each of these channels?
2. What are color spaces?
3. Describe the following color-spaces.
i) YCrCb Color Space
ii) HSV color space
iii) LAB color space
iv) CMYK color space
v) BGR Color Space
vi) Edge map of image
vii) Heat map of image
viii) Spectral Image map

IN-LAB:
Perform the following operations programmatically using OpenCV
and Python:

1. Read an input color (RGB) image


2. Convert the read image to YCrCb color space
3. Convert the read image to HSV color space
4. Convert the read image to LAB color space
5. Compute the edge map of the read image using Laplacian
6. Compute the heat map of the read image
7. Compute the spectral image map of the read image
POST LAB:
1. Compute the image mask of the input (read) image you used in
the In-Lab.
2. Now superimpose the color image on top of the mask image
and display the result
3. Plot the resulting histograms of the original color input image,
the computed mask image and the superimposed (mask+color)
images.
In-LAB Solutions:
1)
# Python program to read an image and display it as RGB
import cv2
import matplotlib.pyplot as plt

# cv2.imread loads the image in BGR channel order
img = cv2.imread('g4g.png')

# convert to RGB so matplotlib displays the colors correctly
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# shows the image
plt.imshow(img)
plt.show()

2)
# Python program to read an image as YCrCb color space

# Import cv2 module
import cv2

# Reads the image
img = cv2.imread('g4g.png')

# Convert to YCrCb color space
img = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)

# Shows the image
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()

3)
# Python program to read an image as HSV color space

# Importing cv2 module
import cv2

# Reads the image
img = cv2.imread('g4g.png')

# Converts to HSV color space
img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Shows the image
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()

4)
# Python program to read an image as LAB color space

# Importing cv2 module
import cv2

# Reads the image
img = cv2.imread('g4g.png')

# Converts to LAB color space
img = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)

# Shows the image
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()

5)
# Python program to compute the edge map using the Laplacian

# Importing cv2 module
import cv2

# Reads the image
img = cv2.imread('g4g.png')

laplacian = cv2.Laplacian(img, cv2.CV_64F)
# convert back to 8-bit so imshow displays it correctly
laplacian = cv2.convertScaleAbs(laplacian)

cv2.imshow('EdgeMap', laplacian)
cv2.waitKey(0)
cv2.destroyAllWindows()

6)
# Python program to visualize the heat map of an image

# Importing matplotlib and cv2
import matplotlib.pyplot as plt
import cv2

# reads the image as grayscale, since colormaps apply to single-channel data
img = cv2.imread('nemo.png', 0)

# plot heat map image
plt.imshow(img, cmap='hot')
plt.show()

7)
# Python program to visualize the spectral map of an image

# Importing matplotlib and cv2
import matplotlib.pyplot as plt
import cv2

# single-channel input, as above
img = cv2.imread('g4g.png', 0)
plt.imshow(img, cmap='nipy_spectral')
plt.show()

POST-LAB SOLUTIONS:

import cv2
import matplotlib.pyplot as plt
import numpy as np

# list the color-conversion flags available in cv2
flags = [i for i in dir(cv2) if i.startswith('COLOR_')]
print(len(flags), flags[40])

nemo = cv2.imread('./images/nemo0.jpg')
plt.imshow(nemo)
plt.show()
nemo = cv2.cvtColor(nemo, cv2.COLOR_BGR2RGB)
plt.imshow(nemo)
plt.show()
hsv_nemo = cv2.cvtColor(nemo, cv2.COLOR_RGB2HSV)
light_orange = (1, 190, 200)
dark_orange = (18, 255, 255)
mask = cv2.inRange(hsv_nemo, light_orange, dark_orange)
result = cv2.bitwise_and(nemo, nemo, mask=mask)
plt.subplot(1, 2, 1)
plt.imshow(mask, cmap="gray")
plt.subplot(1, 2, 2)
plt.imshow(result)
plt.show()
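A possible sketch for post-lab question 3, continuing from the variables above (nemo, mask, result); the per-channel loop and the 256-bin range are assumptions rather than part of the original answer:

import cv2
import matplotlib.pyplot as plt

# histograms of each channel of the original color image (nemo is RGB-ordered here)
for i, col in enumerate(('r', 'g', 'b')):
    hist = cv2.calcHist([nemo], [i], None, [256], [0, 256])
    plt.plot(hist, color=col)
plt.title('Original image')
plt.show()

# histogram of the single-channel mask (mostly just 0 and 255)
plt.plot(cv2.calcHist([mask], [0], None, [256], [0, 256]))
plt.title('Mask image')
plt.show()

# histograms of the superimposed (mask + color) result
for i, col in enumerate(('r', 'g', 'b')):
    hist = cv2.calcHist([result], [i], None, [256], [0, 256])
    plt.plot(hist, color=col)
plt.title('Superimposed image')
plt.show()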
LAB 3
PRELAB:

1. What is the difference between images and videos?

2. When do we use cv2.waitKey(0) and when cv2.waitKey(1)?
3. Define the terms frame rate and resolution.
4. What is a codec? Give a few examples of codecs.
INLAB:

1. Jessy once visited the CCTV room of her college. There she saw the live video feed
of her entire college and became interested in live video capture from a camera.
Help her implement the following requirements:
- Start a live video feed capture from a camera or webcam.
- Create two windows, both displaying the live video feed from the camera:
-- Window 1: color video.
-- Window 2: grayscale and upside down, i.e. inverted.
- Both of these windows should display the date and current time.
- The live feed should stop as soon as you enter the character 'q'.
- Also save the Window 1 video as two ".avi" files: one with slow video playback
and the other with fast video playback.

POSTLAB:

Now Jessy wants to implement live video feed capture and create a window that zooms
the live video in and out while also rotating it (through 360 degrees) continuously.

IN LAB:

import cv2
import datetime

font = cv2.FONT_HERSHEY_SIMPLEX
fourcc = cv2.VideoWriter_fourcc(*'XVID')
# two writers: one fast playback (50 fps) and one slow playback (1 fps)
out = cv2.VideoWriter('output.avi', fourcc, 50.0, (640, 480))
out1 = cv2.VideoWriter('output1.avi', fourcc, 1.0, (640, 480))
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    datet = str(datetime.datetime.now())
    gray = cv2.flip(frame, 0)  # flip up-down, i.e. upside down
    frame = cv2.putText(frame, datet, (10, 50), font, 1, (0, 255, 255), 2, cv2.LINE_AA)
    gray = cv2.putText(gray, datet, (10, 50), font, 1, (0, 255, 255), 2, cv2.LINE_AA)
    gray = cv2.cvtColor(gray, cv2.COLOR_BGR2GRAY)
    cv2.imshow("LIVE VIDEO FEED COLOR", frame)
    cv2.imshow("LIVE VIDEO FEED GRAY", gray)
    out.write(frame)
    out1.write(frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
out.release()
out1.release()
cv2.destroyAllWindows()

POST LAB:
import cv2
import time

def main():
    windowName = "Live Video Feed"
    cv2.namedWindow(windowName)
    cap = cv2.VideoCapture(0)

    if cap.isOpened():
        ret, frame = cap.read()
    else:
        ret = False

    rows, columns, channels = frame.shape

    angle = 0
    scale = 0.1
    step = 0.1  # zoom step per frame

    while True:
        ret, frame = cap.read()

        if angle == 360:
            angle = 0

        # zoom in until 2x, then zoom back out, continuously
        scale = scale + step
        if scale >= 2 or scale <= 0.1:
            step = -step

        # rotate about the image center with the current zoom factor
        R = cv2.getRotationMatrix2D((columns / 2, rows / 2), angle, scale)
        output = cv2.warpAffine(frame, R, (columns, rows))

        cv2.imshow(windowName, output)
        angle = angle + 1
        time.sleep(0.01)

        if cv2.waitKey(1) == ord('q'):
            break

    cv2.destroyWindow(windowName)
    cap.release()

if __name__ == "__main__":
    main()
LAB-4
PRE-LAB:
1. Explain the operations done in edge detection in detail?
2. Explain Region of Interest (ROI)?
3. How many types of edges are there in an image, and why should we detect them?
4. When should we use ROI?
IN-LAB:
1. Kushal is taking an extra class on edge detection, where his
teacher gave him all the tasks below. Kushal can't do them alone, so
help him complete his work.
i. Load the image.
ii. Convert the image into grayscale.
iii. Apply the gaussian blur to the image.
iv. Now apply the edge detection algorithm for the image.
v. Then display the detected image.
2. Rurik and Kamal are friends. Rurik wants to crop an image, with
the region of the image selected by Kamal, and then show the selected
region separated from the original to Kamal. Help Rurik do this.
POST-LAB:
Charles is a portrait painting artist. He wants to replicate a picture,
so he wants to know the total number of edges present in it.
Help him find the number of edges in the picture with the
help of edge detection.

INLAB:
import cv2
import numpy as np

def sketch(image):
    # Convert image to grayscale
    img_gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Clean up the image using a Gaussian blur
    img_gray_blur = cv2.GaussianBlur(img_gray, (5, 5), 0)

    # Extract edges
    canny_edges = cv2.Canny(img_gray_blur, 10, 70)

    # Invert and binarize the image
    ret, mask = cv2.threshold(canny_edges, 70, 255, cv2.THRESH_BINARY_INV)

    return mask

cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    cv2.imshow('Our Live Sketch', sketch(frame))
    if cv2.waitKey(1) == 13:  # 13 is the Enter key
        break

cap.release()
cv2.destroyAllWindows()

2.
import cv2

showCrosshair = False
fromCenter = False
im = cv2.imread(r"C:\Users\karthik\Desktop\IP\OpenCV-master\OpenCV-master\lena.jpg")
# let Kamal select the region of interest with the mouse
r = cv2.selectROI(im, showCrosshair, fromCenter)
imCrop = im[int(r[1]):int(r[1] + r[3]), int(r[0]):int(r[0] + r[2])]
cv2.imshow("Image", imCrop)
cv2.waitKey(0)
cv2.destroyAllWindows()

POST LAB ANSWER:

import cv2
import numpy as np

image = cv2.imread('kow.jpg')

# Grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Find Canny edges
edged = cv2.Canny(gray, 30, 200)

# Finding contours on the edge map
# (use a copy such as edged.copy() if you need to keep the original,
# since older OpenCV versions alter the input image)
contours, hierarchy = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

cv2.imshow('Canny Edges After Contouring', edged)
cv2.waitKey(0)

# each contour is one connected edge curve, so the count estimates the number of edges
print("Number of Contours found = " + str(len(contours)))

# Draw all contours; -1 signifies drawing all of them
cv2.drawContours(image, contours, -1, (0, 255, 0), 3)

cv2.imshow('Contours', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
LAB 5
Image Denoising
PRELAB
1. Mention the types of thresholding techniques.
2. Mention the role of GUIs in the real world.
3. Why do we use adaptive thresholding?
4. Why do we use Otsu thresholding?
INLAB

Create a graphical user interface using Tkinter, as given below. The interface must take an
image (.jpg or .png file) and perform thresholding and adaptive thresholding. Create
various buttons: BINARY, BINARY_INV, TOZERO, TOZERO_INV, TRUNC, MEAN_C and
GAUSSIAN_C. On pressing a button, the corresponding algorithm must be applied to the image
and the resulting image shown in a new window.

POSTLAB
1. Jackson is a mathematician and wants to apply tricks to an image: he wants to
rotate the image through all angles continuously until 'q' is pressed on the
keyboard. Help him perform this task using OpenCV.

Inlab:

import tkinter as tk
from tkinter import messagebox as msb
import cv2

img = cv2.imread('dheeraj.png', 0)
HEIGHT = 500
WIDTH = 500
root = tk.Tk()
root.geometry('850x850')
root.title("THRESHOLDING")
canvas = tk.Canvas(root, height=HEIGHT, width=WIDTH)
canvas.pack()

def m(n):
    a, b, c, d = 127, 255, 11, 2
    if n == 0:
        ret, o = cv2.threshold(img, a, b, cv2.THRESH_BINARY)
        cv2.imshow('Window1', o)
    elif n == 1:
        ret, o = cv2.threshold(img, a, b, cv2.THRESH_BINARY_INV)
        cv2.imshow('Window2', o)
    elif n == 2:
        ret, o = cv2.threshold(img, a, b, cv2.THRESH_TOZERO)
        cv2.imshow('Window3', o)
    elif n == 3:
        ret, o = cv2.threshold(img, a, b, cv2.THRESH_TOZERO_INV)
        cv2.imshow('Window4', o)
    elif n == 4:
        ret, o = cv2.threshold(img, a, b, cv2.THRESH_TRUNC)
        cv2.imshow('Window5', o)
    elif n == 5:
        th1 = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, c, d)
        cv2.imshow('Window6', th1)
    elif n == 6:
        th2 = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, c, d)
        cv2.imshow('Window7', th2)
    else:
        print("sorry")

background_image = tk.PhotoImage(file='insta.png')
background_label = tk.Label(root, image=background_image)
background_label.place(relwidth=1, relheight=1)
frame = tk.Frame(root, bg='#000000', bd=5)
frame.place(relx=0.18, rely=0.23, relwidth=0.15, relheight=0.03, anchor='n')
label = tk.Label(frame, text="Normal Thresholding", bg='#e62e00')
label.grid(row=0, column=1)

# use lambdas so the handler runs on click, not once at button creation
tk.Button(root, text="BINARY", activebackground="green", bd=7, font=40,
          command=lambda: m(0)).place(x=100, y=220)
tk.Button(root, text="BINARY_INV", activebackground="green", bd=7, font=40,
          command=lambda: m(1)).place(x=200, y=220)
tk.Button(root, text="TOZERO", activebackground="green", bd=7, font=40,
          command=lambda: m(2)).place(x=350, y=220)
tk.Button(root, text="TOZERO_INV", activebackground="green", bd=7, font=40,
          command=lambda: m(3)).place(x=450, y=220)
tk.Button(root, text="TRUNC", activebackground="green", bd=7, font=40,
          command=lambda: m(4)).place(x=600, y=220)

frame1 = tk.Frame(root, bg='#000000', bd=5)
frame1.place(relx=0.18, rely=0.44, relwidth=0.15, relheight=0.03, anchor='n')
label2 = tk.Label(frame1, text="Adaptive Thresholding", bg='#e62e00')
label2.grid(row=0, column=1)
tk.Button(root, text="MEAN_C", activebackground="green", bd=7, font=40,
          command=lambda: m(5)).place(x=100, y=388)
tk.Button(root, text="GAUSSIAN_C", activebackground="green", bd=7, font=40,
          command=lambda: m(6)).place(x=200, y=388)

root.mainloop()

POST LAB:
import cv2

# read the image
img = cv2.imread('coins.png')
# get image height, width
(h, w) = img.shape[:2]
# calculate the center of the image
center = (w / 2, h / 2)

angle = 0
scale = 1.0
# rotate counter-clockwise about the center until 'q' is pressed
while True:
    angle = (angle + 90) % 360
    M = cv2.getRotationMatrix2D(center, angle, scale)
    rotated = cv2.warpAffine(img, M, (w, h))
    cv2.imshow('Rotated Image', rotated)
    if cv2.waitKey(100) & 0xFF == ord('q'):
        break
cv2.destroyAllWindows()
LAB6
PRELAB

1. What is image denoising?

2. Mention some image denoising techniques.
3. Which denoising method is used for grayscale images?
4. Which denoising method is used for color images?

INLAB
Image denoising:

Harsha took pictures on his trip to Mumbai, and a few of them came out noisy. He wants to
clean up the pictures using image denoising. Can you help Harsha implement image
denoising?
Harsha has four types of pictures:
(i) Grayscale image
(ii) Color image
(iii) Grayscale image sequence captured in a short period of time (use video capture and take the first 5 frames)
(iv) Color image sequence captured in a short period of time (use video capture and take the first 5 frames)

POSTLAB


IN LAB ANSWER:
Gray Scale:
import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt

img = cv.imread('die.png', 0)  # read as grayscale
# fastNlMeansDenoising(src, dst, h, templateWindowSize, searchWindowSize)
dst = cv.fastNlMeansDenoising(img, None, 10, 7, 21)
plt.subplot(121), plt.imshow(img, 'gray')
plt.subplot(122), plt.imshow(dst, 'gray')
plt.show()

cv.fastNlMeansDenoisingColored() - works with a color image.

import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt
img = cv.imread('die.png')
dst = cv.fastNlMeansDenoisingColored(img,None,10,10,7,21)
plt.subplot(121),plt.imshow(img)
plt.subplot(122),plt.imshow(dst)
plt.show()

cv.fastNlMeansDenoisingMulti() - works with an image sequence captured in a short period of time (grayscale images).

import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt
cap = cv.VideoCapture('vtest.avi')
# create a list of first 5 frames
img = [cap.read()[1] for i in range(5)]
# convert all to grayscale
gray = [cv.cvtColor(i, cv.COLOR_BGR2GRAY) for i in img]
# convert all to float64
gray = [np.float64(i) for i in gray]
# create a noise of variance 25
noise = np.random.randn(*gray[1].shape)*10
# Add this noise to images
noisy = [i+noise for i in gray]
# Convert back to uint8
noisy = [np.uint8(np.clip(i,0,255)) for i in noisy]
# Denoise 3rd frame considering all the 5 frames
dst = cv.fastNlMeansDenoisingMulti(noisy, 2, 5, None, 4, 7, 35)
plt.subplot(131),plt.imshow(gray[2],'gray')
plt.subplot(132),plt.imshow(noisy[2],'gray')
plt.subplot(133),plt.imshow(dst,'gray')
plt.show()

cv.fastNlMeansDenoisingColoredMulti() - same as above, but for color images.

import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt
cap = cv.VideoCapture('vtest.avi')
# create a list of the first 5 frames (kept in color this time)
img = [cap.read()[1] for i in range(5)]
# convert all to float64
frames = [np.float64(i) for i in img]
# create a noise of variance 25
noise = np.random.randn(*frames[1].shape) * 10
# Add this noise to the images
noisy = [i + noise for i in frames]
# Convert back to uint8
noisy = [np.uint8(np.clip(i, 0, 255)) for i in noisy]
# Denoise the 3rd frame considering all 5 frames
dst = cv.fastNlMeansDenoisingColoredMulti(noisy, 2, 5, None, 4, 7, 35)
plt.subplot(131), plt.imshow(cv.cvtColor(img[2], cv.COLOR_BGR2RGB))
plt.subplot(132), plt.imshow(cv.cvtColor(noisy[2], cv.COLOR_BGR2RGB))
plt.subplot(133), plt.imshow(cv.cvtColor(dst, cv.COLOR_BGR2RGB))
plt.show()
LAB-7
PRE-LAB:
1. What is an image histogram and why is it useful?
2. What is histogram equalization?
3. Write the pseudo-code for histogram equalization.
4. Discuss some applications of histogram equalization.

IN LAB:
Solve the following questions programmatically using OpenCV and
python.
1. Apply histogram equalization using the equalizeHist function.
2. Solve the problem of over-brightness using the adaptive
histogram equalization by using the OpenCV CLAHE method.
3. Plot the histogram of the original image, the histogram of the
histogram-equalized image using the equalizehist function, and
the histogram of the image after applying the CLAHE method.

POST-LAB:
1. Discuss at least two differences between the equalizeHist
method and the CLAHE method of histogram equalization.
2. Analyze the three histograms in question 3 and discuss the
differences between them.
3. Explain why the differences occur in postlab question 2.
IN-LAB SOLUTION:
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread('rubix.jpg', -1)
print(img.shape, img.shape[0] * img.shape[1])
cv2.imshow('abc', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
cv2.imshow('blue', img[:, :, 0])  # channel 0 is blue in BGR order
cv2.waitKey(0)
cv2.destroyAllWindows()

# per-channel histogram of the original image (note BGR channel order)
color = ('b', 'g', 'r')
for i, val in enumerate(color):
    hist = cv2.calcHist([img], [i], None, [256], [0, 256])
    plt.plot(hist, color=val)
plt.show()

# smooth with a 5x5 averaging kernel, then convert to grayscale
k = np.ones((5, 5), np.float32) / 25
n = cv2.filter2D(img, -1, k)
img = cv2.cvtColor(n, cv2.COLOR_BGR2GRAY)
cv2.imshow('abc', img)
cv2.waitKey(0)
cv2.destroyAllWindows()

# 1. histogram equalization with equalizeHist
equ = cv2.equalizeHist(img)
res = np.hstack((img, equ))
cv2.imshow('equalized', res)
cv2.waitKey(0)
cv2.destroyAllWindows()

# 3. histogram of the original grayscale image
plt.imshow(cv2.cvtColor(img, cv2.COLOR_GRAY2RGB))
plt.show()
hist = cv2.calcHist([img], [0], None, [256], [0, 256])
plt.plot(hist)
plt.show()

# histogram of the equalizeHist result
plt.imshow(cv2.cvtColor(equ, cv2.COLOR_GRAY2RGB))
plt.show()
hist = cv2.calcHist([equ], [0], None, [256], [0, 256])
plt.plot(hist)
plt.show()

# 2. adaptive histogram equalization (CLAHE)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
cl1 = clahe.apply(img)
plt.imshow(cv2.cvtColor(cl1, cv2.COLOR_GRAY2RGB))
plt.show()
hist = cv2.calcHist([cl1], [0], None, [256], [0, 256])  # was [equ]: plot the CLAHE result
plt.plot(hist)
plt.show()
LAB 8
PRE LAB:

What is image segmentation?

Why do we use image masking?

What are the different types of image segmentation techniques?

Why do we use the watershed algorithm?

IN LAB:

Kishore and Varun are students of K L University. They want to implement a masking
technique, namely to detect a blue-colored object using the webcam. Help Kishore and Varun
implement image masking.


POST LAB:

Image segmentation:

1. Vikram's professor gave him an assignment to implement image segmentation using the
watershed algorithm. Help Vikram do his assignment and score good marks.


INLAB:

import cv2
import numpy as np

def main():
    cap = cv2.VideoCapture(0)
    if cap.isOpened():
        ret, frame = cap.read()
    else:
        ret = False

    while ret:
        ret, frame = cap.read()
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

        # Blue Color
        low = np.array([100, 50, 50])
        high = np.array([140, 255, 255])

        # Green Color
        # low = np.array([40, 50, 50])
        # high = np.array([80, 255, 255])

        # Red Color
        # low = np.array([140, 150, 0])
        # high = np.array([180, 255, 255])

        image_mask = cv2.inRange(hsv, low, high)
        output = cv2.bitwise_and(frame, frame, mask=image_mask)

        cv2.imshow("Image mask", image_mask)
        cv2.imshow("Original Webcam Feed", frame)
        cv2.imshow("Color Tracking", output)
        if cv2.waitKey(1) == 27:  # exit on ESC
            break

    cv2.destroyAllWindows()
    cap.release()

if __name__ == "__main__":
    main()
POST LAB (Watershed Algorithm):
import numpy as np
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('coins.png')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
# noise removal
kernel = np.ones((3,3),np.uint8)
opening = cv2.morphologyEx(thresh,cv2.MORPH_OPEN,kernel, iterations = 2)

# sure background area
sure_bg = cv2.dilate(opening,kernel,iterations=3)

# Finding sure foreground area
dist_transform = cv2.distanceTransform(opening,cv2.DIST_L2,5)
ret, sure_fg = cv2.threshold(dist_transform,0.7*dist_transform.max(),255,0)

# Finding unknown region
sure_fg = np.uint8(sure_fg)
unknown = cv2.subtract(sure_bg,sure_fg)

# Marker labelling
ret, markers = cv2.connectedComponents(sure_fg)

# Add one to all labels so that sure background is not 0, but 1
markers = markers+1

# Now, mark the region of unknown with zero
markers[unknown==255] = 0
markers = cv2.watershed(img,markers)
img[markers == -1] = [255,0,0]  # watershed boundaries in blue (BGR)

# display the segmented result
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.show()
LAB9
Pre Lab:
1. What is color quantization?

2. What is the main reason we perform color quantization?

3. Why do we use k-means clustering for color quantization?

4. Write the formula for the Euclidean distance between two RGB colors.
In Lab:
Two best friends named Ramesh and Rajesh were thinking of reducing the number of colors in
an image, but they were not familiar with how. Help these best friends implement color
quantization with OpenCV using the k-means clustering algorithm for the given image, where the K
value is 3.

Post Lab:
The two friends were satisfied with the color quantization you performed, but they had
a doubt about how it really happened. Prove it to them by doing this manually.

Representing the above pixel image as numpy array:

Perform Certain Operations Manually:


1. Identify the number of clusters you need - the value of 'K'.
2. Select 'K' points (centers) within the range of the items in the list.
3. Calculate the distance of every item to each of the 'K' centers.
4. Classify each item to the center with the shortest distance.
(A worked sketch of these steps follows the reference below.)
Answer:

Inlab:

import numpy as np
import cv2

img = cv2.imread('home.jpg')
Z = img.reshape((-1,3))
Z = np.float32(Z)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
K = 3  # the task asks for K = 3
ret,label,center=cv2.kmeans(Z,K,None,criteria,10,cv2.KMEANS_RANDOM_CENTERS)
center = np.uint8(center)
res = center[label.flatten()]
res2 = res.reshape((img.shape))

cv2.imshow('res2',res2)
cv2.waitKey(0)
cv2.destroyAllWindows()

Post Lab:
From the reference:
https://medium.com/consonance/k-means-and-image-quantization-part-2-be0a62c50c11
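A minimal sketch of those four manual steps in NumPy, assuming a small hand-made list of RGB pixels; the pixel values, K = 3 and the 10 refinement iterations are illustrative choices, not taken from the reference:

import numpy as np

# a tiny "image" of RGB pixels, flattened to a list of points
pixels = np.array([[255, 0, 0], [250, 5, 5], [0, 255, 0],
                   [5, 250, 10], [0, 0, 255], [10, 5, 250]], dtype=np.float64)

# step 1: choose K
K = 3
# step 2: pick K starting centers from the items themselves
centers = pixels[:K].copy()

for _ in range(10):  # a few refinement iterations
    # step 3: Euclidean distance of every pixel to every center
    dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
    # step 4: assign each pixel to its nearest center
    labels = dists.argmin(axis=1)
    # update each center to the mean of its assigned pixels
    for k in range(K):
        if np.any(labels == k):
            centers[k] = pixels[labels == k].mean(axis=0)

print(labels)   # cluster index of each pixel
print(centers)  # the K quantized colors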
LAB-10
PRE-LAB:
1. What are morphological transformations?
2. Explain the following morphological operations
i) Erosion
ii) Dilation
iii) Opening
iv) Closing
v) Morphological gradient

IN-LAB:
Perform the following morphological operations
programmatically using python and opencv:
i) Erosion
ii) Dilation
iii) Opening
iv) Closing
v) Morphological gradient
vi) Top Hat
vii) Black Hat

POST-LAB
1. Plot the histogram of each morphed image (7 morphological
operations) in the in-lab activity.
2. Explain the differences between these histograms.
3. State the applications of these morphological operations
IN LAB:
import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt
img = cv.imread('j.png',0)
kernel = np.ones((5,5),np.uint8)
erosion = cv.erode(img,kernel,iterations = 1)
plt.imshow(erosion,cmap='binary')
plt.show()
hist = cv.calcHist([erosion],[0],None,[256],[0,256])
plt.plot(hist)
plt.show()
dilation = cv.dilate(img,kernel,iterations = 1)
plt.imshow(dilation,cmap='binary')
plt.show()
hist = cv.calcHist([dilation],[0],None,[256],[0,256])
plt.plot(hist)
plt.show()
opening = cv.morphologyEx(img, cv.MORPH_OPEN, kernel)
plt.imshow(opening,cmap='binary')
plt.show()
hist = cv.calcHist([opening],[0],None,[256],[0,256])
plt.plot(hist)
plt.show()
closing = cv.morphologyEx(img, cv.MORPH_CLOSE, kernel)
plt.imshow(closing,cmap='binary')
plt.show()
hist = cv.calcHist([closing],[0],None,[256],[0,256])
plt.plot(hist)
plt.show()
gradient = cv.morphologyEx(img, cv.MORPH_GRADIENT, kernel)
plt.imshow(gradient,cmap='binary')
plt.show()
hist = cv.calcHist([gradient],[0],None,[256],[0,256])
plt.plot(hist)
plt.show()
tophat = cv.morphologyEx(img, cv.MORPH_TOPHAT, kernel)
plt.imshow(tophat,cmap='binary')
plt.show()
hist = cv.calcHist([tophat],[0],None,[256],[0,256])
plt.plot(hist)
plt.show()
blackhat = cv.morphologyEx(img, cv.MORPH_BLACKHAT, kernel)
plt.imshow(blackhat,cmap='binary')
plt.show()
hist = cv.calcHist([blackhat],[0],None,[256],[0,256])
plt.plot(hist)
plt.show()
LAB-11
PRE-LAB:

1. What is a degradation image model?

2. How do we restore a blurred image?

3. How do we apply deblurring using the Wiener filter?
4. What are the applications of Wiener filtering?

IN-LAB:
1. Choose any blurred image and perform deblurring and image
restoration by applying Wiener filtering using OpenCV and
python.
2. Apply deblurring using the Lucy-Richardson technique in
OpenCV and python.

POST-LAB:
1. Plot the histogram of the original and the deblurred image
using each of the two techniques.
2. Analyze and discuss the differences between the histogram
plots obtained in post-lab 1.
IN-LAB SOLUTION 1 (Wiener filter, C++):

#include <iostream>
#include "opencv2/imgproc.hpp"
#include "opencv2/imgcodecs.hpp"

using namespace cv;
using namespace std;

void help();
void calcPSF(Mat& outputImg, Size filterSize, int R);
void fftshift(const Mat& inputImg, Mat& outputImg);
void filter2DFreq(const Mat& inputImg, Mat& outputImg, const Mat& H);
void calcWnrFilter(const Mat& input_h_PSF, Mat& output_G, double nsr);

const String keys =
    "{help h usage ? | | print this message }"
    "{image |original.JPG | input image name }"
    "{R |53 | radius }"
    "{SNR |5200 | signal to noise ratio}"
    ;

int main(int argc, char *argv[])
{
    help();
    CommandLineParser parser(argc, argv, keys);
    if (parser.has("help"))
    {
        parser.printMessage();
        return 0;
    }
    int R = parser.get<int>("R");
    int snr = parser.get<int>("SNR");
    string strInFileName = parser.get<String>("image");
    if (!parser.check())
    {
        parser.printErrors();
        return 0;
    }
    Mat imgIn;
    imgIn = imread(strInFileName, IMREAD_GRAYSCALE);
    if (imgIn.empty()) // check whether the image is loaded or not
    {
        cout << "ERROR : Image cannot be loaded..!!" << endl;
        return -1;
    }
    Mat imgOut;
    // it needs to process even image only
    Rect roi = Rect(0, 0, imgIn.cols & -2, imgIn.rows & -2);
    // Hw calculation (start)
    Mat Hw, h;
    calcPSF(h, roi.size(), R);
    calcWnrFilter(h, Hw, 1.0 / double(snr));
    // Hw calculation (stop)
    // filtering (start)
    filter2DFreq(imgIn(roi), imgOut, Hw);
    // filtering (stop)
    imgOut.convertTo(imgOut, CV_8U);
    normalize(imgOut, imgOut, 0, 255, NORM_MINMAX);
    imwrite("result.jpg", imgOut);
    return 0;
}

void help()
{
    cout << "2018-07-12" << endl;
    cout << "DeBlur_v8" << endl;
    cout << "You will learn how to recover an out-of-focus image by Wiener filter" << endl;
}

// circular point-spread function of radius R, normalized to unit sum
void calcPSF(Mat& outputImg, Size filterSize, int R)
{
    Mat h(filterSize, CV_32F, Scalar(0));
    Point point(filterSize.width / 2, filterSize.height / 2);
    circle(h, point, R, 255, -1, 8);
    Scalar summa = sum(h);
    outputImg = h / summa[0];
}

void fftshift(const Mat& inputImg, Mat& outputImg)
{
    outputImg = inputImg.clone();
    int cx = outputImg.cols / 2;
    int cy = outputImg.rows / 2;
    Mat q0(outputImg, Rect(0, 0, cx, cy));
    Mat q1(outputImg, Rect(cx, 0, cx, cy));
    Mat q2(outputImg, Rect(0, cy, cx, cy));
    Mat q3(outputImg, Rect(cx, cy, cx, cy));
    Mat tmp;
    q0.copyTo(tmp);
    q3.copyTo(q0);
    tmp.copyTo(q3);
    q1.copyTo(tmp);
    q2.copyTo(q1);
    tmp.copyTo(q2);
}

// multiply the image and the filter in the frequency domain
void filter2DFreq(const Mat& inputImg, Mat& outputImg, const Mat& H)
{
    Mat planes[2] = { Mat_<float>(inputImg.clone()), Mat::zeros(inputImg.size(), CV_32F) };
    Mat complexI;
    merge(planes, 2, complexI);
    dft(complexI, complexI, DFT_SCALE);
    Mat planesH[2] = { Mat_<float>(H.clone()), Mat::zeros(H.size(), CV_32F) };
    Mat complexH;
    merge(planesH, 2, complexH);
    Mat complexIH;
    mulSpectrums(complexI, complexH, complexIH, 0);
    idft(complexIH, complexIH);
    split(complexIH, planes);
    outputImg = planes[0];
}

// Wiener filter G = H / (|H|^2 + NSR), built from the PSF
void calcWnrFilter(const Mat& input_h_PSF, Mat& output_G, double nsr)
{
    Mat h_PSF_shifted;
    fftshift(input_h_PSF, h_PSF_shifted);
    Mat planes[2] = { Mat_<float>(h_PSF_shifted.clone()), Mat::zeros(h_PSF_shifted.size(), CV_32F) };
    Mat complexI;
    merge(planes, 2, complexI);
    dft(complexI, complexI);
    split(complexI, planes);
    Mat denom;
    pow(abs(planes[0]), 2, denom);
    denom += nsr;
    divide(planes[0], denom, output_G);
}

Using blind deconvolution

1. Perform DFT on the blurred image. Also perform inverse DFT to verify whether
the DFT step is correct. Make sure that the line performing the inverse is
commented out, as it overwrites the DFT array.

// Perform DFT of original image
dft_A = cvShowDFT(im, dft_M1, dft_N1, "original");
// Perform inverse (check)
cvShowInvDFT(im, dft_A, dft_M1, dft_N1, fp, "original");

2. Perform DFT on the blur kernel. Also perform inverse DFT to get back the
original contents. Make sure that the line performing the inverse is
commented out, as it overwrites the DFT array.

// Perform DFT of kernel
dft_B = cvShowDFT(k_image, dft_M1, dft_N1, "kernel");
// Perform inverse of kernel (check)
cvShowInvDFT(k_image, dft_B, dft_M1, dft_N1, fp, "kernel");

3. Multiply the DFT of the image with the complex conjugate of the DFT of the blur kernel.

// Multiply numerator with complex conjugate
dft_C = cvCreateMat(dft_M1, dft_N1, CV_64FC2);

4. Compute A**2 + B**2.

// Split real and imaginary parts
cvSplit(dft_B, image_ReB, image_ImB, 0, 0);
cvPow(image_ReB, image_ReB, 2.0);
cvPow(image_ImB, image_ImB, 2.0);
cvAdd(image_ReB, image_ImB, image_ReB, 0);

5. Divide the numerator by A**2 + B**2.

// Divide numerator / (A^2 + B^2)
cvDiv(image_ReC, image_ReB, image_ReC, 1.0);
cvDiv(image_ImC, image_ReB, image_ImC, 1.0);

6. Merge the real and imaginary parts.

// Merge real and complex parts
cvMerge(image_ReC, image_ImC, NULL, NULL, complex_ImC);

7. Finally, perform the inverse DFT.

cvShowInvDFT(im, complex_ImC, dft_M1, dft_N1, fp, "deblur");
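In-Lab question 2 asks for Lucy-Richardson deblurring; OpenCV's Python API has no ready-made Lucy-Richardson call, so the following is a minimal iterative sketch using cv2.filter2D. The 9x9 Gaussian PSF, the iteration count, and the input file 'blurred.jpg' are all assumptions for illustration:

import cv2
import numpy as np

def richardson_lucy(blurred, psf, iterations=30):
    # iterative Lucy-Richardson update:
    # estimate <- estimate * ((blurred / (estimate (*) psf)) (*) psf_flipped)
    estimate = np.full(blurred.shape, 0.5, dtype=np.float64)
    psf_flipped = psf[::-1, ::-1]
    for _ in range(iterations):
        conv = cv2.filter2D(estimate, -1, psf) + 1e-12  # avoid divide-by-zero
        ratio = blurred / conv
        estimate *= cv2.filter2D(ratio, -1, psf_flipped)
    return estimate

# assumed inputs: a blurred grayscale image and the PSF that blurred it
blurred = cv2.imread('blurred.jpg', 0).astype(np.float64) / 255.0
psf = cv2.getGaussianKernel(9, 3)  # 9x1 Gaussian kernel, sigma = 3
psf = psf @ psf.T                  # outer product gives the 9x9 PSF

deblurred = richardson_lucy(blurred, psf, iterations=30)
cv2.imshow('deblurred', np.clip(deblurred, 0, 1))
cv2.waitKey(0)
cv2.destroyAllWindows()

For the post-lab, the same calcHist/plt.plot calls used in LAB-7 can be applied to the original and deblurred images.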
LAB12
PRELAB:

1. How does OpenCV face recognition work?

2. Define the detectMultiScale module.

3. Briefly explain the parameters of the detectMultiScale module.

4. What are some real-world applications of face detection using OpenCV?

INLAB:

Kowshik is good at developing new applications. He wants to build a new OpenCV-based application
that recognizes the face and eyes, but Kowshik doesn't have all the packages required for it, so he asks
his friend Praharsha for help and tells him to perform face detection as per the instructions given below.

1) Use the haarcascade_frontalface_default.xml file for detecting the face.

2) Mark the detected face with a green square.

3) Use haarcascade_eye.xml for detecting the eyes.

4) Mark the detected eyes with blue squares.

POST LAB:

1) Write the applications of face recognition, explain one of them, and describe how you would implement it.

Inlab:
FACE RECOGNIZER
import cv2

cam = cv2.VideoCapture(0)
detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
Id = input('enter your id: ')
sampleNum = 0

while True:
    ret, img = cam.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # incrementing sample number
        sampleNum = sampleNum + 1
        # saving the captured face in the dataset folder
        cv2.imwrite("C:\\Users\\lenovo\\Desktop\\face\\User." + Id + '.' + str(sampleNum) + ".jpg",
                    gray[y:y + h, x:x + w])
    cv2.imshow('frame', img)
    # wait for 100 milliseconds
    if cv2.waitKey(100) & 0xFF == ord('q'):
        break
    # break if the sample number is more than 20
    elif sampleNum > 20:
        break

cam.release()
cv2.destroyAllWindows()

Face Training
import os
import cv2
import numpy as np
from PIL import Image

recognizer = cv2.face.LBPHFaceRecognizer_create()
path = 'dataset'

def getImagesWithID(path):
    imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
    faces = []
    IDs = []
    for i in imagePaths:
        faceImg = Image.open(i).convert('L')  # open as grayscale
        faceNP = np.array(faceImg, 'uint8')
        # the ID is embedded in the filename: User.<ID>.<sample>.jpg
        ID = int(os.path.split(i)[-1].split('.')[1])
        faces.append(faceNP)
        print(ID)
        IDs.append(ID)
        cv2.imshow("training", faceNP)
        cv2.waitKey(10)
    return IDs, faces

Ids, faces = getImagesWithID(path)
recognizer.train(faces, np.array(Ids))
recognizer.save('recognizer/trainingdata.yml')
cv2.destroyAllWindows()

Face Detect
import cv2
import numpy as np

faceDetect = cv2.CascadeClassifier('kowshik.xml')
cam = cv2.VideoCapture(0)
rec = cv2.face.LBPHFaceRecognizer_create()
rec.read(r"C:\Users\admin\Desktop\face\recognizer\trainingdata.yml")
id = 0
font = cv2.FONT_HERSHEY_SIMPLEX

while True:
    ret, img = cam.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = faceDetect.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
        id, conf = rec.predict(gray[y:y + h, x:x + w])
        if id == 1:
            id = "Sameer"
        elif id == 2:
            id = "Kowshik"
        elif id == 3:
            id = "praharsha"
        elif id == 4:
            id = "karthik"
        else:
            id = "unknown"
        cv2.putText(img, str(id), (x, y + h), font, 1.0, (255, 255, 255))
    cv2.imshow("Face", img)
    if cv2.waitKey(1) == ord('q'):
        break

cam.release()
cv2.destroyAllWindows()
LAB-13
PRE-LAB:
1. Explain in detail what the Hough transform is, drawing
suitable diagrams.
2. Discuss in detail the following two types of Hough
transform: the standard and the probabilistic Hough transform.
3. Discuss the applications of the Hough transform.
4. Discuss and explain the watershed segmentation algorithm
using pseudo-code.

IN-LAB:
1. Implement the Hough transform using python and opencv (a sketch
is given after the post-lab questions below).
2. Implement the watershed segmentation algorithm using
python and opencv.

POST-LAB
1. Obtain the histograms of the original image and the image
obtained after the Hough transform
2. Clearly explain the differences between the histograms
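A minimal sketch for In-Lab 1 using the probabilistic Hough line transform; the input file 'sudoku.png' and the Canny/Hough parameters are assumptions. In-Lab 2 can reuse the watershed answer from the LAB 8 post-lab above.

import cv2
import numpy as np

img = cv2.imread('sudoku.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# probabilistic Hough transform: returns line segments as (x1, y1, x2, y2)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                        minLineLength=100, maxLineGap=10)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imshow('Hough lines', img)
cv2.waitKey(0)
cv2.destroyAllWindows()

For the post-lab, the calcHist calls from LAB-7 can be applied to the image before and after drawing the lines.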
LAB14
PRELAB:

1) What is meant by object detection?

2) What are the algorithms for object detection?

3) Explain the SIFT algorithm.

4) Explain the SURF algorithm.

INLAB:

Karthik and Dheeraj are two best friends who always want to do crazy things. They want to perform
object detection. First they decide to train on an object with some labels, and then they want to test for that
particular object. They learned that there are algorithms called SIFT and SURF for performing the desired
actions, but they don't know how to use them. Help them achieve the following steps:

1) Train the model with a notebook

2) Test whether the model can detect the notebook or not?

3) Train the model with an Id card ?

4) Test whether the model can detect the Id card or not?

POST LAB:

1) Write the applications of object detection, explain one of them, and describe how you would implement it.

Answer:

In lab:

import cv2
import numpy as np

MIN_MATCH_COUNT = 30

detector = cv2.xfeatures2d.SIFT_create()
FLANN_INDEX_KDITREE = 0
flannParam = dict(algorithm=FLANN_INDEX_KDITREE, tree=5)
flann = cv2.FlannBasedMatcher(flannParam, {})

trainImg = cv2.imread(r'C:\Users\lenovo\Desktop\object\TrainingData\trainImgs.jpg', 0)
trainKP, trainDesc = detector.detectAndCompute(trainImg, None)

cam = cv2.VideoCapture(0)

while True:
    ret, QueryImgBGR = cam.read()
    QueryImg = cv2.cvtColor(QueryImgBGR, cv2.COLOR_BGR2GRAY)
    queryKP, queryDesc = detector.detectAndCompute(QueryImg, None)
    # find the 2 nearest training descriptors for each query descriptor
    matches = flann.knnMatch(queryDesc, trainDesc, k=2)

    # Lowe's ratio test: keep only distinctive matches
    goodMatch = []
    for m, n in matches:
        if m.distance < 0.75 * n.distance:
            goodMatch.append(m)

    if len(goodMatch) > MIN_MATCH_COUNT:
        tp = []
        qp = []
        for m in goodMatch:
            tp.append(trainKP[m.trainIdx].pt)
            qp.append(queryKP[m.queryIdx].pt)
        tp, qp = np.float32((tp, qp))
        # homography from the training image to the query frame
        H, status = cv2.findHomography(tp, qp, cv2.RANSAC, 3.0)
        h, w = trainImg.shape
        trainBorder = np.float32([[[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]])
        queryBorder = cv2.perspectiveTransform(trainBorder, H)
        # outline the detected object in the live frame
        cv2.polylines(QueryImgBGR, [np.int32(queryBorder)], True, (0, 255, 0), 5)
    else:
        print("Not Enough match found- %d/%d" % (len(goodMatch), MIN_MATCH_COUNT))

    cv2.imshow('result', QueryImgBGR)
    if cv2.waitKey(10) == ord('q'):
        break

cam.release()
cv2.destroyAllWindows()
