Report Project Lv3
SCHOOL OF ENGINEERING
CAPSTONE PROJECT 3
September, 2023
Comment by Supervisor
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
Signature
Comment by Reviewer
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
.............................................................................................................................................
Signature
ABSTRACT
With the rapid development of science and technology, automation has become one of the indispensable fields in modern industry. It applies scientific and technological advances to production in order to improve product quality and reduce manual labor, creating conditions for society to develop and for human knowledge to grow.
In this way, people can access the latest science and technology and apply it in daily life to improve economic efficiency and labor productivity. With the consent of the Faculty of Automation and Mr. Tran Van Luan, we decided to choose the graduation topic: "Design and implementation of a classification and error detection system for bottles".
Our goal in choosing this topic is to design a system that classifies defective finished bottles and identifies their defects, in order to monitor and control the production process in the factory.
The desired result is a production line that identifies and classifies defective bottles using machine learning technology, with a view to applying and researching artificial intelligence technologies in production. Through this project, we also aim to master programming and learn new technologies, especially in machine learning and artificial intelligence.
Keywords: Computer Vision, Python, OpenCV, Cap Position, Liquid Level Inspection, Label Detection.
ACKNOWLEDGEMENTS
We would like to express our gratitude to the individuals who have contributed to the
completion of this project.
Firstly, we would like to thank Mr. Tran Van Luan, our supervisor, for his guidance and
support throughout this term.
We are also grateful to our lecturers for providing us with the necessary facilities and
resources to complete this project under the best conditions.
Finally, we would like to thank our family for their unwavering support throughout our
university years.
LIST OF ABBREVIATIONS
TABLE OF CONTENTS
ACKNOWLEDGEMENTS ......................................................................... iv
LIST OF ABBREVIATIONS....................................................................... v
Chapter 1: Introduction................................................................................ 1
1.1 Motivation.....................................................................................................................................................1
1.2.2 Feature extraction algorithm for fill level and cap inspection in
bottling machine ............................................................................................. 4
2.3 Grayscale.................................................................................................................................................... 11
2.8 Closing......................................................................................................................................................... 15
4.3 Tkinter......................................................................................................................................................... 31
4.5 TCP/IP model, Link between Raspberry Pi 4 and PLC S7 1200 with Snap7.......................... 33
BIBLIOGRAPHY ................................................................................................................................................ 43
APPENDIX ................................................................................................................................................ 44
LIST OF FIGURES
Figure 1.2.1.2 Input and output of normal and error label’s images ................................... 3
Figure 1.2.1.3 Input and output of normal and error label’s images ................................... 3
Figure 1.2.2.1 The system of filling level and cap inspection ............................................. 4
Figure 1.2.2.3 The image of a bottle with no cap and unfixed cap ..................................... 5
Figure 2.2 RGB color space with primary and secondary colors ...................................... 10
Figure 2.3 Original image and after applying lightness grayscale .................................... 11
Figure 4.2.1 Label the water level when there is no label on the bottle and bottle cap ..... 29
Figure 4.2.2 Label the bottle label and bottle cap when the water level is not enough ..... 29
Figure 4.2.3 Label the label when there is no liquid and bottle cap .................................. 30
Figure 4.2.4 Label all the targets cap, label and liquid ...................................................... 30
LIST OF TABLES
Chapter 1: Introduction
1.1 Motivation
inefficiencies in the production process, such as bottlenecks, downtime, or
equipment failures. By addressing these issues, it is possible to improve
production efficiency, reduce cycle time, and increase throughput.
- Enhancing Safety: Errors in production lines can pose safety risks for workers,
equipment, or the environment. By detecting and classifying errors, it is possible
to address safety issues and implement measures to prevent accidents and
injuries.
Due to the importance of classification, its algorithms should be mentioned. Classification algorithms are used to classify or categorize data into a set of predefined classes or categories. There are several common types of classification algorithm, but we only mention the one that will be used: neural networks. Neural networks are a set of algorithms inspired by the structure and function of the human brain. They are used for complex classification problems and work by creating layers of interconnected nodes that process the data and produce the classification output.
In summary, error detection and classification systems are critical in industrial and production settings to ensure product quality, reduce costs, improve efficiency, ensure compliance, and enhance safety.
The authors used an image processing method based on Canny edges to detect whether the label is aligned or misaligned.
Figure 1.2.1.2 Input and output of normal and error label’s images
For label detection: deploy the Canny edge detector to obtain the strong edges of the label → perform a full scan to remove any unwanted pixels that may not constitute the edge → calculate the distance between the minimum and maximum points on the two edges of the label and compare it with an aligned label → if both edges are parallel, the distance is zero and the sticker is aligned; otherwise the label is misaligned.
Figure 1.2.1.3 Input and output of normal and error label’s images
For cap detection: the ROI, the blue cap in the upper portion of the bottle, is extracted using a hard threshold and binarization → Harris' corner detector is applied to detect the corners of the cap → draw the reference line joining the extreme corner points → calculate the distance and compare it with the known threshold value of a seated cap to decide whether the cap is seated or not.
The study was successful in detecting the cap and label of plastic bottles using the
Harris’ corner detector and Canny edge detection. While the study was able to show that
the algorithm can be used for label detection, the system still needs to be developed more
before it is used in manufacturing lines where products are constantly moving.
1.2.2 Feature extraction algorithm for fill level and cap inspection
in bottling machine
The project’s name: Feature Extraction Algorithm for Fill Level and Cap Inspection
in Bottling Machine - Leila Yazdi, Anton Satria Prabuwono, Ehsan Golkar, 2011. [2]
Figure 1.2.2.2 The original image and after processing
For liquid level: Take an image → Apply grayscale conversion → Use Canny
algorithm to find edges → Draw a horizontal line at the end of the bottle cap as a reference
line and another line parallel to the line reference to form the Reg1 region → The Reg2
region is the area from the bottom of the bottle cap to the reference water level’s line →
Determine the average line by the distance between the reference line and the bottom edge
of the Reg2 region → Compare the current average with the average of the sample bottle.
For cap detection: take a photo → Apply grayscale conversion --> Use Canny
algorithm to find edges → Draw a horizontal line at the end of the bottle cap as a reference
line and another line parallel to the line reference to form the Reg1 region → The Reg2
region is the area from the bottom of the bottle cap to the top of the cap. → Determine the
average line by the distance between the reference line and the bottom edge of the Reg2
region → Compare the current average with the average of the sample bottle.
Figure 1.2.2.3 The image of a bottle with no cap and unfixed cap
Figure 1.2.2.4 The possible errors occur with a bottle
1.3 Objectives
The automated visual inspection system comprises two primary subsystems. The
first subsystem, known as the image acquisition subsystem, relies on hardware
components. Its purpose is to convert the optical scene into numerical data, which is then
received by the processing platform. This subsystem consistently includes four key
elements: the camera, lens, lighting system, and processing platform. The second
subsystem, referred to as the image processing subsystem, operates on software principles.
It primarily employs image processing methods that are specifically developed to analyze
the acquired data and generate the final inspection result.
While systems utilizing image processing techniques for fill level monitoring have
been developed and made available in the industry, they tend to be costly and are
predominantly adopted by large manufacturing companies. For research and educational
purposes, we utilize only basic devices that are already in our possession or that can be
acquired.
Figure 1.3.2 Simple algorithm model
1.5 Organization
Chapter 1: We introduce the initial starting point of our project, existing projects related to our current theory-based project, our desired goal, some research methods, and our working plans.
Chapter 2: We discuss the theory and algorithms of our system.
Chapter 3: We present the hardware used in the system.
Chapter 4: We present the software used in the system.
1.6 Plans
Table 1. The Capstone 2 schedule
The expected Capstone 2 schedule in semester 3:

Time       Done by        Details
2 weeks    Duong          Writing Python code that suits our desired goal
2 weeks    Quang, Duong   Importing the necessary libraries and beginning to train and optimize the CNN models
2 weeks    Quang          Writing the report
1 week     Duong          Discussing with the supervisor for guidance and getting to work
Chapter 2: Theoretical Basis
2.1 Pixel
2.2 RGB
RGB (red, green and blue) refers to a system representing the colors used on a digital display
screen. Red, green and blue can be combined in various proportions to obtain any color in
the visible spectrum.
The RGB model uses 8 bits per channel, so each of the red, green and blue components has values ranging from 0 to 255. This translates into millions of colors: 16,777,216 possible colors, to be precise.
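The 24-bit color count above can be checked with a line of arithmetic:

```python
# Each RGB channel is stored in 8 bits, giving 256 levels per channel.
levels_per_channel = 2 ** 8           # 256
total_colors = levels_per_channel ** 3
print(total_colors)                   # 16777216 colors in the 24-bit RGB model
```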
Figure 2.2 RGB color space with primary and secondary colors
2.3 Grayscale
Grayscale refers to a range of monochromatic shades, devoid of any discernible color. On a
display screen, each pixel in a grayscale image represents a certain level of light intensity,
from the lowest (black) to the highest (white). Grayscale carries only information about light
intensity, not color.
There are several common methods for converting from RGB to grayscale, but this project only uses the lightness method. [3]
The lightness method takes the average of the components with the highest and lowest values:

Grayscale = (min(R, G, B) + max(R, G, B)) / 2
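The lightness formula can be applied per pixel with NumPy; a minimal sketch (the function name is ours, not from the project code):

```python
import numpy as np

def lightness_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Lightness method: average the strongest and weakest channel of each pixel."""
    rgb = rgb.astype(np.float32)
    gray = (rgb.max(axis=-1) + rgb.min(axis=-1)) / 2.0
    return gray.astype(np.uint8)

# One RGB pixel with R=200, G=100, B=50.
pixel = np.array([[[200, 100, 50]]], dtype=np.uint8)
print(lightness_grayscale(pixel)[0, 0])   # (200 + 50) / 2 = 125
```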
2.4 Histogram equalization
So, as can be seen, Ps(s) is a normalized distribution, which means equalization of the histogram can be achieved by the assumed transfer function.
Note:
- L is the maximum value a pixel can achieve.
- Pr(r) is the probability density function (pdf) of the image before equalization.
- Ps(s) is the pdf of the image after performing equalization, and it is an equalized (uniform) distribution.
2.6 Thresholding
Gray-level thresholding is an efficient and widely used method for image segmentation. It
is especially powerful in combination with preprocessing steps such as background
illumination correction and top hat filtering, where the object and background classes are
well separated in gray-level.
Thresholding is the process of converting an input image into an output binary image
that is segmented. [4]
Note:
- T is the threshold
- g(i, j) = 1 for image elements of objects
- g(i, j) = 0 for image elements of the background (or vice versa).
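The definition above translates directly into a comparison with T; a minimal pure-NumPy sketch (names are ours):

```python
import numpy as np

def threshold(image: np.ndarray, T: int) -> np.ndarray:
    """g(i, j) = 1 for object pixels brighter than T, 0 for background."""
    return (image > T).astype(np.uint8)

image = np.array([[ 10,  40, 200],
                  [220,  30, 250]], dtype=np.uint8)
print(threshold(image, T=128))   # [[0 0 1]
                                 #  [1 0 1]]
```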
Figure 2.7 Original image and after applying boundary extraction
2.8 Closing
Closing is a morphological operation that involves combining erosion and dilation to
create a powerful operator. When applied, it brings objects closer together, resulting in a
more cohesive image. Closing has a smoothing effect on contour areas. However, it also
has additional benefits such as merging small gaps, repairing minor breaks, eliminating
small holes, and filling in gaps within objects. [6]
A • B = (A ⊕ B) ⊖ B
Closing an image A by a structuring element B, written A • B, is a dilation followed by an erosion.
2.9 The algorithm of system
Chapter 3: Hardware Implementation
3.1 Hardware components
3.1.1 Raspberry Pi 4B
The Raspberry Pi is popularly used for real-time image and video processing, IoT-based applications, and robotics applications.
The Raspberry Pi is more than a computer, as it provides access to on-chip hardware, i.e. the GPIOs, for developing applications. Through the GPIOs, we can connect and control devices such as LEDs, motors and sensors.
It has ARM based Broadcom Processor SoC along with on-chip GPU (Graphics
Processing Unit).
The CPU speed of Raspberry Pi varies from 700 MHz to 1.2 GHz. Also, it has on-
board SDRAM that ranges from 256 MB to 1 GB.
Raspberry Pi also provides on-chip SPI, I2C, I2S and UART modules.
application programs or updating firmware
Diagnose errors online/offline
3.1.4 Pneumatic solenoid valve SMC
3.1.5 Conveyor
3.1.6 Conveyor’s motor
3.2 Technology analysis
- Cylinder 1 pushes products with a bottle cap error
- Cylinder 2 pushes products with a sticker error
- Cylinder 3 pushes products with a water level error
• There are three types of defective product: bottle cap error, sticker error, and water level error
+ High: product with a bottle cap error
+ Average: product with a sticker error
+ Low: product with a water level error
• An intermediate relay is used to switch the contacts and protect the outputs of the PLC
• The computer is controlled and monitored remotely through the S7-300 PLC software
Although the CSPBlock increases computation by 10%-20%, it improves accuracy. The YOLOv4-tiny method uses the LeakyReLU activation function in the CSPDarknet53-tiny network to make the computation simpler.
Note:
- C_i^j is the confidence score of the j-th bounding box in the i-th grid cell.
- P_{i,j} is merely a function of the object.
- C_i^j and Ĉ_i^j are the confidence scores of the predicted box and the truth box.
- λ_noobj is a weight parameter.
- IoU is the intersection over union between the predicted bounding box and the truth bounding box.
- w^gt and h^gt are the truth width and height of the bounding box.
- w and h are the predicted width and height of the bounding box.
- ρ²(b, b^gt) denotes the Euclidean distance between the center points of the predicted bounding box and the truth bounding box.
- c is the minimum diagonal distance of a box that can contain both the predicted bounding box and the truth bounding box.
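The symbols above appear in the CIoU regression term commonly used with YOLOv4-tiny; for reference, a standard formulation from the general CIoU literature (reproduced here as background, not copied verbatim from [7]) is:

```latex
\mathcal{L}_{\mathrm{CIoU}} = 1 - \mathrm{IoU}
  + \frac{\rho^{2}\!\left(b, b^{gt}\right)}{c^{2}} + \alpha v,
\qquad
v = \frac{4}{\pi^{2}}
  \left( \arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h} \right)^{2}
```

where α = v / ((1 − IoU) + v) is a trade-off weight that emphasizes the aspect-ratio term when the overlap is already high.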
4.2 Image source use for training
Figure 4.2.1 Label the water level when there is no label on the bottle and bottle cap
Figure 4.2.2 Label the bottle label and bottle cap when the water level is not enough
Figure 4.2.3 Label the label when there is no liquid and bottle cap
Figure 4.2.4 Label all the targets cap, label and liquid
We use the LabelImg tool to label each object and to mark each error class based on the cap, label, and liquid.
4.3 Tkinter
4.4 Monitor and control with Tkinter
4.5 TCP/IP model, Link between Raspberry Pi 4 and PLC S7 1200 with
Snap7
a) Theoretical structure of TCP/IP communication standard
The idea for the TCP/IP model originated from the Internet Protocol Suite developed in DARPA work in the 1970s, through years of research and development by the engineers Robert E. Kahn and Vinton Cerf with the support of many research groups. By early 1978, the TCP/IP protocol had stabilized into the standard protocol currently used on the Internet, the TCP/IP version 4 model.
TCP/IP (Transmission Control Protocol/Internet Protocol) is a set of communication protocols used to transmit data and connect devices on the Internet. TCP/IP was developed to make the network more reliable, with automatic recovery.
− Step 2:
Use commands in the Python code to read and write the data on the PLC memory addresses that need to be processed and used.
def Manual_Command(self):
    WriteMemory(plc, 10, 4, S7WLBit, 1)
    WriteMemory(plc, 15, 0, S7WLBit, 0)
    self.Auto_Button.configure(bg="silver")
    self.Manual_Button.configure(bg="green")

def Auto_Command(self):
    WriteMemory(plc, 10, 4, S7WLBit, 0)
    WriteMemory(plc, 15, 0, S7WLBit, 1)
    self.Auto_Button.configure(bg="green")
    self.Manual_Button.configure(bg="silver")

def Start_Command(self):
    WriteMemory(plc, 12, 0, S7WLBit, 1)
    WriteMemory(plc, 12, 1, S7WLBit, 0)
    WriteMemory(plc, 10, 0, S7WLBit, 0)
    WriteMemory(plc, 10, 1, S7WLBit, 0)
    WriteMemory(plc, 10, 2, S7WLBit, 0)
    WriteMemory(plc, 10, 3, S7WLBit, 0)
    self.Start_Button.configure(bg="green")

def Stop_Command(self):
    WriteMemory(plc, 12, 0, S7WLBit, 0)
    WriteMemory(plc, 12, 1, S7WLBit, 1)
    WriteMemory(plc, 10, 5, S7WLBit, 0)
    WriteMemory(plc, 10, 6, S7WLBit, 0)
    WriteMemory(plc, 10, 7, S7WLBit, 0)
    WriteMemory(plc, 11, 0, S7WLBit, 0)
    self.Start_Button.configure(bg="silver")
The PLC S7-1200 code that controls the project hardware:
Figure 4.5.2 The PLC code
Chapter 5: Conclusion
5.1 Result and evaluation
Table 4. Our system evaluation
    Method                   Average accuracy   Processing time
1   Our proposed algorithm   90.7%              360 ms
2   YOLOv7-Tiny              97.9%              960 ms
3   YOLOv7                   99.9%              4200 ms
Figure 5.1.2 The result for each error
5.2 Summary and Conclusion
In short, this project helped us plan and assemble the hardware of our model.
We also learned more about the supporting software for the project, such as YOLOv4-tiny and VNC Viewer; this software lets us monitor the system and gives us a fast object detection method.
We learned the Python language by writing the test code for our project.
We also learned more about the process of inspecting bottles in industry.
BIBLIOGRAPHY
[1] M. Kazmi, B. Hafeez, H. R. Khan, and S. A. Qazi, “Machine-Vision-Based Plastic
Bottle Inspection for Quality Assurance †,” Eng. Proc., vol. 20, no. 1, pp. 1–5,
2022, doi: 10.3390/engproc2022020009.
[2] L. Yazdi, A. S. Prabuwono, and E. Golkar, “Feature extraction algorithm for fill
level and cap inspection in bottling machine,” Proc. 2011 Int. Conf. Pattern Anal.
Intell. Robot. ICPAIR 2011, vol. 1, no. April, pp. 47–52, 2011, doi:
10.1109/ICPAIR.2011.5976910.
[3] C. Kanan and G. W. Cottrell, “Color-to-grayscale: Does the method matter in image
recognition?,” PLoS One, vol. 7, no. 1, 2012, doi: 10.1371/journal.pone.0029740.
[4] V. H. and R. B. M. Sonka, Image processing, analysis, and machine vision.
Cengage Learning. 2014.
[5] S. S. Raseli and J. M. Ali, “Boundary extraction of 2D image,” J. Basic Appl. Sci.
Res., vol. 2, no. 5, pp. 5374–5376, 2012.
[6] Sunil Bhutada, Nakerakanti Yashwanth, Puppala Dheeraj, and Kethavath Shekar,
“Opening and closing in morphological image processing,” World J. Adv. Res. Rev.,
vol. 14, no. 3, pp. 687–695, 2022, doi: 10.30574/wjarr.2022.14.3.0576.
[7] Z. Jiang, L. Zhao, L. I. Shuaiyang, and J. I. A. Yanfei, “Real-Time Object Detection
Method For Embedded Devices,” arXiv, vol. 3, pp. 1–11, 2020, [Online]. Available:
https://arxiv.org/pdf/2011.04244v2
[8] S. Winder and A. Roberts, Raspberry Pi Assembly Language RASPBIAN Beginners: Hands On Guide. CreateSpace Independent Publishing Platform, 2018.
[9] A. Karpathy, Convolutional Neural Networks for Visual Recognition. Stanford University, 2016.
[10] OpenCV Documentation, https://docs.opencv.org/
[11] YoloV4 on Google Colab, https://github.com/roboflow-ai/yolov4-google-colab
[12] Google Colaboratory, https://research.google.com/colaboratory/
[13] TCP/IP Model, https://www.totolink.vn
[14] Python-snap7 Documentation, https://python-snap7.readthedocs.io/en/latest/
APPENDIX
Code of the project:
import cv2
import snap7
from snap7.util import *
from snap7.types import *
from tkinter import *
import tkinter as tkinter
from tkinter import messagebox
from PIL import ImageTk, Image
import numpy as np
import json
import os
import time
import traceback
from base64 import b16encode
from datetime import datetime, timedelta
from tkinter import ttk
import openpyxl
from openpyxl import Workbook
camera = cv2.VideoCapture(0)
plc = snap7.client.Client()
IP = '169.254.142.100'
RACK = 0
SLOT = 1
plc.connect(IP, RACK, SLOT)
def Current_Time():
    return datetime.now().strftime("%d/%m/%Y %H:%M:%S")

def ReadMemory(plc, byte, bit, datatype):
    result = plc.read_area(Areas['MK'], 0, byte, datatype)
    if datatype == S7WLBit:
        return get_bool(result, 0, bit)
    elif datatype == S7WLByte or datatype == S7WLWord:
        return get_int(result, 0)
    elif datatype == S7WLReal:
        return get_real(result, 0)
    elif datatype == S7WLDWord:
        return get_dword(result, 0)
    else:
        return None
def Enter_Command(self):
    # 'Cảnh báo' = 'Warning'; the prompt asks: 'Do you want to log in to the account?'
    if messagebox.askquestion('Cảnh báo', 'Bạn muốn đăng nhập vào tài khoản?', icon='warning', parent=self.root) == "yes":
        if len(self.Password_Variable.get()) > 0 and len(self.Account_Variable.get()) > 0:
            if self.Password_Variable.get() == "admin123" and self.Account_Variable.get() == "admin":
                self.Frame.place_forget()
                self.myStart()
            else:
                # 'Thông báo' = 'Notice': 'The account or password is incorrect.'
                result = messagebox.showinfo('Thông báo', 'Tài khoản hoặc mật khẩu không chính xác.', icon='warning', parent=self.root)
        else:
            # 'Notice': 'Please enter both the account and the password.'
            result = messagebox.showinfo('Thông báo', 'Vui lòng nhập đầy đủ tài khoản và mật khẩu.', icon='warning', parent=self.root)
    else:
        pass
def myStart(self):
    self.Start_Frame = Frame(self.root, height=720, width=1080)
    self.Start_Frame.place(x=0, y=0)
    self.BG_photo = ImageTk.PhotoImage(Image.open("./Image/background.png"))
    self.BG_Label = Label(self.Start_Frame, image=self.BG_photo)
    self.BG_Label.place(x=0, y=0)
    self.ON_Photo = ImageTk.PhotoImage(Image.open("./Image/On.png"))
    self.OFF_Photo = ImageTk.PhotoImage(Image.open("./Image/Off.png"))
    self.Fail_Photo = ImageTk.PhotoImage(Image.open("./Image/Fail.png"))
    self.Good_Photo = ImageTk.PhotoImage(Image.open("./Image/Good.png"))
def Auto_Command(self):
    WriteMemory(plc, 10, 4, S7WLBit, 0)
    WriteMemory(plc, 15, 0, S7WLBit, 1)
    self.Auto_Button.configure(bg="green")
    self.Manual_Button.configure(bg="silver")

def Start_Command(self):
    WriteMemory(plc, 12, 0, S7WLBit, 1)
    WriteMemory(plc, 12, 1, S7WLBit, 0)
    WriteMemory(plc, 10, 0, S7WLBit, 0)
    WriteMemory(plc, 10, 1, S7WLBit, 0)
    WriteMemory(plc, 10, 2, S7WLBit, 0)
    WriteMemory(plc, 10, 3, S7WLBit, 0)
    self.Start_Button.configure(bg="green")

def Stop_Command(self):
    WriteMemory(plc, 12, 0, S7WLBit, 0)
    WriteMemory(plc, 12, 1, S7WLBit, 1)
    WriteMemory(plc, 10, 5, S7WLBit, 0)
    WriteMemory(plc, 10, 6, S7WLBit, 0)
    WriteMemory(plc, 10, 7, S7WLBit, 0)
    WriteMemory(plc, 11, 0, S7WLBit, 0)
    self.Start_Button.configure(bg="silver")
def Run(self):
    try:
        self.Contours_X = []
        self.Contours_Y = []
        self.Contours_W = []
        self.Contours = []
        self.Liquid_State = 0
        self.Lid_State = 0
        self.Label_State = 0
        # Grab a frame, convert the color space and normalize the orientation.
        _, self.frame = camera.read()
        self.frame = cv2.cvtColor(self.frame, cv2.COLOR_BGR2RGB)
        self.frame = cv2.resize(self.frame, (480, 320))
        self.frame = cv2.rotate(self.frame, cv2.ROTATE_90_COUNTERCLOCKWISE)
        # First pass: saturated regions -> cap ("Lid") and sticker ("Label").
        img_hsv = cv2.cvtColor(self.frame, cv2.COLOR_BGR2HSV)
        lower = np.array([0, 50, 50])
        upper = np.array([179, 255, 200])
        mask = cv2.inRange(img_hsv, lower, upper)
        result = cv2.bitwise_and(self.frame, self.frame, mask=mask)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if len(contours) > 0:
            for i in range(0, len(contours)):
                x, y, w, h = cv2.boundingRect(contours[i])
                if h < 50 or w < 50:
                    mask[y:y+h, x:x+w] = 0          # drop small, noisy contours
                else:
                    cv2.rectangle(self.frame, (x, y), (x+w, y+h), (4, 200, 4), 2)
                    self.Contours_X.append(x)
                    self.Contours_Y.append(y)
                    self.Contours_W.append(w)
                    self.Contours.append((x, y, w, h))
            for i in range(0, len(self.Contours)):
                # Narrow contours near the top of the frame are the cap.
                if self.Contours[i][2] < 100 and self.Contours[i][1] < 250:
                    cv2.putText(self.frame, "Lid", (self.Contours[i][0], self.Contours[i][1]),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.75, (255, 0, 0), 2)
                    self.Lid_State = 1
                else:
                    cv2.putText(self.frame, "Label", (self.Contours[i][0], self.Contours[i][1]),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.75, (255, 0, 0), 2)
                    self.Label_State = 1
        # Second pass: dark regions -> liquid, with cap/label areas masked out.
        lower = np.array([0, 0, 0])
        upper = np.array([150, 190, 75])
        mask = cv2.inRange(img_hsv, lower, upper)
        result = cv2.bitwise_and(self.frame, self.frame, mask=mask)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if len(contours) > 0:
            for i in range(0, len(self.Contours)):
                mask[self.Contours[i][1]:self.Contours[i][1] + self.Contours[i][3],
                     self.Contours[i][0]:self.Contours[i][0] + self.Contours[i][2]] = 0
            for i in range(0, len(contours)):
                x, y, w, h = cv2.boundingRect(contours[i])
                # (the logic that sets self.Liquid_State from these contours is cut off in the extracted listing)
        # Update the pass/fail indicators in the GUI.
        if self.Lid_State == 1:
            self.Lit_Label.configure(image=self.Good_Photo)
        else:
            self.Lit_Label.configure(image=self.Fail_Photo)
        if self.Label_State == 1:
            self.Coca_Label.configure(image=self.Good_Photo)
        else:
            self.Coca_Label.configure(image=self.Fail_Photo)
        if ReadMemory(plc, 11, 6, S7WLBit) == True:
            if time.time() - self.Timer > 3:
                self.Result = ""
                if self.Label_State == 0:
                    self.Result = self.Result + "Nhãn"       # "Nhãn" = label error
                if self.Lid_State == 0:
                    self.Result = self.Result + " Nắp"       # "Nắp" = cap error
                if self.Liquid_State == 0:
                    self.Result = self.Result + " Thể tích"  # "Thể tích" = volume error
                if self.Label_State == 0 or self.Lid_State == 0 or self.Liquid_State == 0:
                    WriteMemory(plc, 10, 5, S7WLBit, 1)      # tell the PLC to reject
                    print("write")
                if self.Liquid_State == 1 and self.Lid_State == 1 and self.Label_State == 1:
                    self.Result = self.Result + "Đạt"        # "Đạt" = pass
                self.Treeview_List.insert("", "end", value=(str(self.Result),), text=Current_Time())
                WriteMemory(plc, 11, 6, S7WLBit, 0)
                print(self.Result)
                # Append the result to the Excel log.
                self.row = int(self.Sheet.cell(row=1, column=2).value) + 1
                self.Sheet.cell(row=1, column=2).value = self.row
                self.Sheet.cell(row=self.row, column=1).value = Current_Time()
                self.Sheet.cell(row=self.row, column=2).value = self.Result
                self.Workbook.save("/home/pi/Desktop/Project/Data.xlsx")
        else:
            self.Timer = time.time()
        self.Display = Image.fromarray(self.frame)
        self.Display = ImageTk.PhotoImage(self.Display)
        self.my_Image1.configure(image=self.Display)
        self.my_Image1.image = self.Display
    except:
        print(traceback.print_exc())
    root.after(1, self.Run)
# myApp is the application class whose methods are listed above.
root = Tk()
app = myApp(root)
root.mainloop()