
EASTERN INTERNATIONAL UNIVERSITY

SCHOOL OF ENGINEERING

CAPSTONE PROJECT 3

Design and Implement a System for Classifying and Detecting


Errors on Bottles
Tran Do Nhat Duong – IRN: 1631100015
Nguyen Le Thanh Quang – IRN: 1831100034

Submitted for the Degree of Engineer in Control – Automation and Control


Engineering

September, 2023
Comment by Supervisor

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

Binh Duong, ............................., 2023

Signature

Comment by Reviewer

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

.............................................................................................................................................

Binh Duong, .............................., 2023

Signature

ABSTRACT

Along with the outstanding development of science and technology, automation has
become one of the indispensable fields of modern industry. It applies scientific and
technological advances to production in order to improve product quality and reduce
manual labor, creating conditions for society to develop and for human knowledge to
improve.
In this way, people can access the latest science and technology and apply it in
practice to improve economic and labor efficiency. With the consent of the Faculty of
Automation and of our supervisor, Mr. Tran Van Luan, we chose the graduation topic:
“Design and implementation of a classification and error detection system on bottles”.
Our goal in choosing this topic is to design a system that classifies defective bottles
and finished products and identifies their errors, in order to monitor and control the
production process in the factory.
The desired result is a line that identifies and classifies defective bottles using
machine learning technology, aiming at the application and research of artificial
intelligence technologies in production. Through this project, we can master programming
and learn new technologies, especially machine learning and artificial intelligence.
Keywords: Computer Vision, Python, OpenCV, Cap Position, Liquid Level Inspection,
Label Detection

ACKNOWLEDGEMENTS

We would like to express our gratitude to the individuals who have contributed to the
completion of this project.
Firstly, we would like to thank Mr. Tran Van Luan, our supervisor, for his guidance and
support throughout this term.
We are also grateful to our lecturers for providing us with the necessary facilities and
resources to complete this project under the best conditions.
Finally, we would like to thank our family for their unwavering support throughout our
university years.

LIST OF ABBREVIATIONS

OpenCV Open Source Computer Vision Library


YOLO You Only Look Once
CNN Convolutional Neural Network
SGD Stochastic Gradient Descent
GPU Graphics Processing Unit
TPU Tensor Processing Unit
VNC Virtual Network Computing
GUI Graphical User Interface

TABLE OF CONTENTS

ABSTRACT .................................................................................................. iii

ACKNOWLEDGEMENTS ......................................................................... iv

LIST OF ABBREVIATIONS....................................................................... v

TABLE OF CONTENTS .............................................................................. vi

LIST OF FIGURES.................................................................................... viii

LIST OF TABLES ........................................................................................ x

Chapter 1: Introduction................................................................................ 1
1.1 Motivation.....................................................................................................................................................1

1.2 Related works .............................................................................................................................................2

1.2.1 Machine vision-based plastic bottle inspection ............................ 2

1.2.2 Feature extraction algorithm for fill level and cap inspection in
bottling machine ............................................................................................. 4

1.3 Objectives ......................................................................................................................................................7

1.4 Research Methods ......................................................................................................................................8

1.5 Organization ...............................................................................................................................8

Chapter 2: Theoretical Basis ...................................................................... 10


2.1 Pixel .............................................................................................................................................................. 10

2.2 RGB ............................................................................................................................................................. 10

2.3 Grayscale.................................................................................................................................................... 11

2.4 Histogram equalization ........................................................................................................... 12

2.5 Gaussian Blur ........................................................................................................................................... 13

2.6 Thresholding ............................................................................................................................................. 14

2.7 Boundary extraction............................................................................................................................... 14

2.8 Closing......................................................................................................................................................... 15

2.9 The algorithm of system ........................................................................................................................ 16

Chapter 3: Hardware Implementation ..................................................... 17


3.1 Hardware components .......................................................................................................................... 17

3.1.1 Raspberry pi 4B ............................................................................... 17

3.1.2 PLC S7-1200 ..................................................................................... 18

3.1.3 Cylinder 2ty ...................................................................................... 20

3.1.4 Pneumatic solenoid valve SMC ...................................................... 21

3.1.5 Conveyor .......................................................................................... 22

3.1.6 Conveyor’s motor ............................................................................ 23

3.2 Technology analysis ................................................................................................................................ 24

3.3 Hardware wiring and model ....................................................................................................................... 25

Chapter 4: Software .................................................................................... 26


4.1 YOLOv4 Tiny .......................................................................................................................................... 26

4.1.1 Yolov4-tiny network structure ........................................................ 26

4.1.2 Yolov4-tiny prediction process ...................................................... 27

4.1.3 Yolov4-tiny loss function ................................................................ 27

4.2 Image source use for training............................................................................................................... 29

4.3 Tkinter......................................................................................................................................................... 31

4.4 Monitor and control with Tkinter ............................................................................................................ 32

4.5 TCP/IP model, Link between Raspberry Pi 4 and PLC S7 1200 with Snap7.......................... 33

Chapter 5: Conclusion ................................................................................ 39


5.1 Result and evaluation ............................................................................................................................. 39

5.2 Summary and Conclusion.................................................................................................................... 42

5.3 Future work............................................................................................................................................... 42

BIBLIOGRAPHY ................................................................................................................................................ 43

APPENDIX ............................................................................................................................................... 44

LIST OF FIGURES

Figure 1.1 Classification with Raspberry Pi ........................................................................ 1

Figure 1.2.1.1 The plastic bottle inspection system............................................................. 2

Figure 1.2.1.2 Input and output of normal and error label’s images ................................... 3

Figure 1.2.1.3 Input and output of normal and error label’s images ................................... 3

Figure 1.2.2.1 The system of filling level and cap inspection ............................................. 4

Figure 1.2.2.2 The original image and after processing ...................................................... 5

Figure 1.2.2.3 The image of a bottle with no cap and unfixed cap ..................................... 5

Figure 1.2.2.4 The possible errors occur with a bottle ........................................................ 6

Figure 1.3.1 The automated visual inspection system ....................................................... 7

Figure 1.3.2 Simple algorithm model .................................................................................. 8

Figure 2.1 A pixel of an image .......................................................................................... 10

Figure 2.2 RGB color space with primary and secondary colors ...................................... 10

Figure 2.3 Original image and after applying lightness grayscale .................................... 11

Figure 2.4.Histogram equalization .................................................................................... 12

Figure 2.5 Normal image and blur image .......................................................................... 13

Figure 2.6 Original image and after applying thresholding ............................................... 14

Figure 2.7 Original image and after applying boundary extraction................................... 15

Figure 2.8 Image before and after applying closing .......................................................... 15

Figure 2.9 The algorithm of the system ............................................................................. 16

Figure 3.1.1 Raspberry Pi 4B ............................................................................................ 17

Figure 3.1.1 Raspberry Pi’s GPIO ..................................................................................... 17

Figure 3.1.2.1 PLC S7-1200 and modules ......................................................................... 18

Figure 3.1.2.1 Structure of the Siemens CPU S7-1200 controller ...................................... 19

Figure 3.1.3 Cylinder 2 ty TN ........................................................................................... 20

Figure 3.1.4 Pneumatic solenoid valve SMC .................................................................... 21


Figure 3.1.5 Conveyor ....................................................................................................... 22

Figure 3.1.6 DC Motor ...................................................................................................... 23

Figure 3.2 Actual model .................................................................................................... 24

Figure 3.3.1 Wiring diagram ............................................................................................. 25

Figure 3.3.2 Hardware design ............................................................................................ 25

Figure 4.1 YOLO ............................................................................................................... 26

Figure 4.1.1 The architecture of YOLOv4-Tiny network ................................................. 27

Figure 4.2.1 Label the water level when there is no label on the bottle and bottle cap ..... 29

Figure 4.2.2 Label the bottle label and bottle cap when the water level is not enough ..... 29

Figure 4.2.3 Label the label when there is no liquid and bottle cap .................................. 30

Figure 4.2.4 Label all the targets cap, label and liquid ...................................................... 30

Figure 4.3 Tkinter .............................................................................................................. 31

Figure 4.4 Monitor And Control Screen ............................................................................ 32

Figure 4.5.1 The TCP/IP stack .......................................................................................... 33

Figure 4.5.2 The PLC code ................................................................................................ 38

Figure 5.1.1 The hardware ................................................................................................. 39

Figure 5.1.2 The result for each error ................................................................................ 41

LIST OF TABLES

Table 1. The Capstone 2 schedule ......................................................................................... 9

Table 2. The Capstone 3 schedule ......................................................................................... 9

Table 3. 12V DC motor specification .................................................................................. 23

Table 4. Our system evaluation ........................................................................................... 39

Chapter 1: Introduction
1.1 Motivation

Figure 1.1 Classification with Raspberry Pi


In the current era, machines are gradually replacing human labor. With the
development of automation technology, the manufacturing industry is evolving and
achieving advances that touch many stages of the production line.
In today’s production lines, error detection and classification systems are essential
for ensuring the reliability and accuracy of systems, processes, and data. Some
noteworthy benefits are:
- Ensuring Product Quality: In industrial production, even a small error can cause
significant quality issues and impact the product's reliability, safety, or
performance. By detecting and classifying errors early in the production process,
it is possible to minimize the impact on the final product's quality.
- Reducing Costs: Errors in the production line can lead to waste, rework, and
scrap, which can be expensive to fix. By detecting and classifying errors quickly,
it is possible to reduce the cost of fixing mistakes and minimize the impact on
production schedules.
- Improving Efficiency: Error detection and classification can help identify
inefficiencies in the production process, such as bottlenecks, downtime, or
equipment failures. By addressing these issues, it is possible to improve
production efficiency, reduce cycle time, and increase throughput.
- Enhancing Safety: Errors in production lines can pose safety risks for workers,
equipment, or the environment. By detecting and classifying errors, it is possible
to address safety issues and implement measures to prevent accidents and
injuries.
Given the importance of classification, its algorithms should be mentioned.
Classification algorithms are used to classify or categorize data based on a set of
predefined classes or categories. There are several common types of classification
algorithms, but we mention only the one that will be used: neural networks, a set of
algorithms inspired by the structure and function of the human brain. They are used for
complex classification problems and work by creating layers of interconnected nodes that
process the data and produce the classification output.
In summary, error detection and classification systems are critical in industrial and
production settings to ensure product quality, reduce costs, improve efficiency, ensure
compliance, and enhance safety.

1.2 Related works

1.2.1 Machine vision-based plastic bottle inspection

The reference project: M. Kazmi, B. Hafeez, H. R. Khan, and S. A. Qazi,
“Machine-Vision-Based Plastic Bottle Inspection for Quality Assurance,” 2022. [1]

Figure 1.2.1.1 The plastic bottle inspection system

The authors used image processing to determine, from the Canny edges, whether the
label is aligned or misaligned.

Figure 1.2.1.2 Input and output of normal and error label’s images
For label detection: apply the Canny edge detector to obtain the strong edges of the
label → perform a full scan to remove any unwanted pixels that do not constitute the
edge → calculate the distance between the minimum and maximum points on the two edges
of the label and compare it to that of an aligned label → if both edges are parallel,
the distance is zero and the sticker is aligned; otherwise the label is misaligned.

Figure 1.2.1.3 Input and output of normal and error label’s images

For cap detection: the ROI, consisting of the blue cap in the upper portion of the
bottle, is extracted using a hard threshold and binarization → the Harris corner
detector is applied to detect the corners of the cap → a reference line is drawn
joining the extreme corner points → the distance to this line is calculated and
compared with the known threshold value of a seated cap to decide whether the cap is
seated or not.
The study was successful in detecting the cap and label of plastic bottles using the
Harris corner detector and Canny edge detection. While the study showed that the
algorithm can be used for label detection, the system still needs further development
before it can be used on manufacturing lines where products are constantly moving.

1.2.2 Feature extraction algorithm for fill level and cap inspection
in bottling machine

The reference project: L. Yazdi, A. S. Prabuwono, and E. Golkar, “Feature Extraction
Algorithm for Fill Level and Cap Inspection in Bottling Machine,” 2011. [2]

Figure 1.2.2.1 The system of filling level and cap inspection


This project addresses the issue of inspecting the quality of cap closures and
performing checks for overfilling or underfilling of liquid in bottles. The system is capable
of categorizing three different scenarios for cap conditions and three different scenarios
for the level of liquid in the bottles.

Figure 1.2.2.2 The original image and after processing
For liquid level: take an image → apply grayscale conversion → use the Canny
algorithm to find edges → draw a horizontal line at the bottom of the bottle cap as a
reference line and another line parallel to it to form the Reg1 region → the Reg2
region is the area from the bottom of the bottle cap to the reference water level line →
determine the average line from the distance between the reference line and the bottom
edge of the Reg2 region → compare the current average with that of the sample bottle.
For cap detection: take an image → apply grayscale conversion → use the Canny
algorithm to find edges → draw a horizontal line at the bottom of the bottle cap as a
reference line and another line parallel to it to form the Reg1 region → the Reg2
region is the area from the bottom of the bottle cap to the top of the cap → determine
the average line from the distance between the reference line and the bottom edge of the
Reg2 region → compare the current average with that of the sample bottle.

Figure 1.2.2.3 The image of a bottle with no cap and unfixed cap

Figure 1.2.2.4 The possible errors occur with a bottle

1.3 Objectives
The automated visual inspection system comprises two primary subsystems. The
first subsystem, known as the image acquisition subsystem, relies on hardware
components. Its purpose is to convert the optical scene into numerical data, which is then
received by the processing platform. This subsystem consistently includes four key
elements: the camera, lens, lighting system, and processing platform. The second
subsystem, referred to as the image processing subsystem, operates on software principles.
It primarily employs image processing methods that are specifically developed to analyze
the acquired data and generate the final inspection result.
While systems utilizing image processing techniques for fill level monitoring have
been developed and made available in the industry, they tend to be costly and are
predominantly adopted by large manufacturing companies. For research and educational
purposes, we utilize only basic devices that are already in our possession or that can be
acquired.

Figure 1.3.1 The automated visual inspection system


In this topic, we will build a model to classify defective products on a conveyor
belt. The conveyor transports the newly produced bottles past a camera connected to a
pre-programmed Raspberry Pi 4, which outputs a signal to control the cylinders installed
beside the conveyor belt. The cylinders push defective products off the belt, while good
products continue to the end of the line to be packaged.

Figure 1.3.2 Simple algorithm model

1.4 Research Methods


The topic "Product Classification Vision System using Raspberry Pi" has been
researched and implemented by students at many universities, and many students have
designed simple models of it. Such a model has also been designed and put into use in a
number of factories and is a typical mechatronic product, so in carrying out the project
the authors applied the following research methods:
Sequential and concurrent methods: a combination of sequential and concurrent
design. First, study the specific model; then build a model that contains all the
intentions to be included in the design, thereby giving an overview of the general
system and defining its basic parameters. Alternatively, make assessments of the system
(working capacity, conveyor belt speed, load-bearing capacity, limits of the mechanical
and electrical indicators, system productivity, and so on).
Computer monitoring methods: use specialized software to simulate and monitor
how the functions work and how product grading occurs.

1.5 Organization
Chapter 1: We introduce the starting point of our project, existing projects related
to our work, our desired goals, our research methods, and our working plans.
Chapter 2: We discuss the theory and algorithms behind our system.
Chapter 3: We present the hardware used in the system.
Chapter 4: We present the software used in the system.
Chapter 5: We evaluate the results and summarize the project.

1.6 Plans
Table 1. The Capstone 2 schedule
The expected Capstone 2 schedule in semester 3
Time      Done by        Details

2 weeks   Quang          Designing and making hardware; testing the Raspberry Pi camera

3 weeks   Duong          Installing OpenCV and finding usable libraries; preparing the
                         environment for YOLOv4

1 week    Quang          Installing YOLOv4 and testing object detection in real time

2 weeks   Duong          Writing Python code that suits our desired goal

2 weeks   Quang, Duong   Importing the necessary libraries and beginning to train and
                         optimize CNN models

2 weeks   Quang          Writing the report

Table 2. The Capstone 3 schedule


The expected Capstone 3 schedule in semester 4
Time      Done by        Details

1 week    Duong          Discussing with the supervisor for guidance and getting to work

2 weeks   Duong          Coding for the project

2 weeks   Quang          Designing the HMI view

2 weeks   Quang, Duong   Testing the project

2 weeks   Quang, Duong   Selecting the content to prepare and writing the report

1 week    Quang          Finishing the report

Chapter 2: Theoretical Basis
2.1 Pixel

Figure 2.1 A pixel of an image


A pixel is the smallest controllable element of a picture represented on a screen. It’s the
fundamental component that makes up an image or video on a digital display, potentially
numbering in the millions. Each pixel is made up of subpixels that emit red, green, and
blue (RGB) light at varying intensities. These RGB components combine to produce a
wide range of colors that we see on a computer monitor or display.

2.2 RGB
RGB (red, green and blue) refers to a system representing the colors used on a digital display
screen. Red, green and blue can be combined in various proportions to obtain any color in
the visible spectrum.
The RGB model uses 8 bits for each of the red, green and blue channels, so each channel
has values ranging from 0 to 255. This translates into millions of colors -- 16,777,216
possible colors to be precise.
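The arithmetic behind that figure is easy to verify:

```python
# 8 bits per channel gives 2**8 = 256 levels; the three channels combine
# independently, so the total number of representable colors is 256**3.
levels_per_channel = 2 ** 8
total_colors = levels_per_channel ** 3
print(total_colors)  # 16777216
```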

Figure 2.2 RGB color space with primary and secondary colors
2.3 Grayscale
Grayscale refers to a range of monochromatic shades, devoid of any discernible color. On a
display screen, each pixel in a grayscale image represents a certain level of light intensity,
from the lowest (black) to the highest (white). Grayscale carries only information about light
intensity, not color.
There are several common methods for converting RGB to grayscale, but this project uses
only the lightness method. [3]
- The lightness method takes the average of the components with the highest and lowest
values:

Grayscale = (min(R, G, B) + max(R, G, B)) / 2

Figure 2.3 Original image and after applying lightness grayscale


The lightness method tends to reduce contrast and presents a weakness since one RGB
component is not used. This is definitely a problem because the amount of lightness that our
eye perceives depends on all three basic colors.
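The lightness formula above maps directly to a few lines of NumPy; the function name is ours:

```python
import numpy as np

def lightness_grayscale(rgb):
    """Per-pixel lightness: (min(R, G, B) + max(R, G, B)) / 2."""
    channels = rgb.astype(np.float32)
    gray = (channels.max(axis=2) + channels.min(axis=2)) / 2
    return gray.astype(np.uint8)
```

For example, a pure red pixel (255, 0, 0) maps to (255 + 0) / 2 = 127, illustrating the weakness noted above: one channel's contribution is discarded entirely.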

2.4 Histogram equalization

Figure 2.4 Histogram equalization


In the histogram, most of the pixels have intensity values between 0 to 50. The little
part of the image that the human eye can recognize as white is also black. The goal is to
balance the brightness level over the over the whole brightness scale. By doing so we can
achieve more contrast in the image and that method is called Histogram Equalization.
Note:
- L is the maximum value a pixel can achieve.
- Pr(r) is probability density function (pdf) of the image before equalization.
- Ps(s) is pdf of the image after performing equalization and also an equalized
histogram that is uniformly distributed among all the possible values
The transfer function shown below converts Pr(r) to Ps(s):
s = T(r) = (L − 1) ∫₀ʳ Pr(w) dw
Then, differentiating s with respect to r gives
ds/dr = (L − 1) Pr(r)
The relation between Pr(r) and Ps(s) can then be obtained as
Ps(s) = Pr(r) |dr/ds| = 1 / (L − 1), for 0 ≤ s ≤ L − 1
So, as can be seen, Ps(s) is a uniform (normalized) distribution: equalization of the histogram can be achieved by the assumed transfer function.
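The procedure can be sketched with NumPy, mapping each intensity through the scaled cumulative distribution (a minimal illustration; the function and variable names are our own):

```python
import numpy as np

def equalize_histogram(img, L=256):
    """Histogram equalization for an 8-bit grayscale image: map each
    intensity r through the scaled CDF, s = (L - 1) * sum_{k <= r} p_r(k)."""
    hist = np.bincount(img.ravel(), minlength=L)
    cdf = np.cumsum(hist) / img.size           # cumulative distribution in [0, 1]
    lut = np.round((L - 1) * cdf).astype(np.uint8)
    return lut[img]

# A dark image with intensities only in 0..49 gets stretched over 0..255.
dark = np.tile(np.arange(50, dtype=np.uint8), (50, 1))
out = equalize_histogram(dark)
print(dark.max() - dark.min(), out.max() - out.min())
```

After equalization the intensity range is close to the full 0–255 scale, which is exactly the contrast gain described above.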

2.5 Gaussian Blur


The Gaussian blur effect is achieved by applying a blur or smoothing operation to an image using a Gaussian function. This process reduces noise and unwanted details in the image while preserving the overall structure. It can be thought of as a filter that keeps the lower-frequency components of the image. The blurring is performed by convolving the image with a Gaussian kernel, resulting in a smoother, less noisy appearance.
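Because the Gaussian kernel is separable, the 2-D convolution can be sketched as two 1-D passes (an illustrative NumPy sketch; kernel size and sigma are assumed values):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """1-D Gaussian kernel, normalized so its coefficients sum to 1."""
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, size=5, sigma=1.0):
    """Separable Gaussian blur: convolve along rows, then along columns."""
    k = gaussian_kernel(size, sigma)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img.astype(float))
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, blurred)
    return blurred

# Blurring a single bright pixel spreads its energy to its neighbours
# while the total energy is preserved (the kernel sums to 1).
img = np.zeros((9, 9)); img[4, 4] = 1.0
out = gaussian_blur(img)
print(round(out[4, 4], 3), round(out.sum(), 3))
```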


Figure 2.5 Normal image and blur image

2.6 Thresholding
Gray-level thresholding is an efficient and widely used method for image segmentation. It
is especially powerful in combination with preprocessing steps such as background
illumination correction and top hat filtering, where the object and background classes are
well separated in gray-level.
Thresholding is the process of converting an input image into an output binary image
that is segmented. [4]

g(i, j) = 1 if f(i, j) ≥ T, and g(i, j) = 0 otherwise
Note:
- T is the threshold
- g(i, j) = 1 for image elements of objects
- g(i, j) = 0 for image elements of the background (or vice versa).
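The rule above is a one-liner in NumPy (a minimal sketch; the threshold value is arbitrary):

```python
import numpy as np

def threshold(img, T):
    """Binary thresholding: g(i, j) = 1 where f(i, j) >= T, else 0."""
    return (img >= T).astype(np.uint8)

img = np.array([[10, 200],
                [90, 130]], dtype=np.uint8)
print(threshold(img, 128))  # [[0 1], [0 1]]
```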

Figure 2.6 Original image and after applying thresholding

2.7 Boundary extraction


Boundary extraction is a technique used in image processing to separate an object from
its background and is often used for feature extraction, object recognition, and
segmentation purposes. [5]
First, set A is eroded by a structuring element B; this operation causes the boundaries of A to shrink inward. Then, the set difference between A and its eroded version is taken, which yields the boundary of A, denoted β(A):
β(A) = A − (A ⊖ B)
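The two steps above (erosion, then set difference) can be sketched for binary images in NumPy, assuming a 3×3 square structuring element:

```python
import numpy as np

def erode(A):
    """Binary erosion with a 3x3 square structuring element (zero-padded border):
    a pixel survives only if its whole 3x3 neighbourhood is inside A."""
    P = np.pad(A, 1)
    out = np.ones_like(A)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= P[1 + dy : 1 + dy + A.shape[0], 1 + dx : 1 + dx + A.shape[1]]
    return out

def boundary(A):
    """beta(A) = A minus its erosion: the pixels of A that touch the background."""
    return A & (1 - erode(A))

# The boundary of a filled 5x5 square is its 16-pixel outer ring.
A = np.zeros((7, 7), dtype=np.uint8); A[1:6, 1:6] = 1
print(boundary(A))
```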

Figure 2.7 Original image and after applying boundary extraction
2.8 Closing
Closing is a morphological operation that combines dilation and erosion into a single, powerful operator. When applied, it brings nearby objects together, resulting in a more cohesive image. Closing smooths contour areas, and it also merges small gaps, repairs minor breaks, eliminates small holes, and fills gaps within objects. [6]
A • B = (A ⊕ B) ⊖ B
A • B, the closing of image A by structuring element B, is a dilation followed by an erosion.
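The dilation-then-erosion sequence can be sketched for binary images in NumPy, again assuming a 3×3 square structuring element:

```python
import numpy as np

def dilate(A):
    """Binary dilation with a 3x3 square structuring element (zero-padded border)."""
    P = np.pad(A, 1)
    out = np.zeros_like(A)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= P[1 + dy : 1 + dy + A.shape[0], 1 + dx : 1 + dx + A.shape[1]]
    return out

def erode(A):
    """Binary erosion with the same 3x3 structuring element."""
    P = np.pad(A, 1)
    out = np.ones_like(A)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= P[1 + dy : 1 + dy + A.shape[0], 1 + dx : 1 + dx + A.shape[1]]
    return out

def closing(A):
    """A closed by B: dilation followed by erosion, which fills small gaps."""
    return erode(dilate(A))

# Two segments separated by a one-pixel gap are merged by closing.
A = np.zeros((5, 7), dtype=np.uint8)
A[2, 1:3] = 1; A[2, 4:6] = 1       # gap at column 3
print(closing(A)[2])
```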

Figure 2.8 Image before and after applying closing

2.9 The algorithm of the system

Figure 2.9 The algorithm of the system

Chapter 3: Hardware Implementation
3.1 Hardware components

3.1.1 Raspberry Pi 4B

Figure 3.1.1 Raspberry Pi 4B


Raspberry Pi is a small single-board computer and embedded system. By connecting peripherals such as a keyboard, mouse, and display, the Raspberry Pi acts as a mini personal computer.

Raspberry Pi is popularly used for real-time image and video processing, IoT-based applications, and robotics applications.

Figure 3.1.1 Raspberry Pi’s GPIO

Raspberry Pi is more than a computer, as it provides access to the on-chip hardware, i.e. GPIOs, for developing applications. Through the GPIO pins, we can connect and control devices such as LEDs, motors, and sensors.

It has an ARM-based Broadcom SoC with an on-chip GPU (Graphics Processing Unit).

The CPU speed of a Raspberry Pi varies from 700 MHz to 1.2 GHz, and the on-board SDRAM ranges from 256 MB to 1 GB, depending on the model.

Raspberry Pi also provides on-chip SPI, I2C, I2S and UART modules.

3.1.2 PLC S7-1200


The S7-1200 was launched in 2009 to gradually replace the S7-200 and, compared with the S7-200, offers more outstanding features.
The S7-1200 has a compact design, low cost, and a powerful instruction set, helping to provide complete solutions for the applications in which it is used.
The S7-1200 provides a PROFINET port, supporting the Ethernet and TCP/IP standards.

Figure 3.1.2.1 PLC S7-1200 and modules


Components of the PLC S7-1200 family include:
– 3 compact controllers, available in different versions: wide-range AC, relay, or DC control
– 2 analog and digital input/output expansion boards mounted directly on the CPU, reducing product cost
– 13 different digital and analog signal modules (SM and SB modules)
– 2 RS232/RS485 communication modules for point-to-point (PTP) communication
– 4 additional Ethernet ports
– A stable PS 1207 power-supply module: 115/230 VAC input and 24 VDC output
Figure 3.1.2.2 Structure of the Siemens CPU S7-1200 controller
+ Built-in PROFINET (Ethernet) communication port:
Used to connect computers, HMI screens, or for PLC-to-PLC communication
Used to connect to other devices that support the open Ethernet standard
RJ45 connector with automatic cross-over switching
Transmission speed of 10/100 Mbit/s
Supports 16 Ethernet connections
TCP/IP, ISO-on-TCP, and S7 protocols
+ Measurement, position-control, and process-control features:
6 high-speed counters for counting and measurement applications, including 3 at 100 kHz and 3 at 30 kHz
2 PTO outputs at 100 kHz for speed and position control of stepper motors or servo drives
PWM outputs for motor speed, valve position, or temperature control
16 PID controllers with auto-tune functionality
+ Flexible design:
Input/output signals can be expanded with a signal board mounted directly on the front of the CPU, expanding I/O without changing the size of the control system.
Each CPU can connect up to 8 input/output signal expansion modules.
A 0-10 V analog input is integrated on the CPU.
Up to 3 communication modules can be connected to the CPU to expand communication capabilities, for example RS232 or RS485 modules.
A SIMATIC memory card can be used when the CPU needs more memory, for copying application programs, or for updating firmware.
Errors can be diagnosed online or offline.

3.1.3 TN twin-rod pneumatic cylinder

Figure 3.1.3 TN twin-rod pneumatic cylinder


The TN twin-rod pneumatic cylinder is one of the manufacturer's well-known products. A pneumatic cylinder is a device that converts the energy of supplied compressed air into kinetic energy to perform movements such as pushing, pulling, pressing, and compressing. The impact force of the cylinder depends on its design parameters.
Structure and working principle: the cylinder is composed of several parts and components: barrel, rods, seals, piston, cylinder cover, impact head, air supply port, and compressed-air outlet. It is produced on a modern production line and carefully quality-checked, with stamps, trademarks, and full documentation of origin. Good materials such as steel and stainless steel allow it to resist oxidation, withstand light impacts, and work in many high-temperature environments.
The TN twin-rod cylinder consists of a flat rectangular block, an air inlet port, an air outlet port, a cylinder cover, and two rods.
To control the operation of the cylinder as desired, it is usually paired with an air valve, typically a 3/2 or 5/2 valve. When compressed air is supplied, the air enters the cylinder and acts on the piston, and the resulting force makes the piston move in and out. At the end of the stroke, the cylinder discharges the compressed air and begins a new compression-push cycle; these cycles repeat continuously.

3.1.4 Pneumatic solenoid valve SMC

Figure 3.1.4 Pneumatic solenoid valve SMC


The SMC pneumatic control valve supplies air flow to operating equipment and divides the compressed-air flow according to user requirements. Control valves come in two types: air-piloted valves and solenoid valves. With solenoid valves, customers value the quick opening and closing of the valve port: the transition from normally closed to open (or vice versa) happens almost simultaneously with switching the power on and off.
Solenoid valves are available with many coil voltages, from 12 V to 24 V and 110 V to 220 V, making it easy for customers to choose. With air-piloted control valves, the valve relies on the pressure of the compressed-air flow to open and close the valve port; however, the air source must be strong enough and continuously supplied for the valve to operate. Some SMC air valve types include the VXD solenoid valve and the VF 1000, VF 3000, and VF 5000 solenoid valves.
Advantages of SMC valves: compact design; easy installation with many different thread sizes; high aesthetics; good materials, making them durable, resistant to high temperature and pressure, and capable of continuous, heavy-duty operation.
However, no matter how good the equipment is, if customers do not learn to use it carefully, maintain it well, and clean it periodically, it will not deliver the expected efficiency and service life.

3.1.5 Conveyor

Figure 3.1.5 Conveyor


A conveyor is a mechanism or machine that transports unit loads (cartons, boxes, bags, ...) or bulk materials (soil, powder, food, ...) from a point A to a point B. It saves labor and time and increases productivity, which is why conveyors are among the most important parts of the production and assembly lines of enterprises and factories: they help create a dynamic, well-organized production environment, free up labor, and bring high economic efficiency.
Conveyor applications: many types of conveyor belts are used under different working conditions to transport, pack, and classify goods, and they integrate easily with automatic product-testing machines, automatic strapping machines, and similar equipment. They are used in many industries: component manufacturing and assembly, electronic equipment, car, motorcycle, and electric-bicycle assembly, food production, the medical and pharmaceutical industries, apparel, and footwear.

3.1.6 Conveyor’s motor

Figure 3.1.6 DC Motor


A DC motor (DC stands for direct current) is a direct-drive motor with two wires (a power wire and a ground wire) that rotates continuously.
When power is supplied, the DC motor starts to rotate, converting electrical energy into mechanical energy. Most DC motors rotate at a very high RPM (revolutions per minute) and are used, for example, to drive computer cooling fans or to turn wheels. To control the rotational speed of a DC motor, pulse-width modulation (PWM) is used: a technique that switches the supply on and off rapidly, so that the average power delivered to the motor, and hence its speed, is set by the on/off duty cycle.
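The duty-cycle relationship described above can be illustrated with a small calculation. The linear speed/duty relation below is a simplification that ignores load and motor losses, and the 3500 RPM figure is taken from the specification table that follows:

```python
def pwm_average_voltage(v_supply, duty):
    """Average voltage delivered by a PWM signal at the given duty cycle (0..1)."""
    return v_supply * duty

def approx_speed(rpm_full, duty):
    """Rough no-load speed estimate, assuming speed scales linearly with duty cycle."""
    return rpm_full * duty

print(pwm_average_voltage(12.0, 0.5))   # 6.0 V average at 50% duty
print(approx_speed(3500, 0.5))          # roughly half of the full-speed RPM
```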
Table 3. DC motor specification
Description             Specification
Input Voltage           24 VDC
Current                 Max 6 A
Power                   50 W
Speed                   3500 RPM (12 V)
Length                  5 cm
Width                   3.6 cm
Shaft size and length   3 mm and 17 mm

3.2 Technology analysis

Figure 3.2 Actual model


A camera detects the fault and sends a signal back to the PLC, which runs the designed mechanisms.
Conveyor:
+ Aluminum frame.
+ The conveyor surface is made of PVC wrapped around 2 rollers; the product is placed on the conveyor belt.
+ Two rollers placed at the 2 ends of the aluminum frame.
• Product catcher
• Four optical sensors, mounted next to the conveyor:
- Sensor 1: detects products with a bottle-cap error
- Sensor 2: detects products with a label error
- Sensor 3: detects products with a water-level error
- Sensor 4: detects the object at the first position on the conveyor
• Three product-thrust cylinders, Cylinder 1, Cylinder 2, and Cylinder 3, mounted just below the optical sensors.
- Cylinder 1 pushes products with a bottle-cap error
- Cylinder 2 pushes products with a label error
- Cylinder 3 pushes products with a water-level error
• There are three types of defective products: bottle-cap error, label error, and water-level error
+ Upper part of the bottle: cap error
+ Middle part of the bottle: label error
+ Lower part of the bottle: water-level error
• An intermediate relay is used to switch the contacts and protect the outputs of the PLC
• The system is controlled and monitored remotely from a computer through the S7-1200 PLC software

3.3 Hardware wiring and model

Figure 3.3.1 Wiring diagram

Figure 3.3.2 Hardware design


Chapter 4: Software
4.1 YOLOv4 Tiny

Figure 4.1 YOLO


YOLOv4-tiny is proposed based on YOLOv4 to simplify the network structure and reduce the number of parameters, which makes it suitable for development on mobile and embedded devices. To improve the real-time performance of object detection, a fast object detection method based on YOLOv4-tiny is proposed. [7]

4.1.1 YOLOv4-tiny network structure


The YOLOv4-tiny method is designed based on the YOLOv4 method to achieve a faster object detection speed on embedded systems and mobile devices.
YOLOv4-tiny uses the CSPDarknet53-tiny network as its backbone, which uses the CSPBlock module of the cross-stage partial network.
The CSPBlock module:
- Lets the gradient flow propagate along two different network paths, increasing the correlation difference of the gradient information.
- Enhances the learning ability of the convolutional network compared with the ResBlock module.

Although the CSPBlock increases computation by 10%-20%, it improves accuracy.
YOLOv4-tiny uses the LeakyReLU function as the activation function in the CSPDarknet53-tiny network to keep the computation simple.

The LeakyReLU function is:
f(xᵢ) = xᵢ, if xᵢ ≥ 0
f(xᵢ) = xᵢ / aᵢ, if xᵢ < 0
where aᵢ ∈ (1, +∞) is a constant parameter.
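The piecewise definition above can be sketched in NumPy (a = 10 is an assumed value for illustration, giving a negative-side slope of 0.1):

```python
import numpy as np

def leaky_relu(x, a=10.0):
    """LeakyReLU: f(x) = x for x >= 0, and x / a for x < 0."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, x, x / a)

print(leaky_relu([-10.0, 0.0, 5.0]))  # [-1.  0.  5.]
```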
YOLOv4-tiny uses a feature pyramid network to extract feature maps at different scales, increasing object detection speed. It uses two feature-map scales, 13×13 and 26×26, to predict the detection results.


Figure 4.1.1 The architecture of YOLOv4-Tiny network


4.1.2 YOLOv4-tiny prediction process
YOLO takes a different approach by applying a single neural network to the full image. The network divides the input into an S×S grid and performs detection in each cell.
Each cell predicts bounding boxes along with a confidence for each box. These confidence scores reflect how confident the model is that the box contains an object and, if it does, how accurate the model thinks the box is, measured as the IOU between the ground truth (truth) and the prediction (pred).
The confidence score of a bounding box is obtained as follows:
Cᵢʲ = Pᵢ,ⱼ × IOU_pred^truth
- Cᵢʲ is the confidence score of the j-th bounding box in the i-th grid cell.
- Pᵢ,ⱼ is a function of the object: if the object is in the j-th box of the i-th grid cell, Pᵢ,ⱼ = 1; otherwise Pᵢ,ⱼ = 0.
- IOU_pred^truth is the intersection over union between the predicted box and the ground-truth box.
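The IOU term in the confidence score can be computed as follows (a minimal sketch, with boxes given in (x1, y1, x2, y2) corner format):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes overlapping in a 5x10 strip: 50 / (100 + 100 - 50) = 1/3
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))
```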

4.1.3 YOLOv4-tiny loss function


The loss function consists of the regression error of the bounding-box coordinates, the confidence error of the bounding box, and the classification error of the object category.
The loss function of YOLOv4-tiny is:
loss = loss_confidence + loss_classification + loss_regression
The confidence loss function is defined in terms of the following quantities:
- S² is the number of grid cells in the input image.
- B is the number of bounding boxes in a grid cell.
- Wᵢⱼ^obj is a function of the object: if the j-th bounding box of the i-th grid cell is responsible for detecting the current object, Wᵢⱼ^obj = 1; otherwise Wᵢⱼ^obj = 0.
- Cᵢʲ and Ĉᵢʲ are the confidence scores of the predicted box and the ground-truth box.
- λ_noobj is a weight parameter.

The classification loss function is defined analogously, where pᵢʲ(c) and p̂ᵢʲ(c) are the predicted probability and the true probability that the object in the j-th bounding box of the i-th grid cell belongs to class c.

The bounding-box regression loss function is defined in terms of the following quantities:
- IOU is the intersection over union between the predicted bounding box and the ground-truth bounding box
- w^gt and h^gt are the true width and height of the bounding box
- w and h are the predicted width and height of the bounding box
- ρ²(b, b^gt) denotes the squared Euclidean distance between the center points of the predicted and ground-truth bounding boxes
- c is the diagonal length of the smallest box that can contain both the predicted and the ground-truth bounding boxes.

4.2 Image sources used for training

Figure 4.2.1 Label the water level when there is no label on the bottle and bottle cap

Figure 4.2.2 Label the bottle label and bottle cap when the water level is not enough

Figure 4.2.3 Label the label when there is no liquid and bottle cap

Figure 4.2.4 Label all the targets cap, label and liquid

The LabelImg tool is used to label each object and mark each error class based on the cap, label, and liquid.

4.3 Tkinter

Figure 4.3 Tkinter


Tkinter is the standard GUI library for Python. Python when combined with Tkinter
provides a fast and easy way to create GUI applications. Tkinter provides a powerful
object-oriented interface to the Tk GUI toolkit.
Tkinter provides various controls, such as buttons, labels and text boxes used in a GUI
application. These controls are commonly called widgets.

4.4 Monitor and control with Tkinter

Figure 4.4 Monitor And Control Screen


The monitor and control screen in Tkinter includes 2 main modes, Automatic and Manual:
- When the Automatic button is pressed, the automatic-mode control panel of the model becomes active: the conveyor runs and the camera automatically detects faults on the product.
The START and STOP buttons start and stop the automatic cycle of the model.
- The camera feed is displayed on the Tkinter control screen, where defects on the product are identified and shown.
- Three lights, each named after a fault on the bottle, indicate the 3 faults the project identifies.
- Four lights display the operating status of the conveyor and the 3 cylinders.

4.5 TCP/IP model: linking the Raspberry Pi 4 and the PLC S7-1200 with Snap7
a) Theoretical structure of TCP/IP communication standard
The idea behind the TCP/IP model originated from the Internet Protocol Suite work at DARPA in the 1970s, developed over many years by the engineers Robert E. Kahn and Vinton Cerf with support from many research groups. By early 1978, the protocol had stabilized into the standard now used on the Internet, TCP/IP version 4.
TCP/IP (Transmission Control Protocol/Internet Protocol) is a set of communication protocols used to transmit data and connect devices on the Internet. TCP/IP was developed to make networks more reliable, with automatic recovery.

Figure 4.5.1 The TCP/IP stack


As the name suggests, TCP/IP is a combination of two protocols. The Internet
Protocol (IP) enables packets to be dispatched to a specific destination by appending routing
information to the packets, ensuring they reach their intended destination. The Transport
Protocol (TCP), on the other hand, is responsible for verifying and ensuring the integrity of
each packet as it traverses through various stations. If TCP detects any corruption in a packet
during this process, it sends a signal requesting the system to retransmit a fresh packet. The
roles of each layer in the TCP/IP model will provide further clarity on this operational process.
By using the TCP/IP stack of the Snap7 Python library [14], communication and data exchange between the PLC S7-1200 and the Raspberry Pi become possible.
b) Results and how to apply TCP/IP communication standards in the project
− Step 1:
Download the snap7 library and install it on the Raspberry Pi 4.
Using the snap7 library, the Raspberry Pi 4 can read and write the inputs and outputs of the PLC S7-1200.

Python imports after downloading and installing snap7:


import cv2
import snap7
from snap7.util import *
from snap7.types import *
from tkinter import *
import tkinter as tkinter
from tkinter import messagebox
from PIL import ImageTk, Image
import numpy as np
import json
import os
import time
import traceback

− Step 2:
Use commands in the Python code to read and write the data on the PLC addresses that need to be processed and used.

Python code to write data from the Raspberry Pi to the PLC:


def WriteMemory(plc, byte, bit, datatype, value):
    result = plc.read_area(Areas['MK'], 0, byte, datatype)
    if datatype == S7WLBit:
        set_bool(result, 0, bit, value)
    elif datatype == S7WLByte or datatype == S7WLWord:
        set_int(result, 0, value)
    elif datatype == S7WLReal:
        set_real(result, 0, value)
    elif datatype == S7WLDWord:
        set_dword(result, 0, value)
    plc.write_area(Areas['MK'], 0, byte, result)

def Manual_Command(self):
    WriteMemory(plc, 10, 4, S7WLBit, 1)
    WriteMemory(plc, 15, 0, S7WLBit, 0)
    self.Auto_Button.configure(bg = "silver")
    self.Manual_Button.configure(bg = "green")

def Auto_Command(self):
    WriteMemory(plc, 10, 4, S7WLBit, 0)
    WriteMemory(plc, 15, 0, S7WLBit, 1)
    self.Auto_Button.configure(bg = "green")
    self.Manual_Button.configure(bg = "silver")

def Start_Command(self):
    WriteMemory(plc, 12, 0, S7WLBit, 1)
    WriteMemory(plc, 12, 1, S7WLBit, 0)
    WriteMemory(plc, 10, 0, S7WLBit, 0)
    WriteMemory(plc, 10, 1, S7WLBit, 0)
    WriteMemory(plc, 10, 2, S7WLBit, 0)
    WriteMemory(plc, 10, 3, S7WLBit, 0)
    self.Start_Button.configure(bg = "green")

def Stop_Command(self):
    WriteMemory(plc, 12, 0, S7WLBit, 0)
    WriteMemory(plc, 12, 1, S7WLBit, 1)
    WriteMemory(plc, 10, 5, S7WLBit, 0)
    WriteMemory(plc, 10, 6, S7WLBit, 0)
    WriteMemory(plc, 10, 7, S7WLBit, 0)
    WriteMemory(plc, 11, 0, S7WLBit, 0)
    self.Start_Button.configure(bg = "silver")

Python code to read data from the PLC to the Raspberry Pi:


def ReadMemory(plc, byte, bit, datatype):
    result = plc.read_area(Areas['MK'], 0, byte, datatype)
    if datatype == S7WLBit:
        return get_bool(result, 0, bit)
    elif datatype == S7WLByte or datatype == S7WLWord:
        return get_int(result, 0)
    elif datatype == S7WLReal:
        return get_real(result, 0)
    elif datatype == S7WLDWord:
        return get_dword(result, 0)
    else:
        return None

if ReadMemory(plc, 11, 6, S7WLBit) == True:
    if time.time() - self.Timer > 3:
        WriteMemory(plc, 11, 6, S7WLBit, 0)

PLC S7-1200 code to control the hardware of the project:

Figure 4.5.2 The PLC code

Chapter 5: Conclusion
5.1 Result and evaluation
Table 4. Our system evaluation
No.  Method                   Average accuracy   Processing time
1    Our proposed algorithm   90.7%              360 ms
2    YOLOv7-Tiny              97.9%              960 ms
3    YOLOv7                   99.9%              4200 ms

Figure 5.1.1 The hardware

Figure 5.1.2 The result for each error

5.2 Summary and Conclusion
In short, this project taught us to plan and assemble the hardware of our model.
We also learned more about the supporting software for the project, such as YOLOv4-Tiny and VNC Viewer; this software lets us monitor the system and gives us a fast object detection method.
We learned the Python language for writing the test code for our project.
We learned more about the bottle-inspection process in industry.

5.3 Future work


We will try to improve the performance of the system so that the processing time could be faster.
This will involve constructing the mechanical components needed for the
classification system and using the Python programming language along with the
OpenCV library to implement the system on a Raspberry Pi.

BIBLIOGRAPHY
[1] M. Kazmi, B. Hafeez, H. R. Khan, and S. A. Qazi, “Machine-Vision-Based Plastic
Bottle Inspection for Quality Assurance †,” Eng. Proc., vol. 20, no. 1, pp. 1–5,
2022, doi: 10.3390/engproc2022020009.
[2] L. Yazdi, A. S. Prabuwono, and E. Golkar, “Feature extraction algorithm for fill
level and cap inspection in bottling machine,” Proc. 2011 Int. Conf. Pattern Anal.
Intell. Robot. ICPAIR 2011, vol. 1, no. April, pp. 47–52, 2011, doi:
10.1109/ICPAIR.2011.5976910.
[3] C. Kanan and G. W. Cottrell, “Color-to-grayscale: Does the method matter in image
recognition?,” PLoS One, vol. 7, no. 1, 2012, doi: 10.1371/journal.pone.0029740.
[4] M. Sonka, V. Hlavac, and R. Boyle, Image Processing, Analysis, and Machine Vision. Cengage Learning, 2014.
[5] S. S. Raseli and J. M. Ali, “Boundary extraction of 2D image,” J. Basic Appl. Sci.
Res., vol. 2, no. 5, pp. 5374–5376, 2012.
[6] Sunil Bhutada, Nakerakanti Yashwanth, Puppala Dheeraj, and Kethavath Shekar,
“Opening and closing in morphological image processing,” World J. Adv. Res. Rev.,
vol. 14, no. 3, pp. 687–695, 2022, doi: 10.30574/wjarr.2022.14.3.0576.
[7] Z. Jiang, L. Zhao, L. I. Shuaiyang, and J. I. A. Yanfei, “Real-Time Object Detection
Method For Embedded Devices,” arXiv, vol. 3, pp. 1–11, 2020, [Online]. Available:
https://arxiv.org/pdf/2011.04244v2
[8] S. Winder and A. Roberts, Raspberry Pi Assembly Language RASPBIAN Beginners: Hands On Guide. CreateSpace Independent Publishing Platform, 2018.
[9] A. Karpathy, Convolutional Neural Networks for Visual Recognition. Stanford University, 2016.
[10] OpenCV Documentation, https://docs.opencv.org/
[11] YoloV4 on Google Colab, https://github.com/roboflow-ai/yolov4-google-colab
[12] Google Colaboratory, https://research.google.com/colaboratory/
[13] TCP/IP Model, https://www.totolink.vn
[14] Python-snap7 Documentation, https://python-snap7.readthedocs.io/en/latest/

APPENDIX
Code of the project:
import cv2
import snap7
from snap7.util import *
from snap7.types import *
from tkinter import *
import tkinter as tkinter
from tkinter import messagebox
from PIL import ImageTk, Image
import numpy as np
import json
import os
import time
import traceback
from base64 import b16encode
from datetime import datetime, timedelta
from tkinter import ttk
import openpyxl
from openpyxl import Workbook

camera = cv2.VideoCapture(0)
plc = snap7.client.Client()
IP = '169.254.142.100'
RACK = 0
SLOT = 1
plc.connect(IP, RACK, SLOT)

def Current_Time():
    return datetime.now().strftime("%d/%m/%Y %H:%M:%S")

def ReadMemory(plc, byte, bit, datatype):
    result = plc.read_area(Areas['MK'], 0, byte, datatype)
    if datatype == S7WLBit:
        return get_bool(result, 0, bit)
    elif datatype == S7WLByte or datatype == S7WLWord:
        return get_int(result, 0)
    elif datatype == S7WLReal:
        return get_real(result, 0)
    elif datatype == S7WLDWord:
        return get_dword(result, 0)
    else:
        return None

def WriteMemory(plc, byte, bit, datatype, value):
    result = plc.read_area(Areas['MK'], 0, byte, datatype)
    if datatype == S7WLBit:
        set_bool(result, 0, bit, value)
    elif datatype == S7WLByte or datatype == S7WLWord:
        set_int(result, 0, value)
    elif datatype == S7WLReal:
        set_real(result, 0, value)
    elif datatype == S7WLDWord:
        set_dword(result, 0, value)
    plc.write_area(Areas['MK'], 0, byte, result)

def rgb_color(rgb):
    return (b'#' + b16encode(bytes(rgb)))

def font(size):
    return ('Times New Roman', size, 'bold')
class myApp(object):
    def __init__(self, parent):
        self.root = parent
        self.root.title('My Application')
        self.root.geometry('1080x720')
        self.root.configure(bg='white')
        self.root.attributes("-fullscreen", True)
        self.Frame = Frame(self.root, height = 720, width = 720)
        self.Frame.place(x = 180, y = 0)
        self.BG_photo = ImageTk.PhotoImage(Image.open("./Image/BG.jpg"))
        self.BG_Label = Label(self.Frame, image = self.BG_photo)
        self.BG_Label.place(x=0, y=0)
        self.Account_Variable = StringVar()
        self.Account = tkinter.Entry(self.Frame, justify = 'center', width = 20, textvariable = self.Account_Variable, font=font(25))
        self.Account.place(x=200, y = 500)
        self.Password_Variable = StringVar()
        self.Password = tkinter.Entry(self.Frame, show = "*", justify = 'center', width = 20, textvariable = self.Password_Variable, font=font(25))
        self.Password.place(x=200, y = 570)
        self.Enter_Button = tkinter.Button(self.Frame, height = 1, command = self.Enter_Command, bg = 'silver', text = 'Đăng Nhập', font=font(21))
        self.Enter_Button.place(x=290, y=640)

    def Enter_Command(self):
        if messagebox.askquestion('Cảnh báo', 'Bạn muốn đăng nhập vào tài khoản?', icon= 'warning', parent = self.root) == "yes":
            if len(self.Password_Variable.get()) > 0 and len(self.Account_Variable.get()) > 0:
                if self.Password_Variable.get() == "admin123" and self.Account_Variable.get() == "admin":
                    self.Frame.place_forget()
                    self.myStart()
                else:
                    result = messagebox.showinfo('Thông báo', 'Tài khoản hoặc mật khẩu không chính xác.', icon= 'warning', parent = self.root)
            else:
                result = messagebox.showinfo('Thông báo', 'Vui lòng nhập đầy đủ tài khoản và mật khẩu.', icon= 'warning', parent = self.root)
        else:
            pass
    def myStart(self):
        self.Start_Frame = Frame(self.root, height = 720, width = 1080)
        self.Start_Frame.place(x = 0, y = 0)
        self.BG_photo = ImageTk.PhotoImage(Image.open("./Image/background.png"))
        self.BG_Label = Label(self.Start_Frame, image = self.BG_photo)
        self.BG_Label.place(x=0, y=0)
        self.ON_Photo = ImageTk.PhotoImage(Image.open("./Image/On.png"))
        self.OFF_Photo = ImageTk.PhotoImage(Image.open("./Image/Off.png"))
        self.Fail_Photo = ImageTk.PhotoImage(Image.open("./Image/Fail.png"))
        self.Good_Photo = ImageTk.PhotoImage(Image.open("./Image/Good.png"))

        self.Lit_Label = Label(self.Start_Frame, image = self.Fail_Photo, borderwidth=0, relief='flat', highlightthickness=0)
        self.Lit_Label.place(x=450, y =660)
        self.Coca_Label = Label(self.Start_Frame, image = self.Fail_Photo, borderwidth=0, relief='flat', highlightthickness=0)
        self.Coca_Label.place(x=700, y =660)
        self.V_Label = Label(self.Start_Frame, image = self.Fail_Photo, borderwidth=0, relief='flat', highlightthickness=0)
        self.V_Label.place(x=950, y =660)

        self.Start_Button = tkinter.Button(self.Start_Frame, command = self.Start_Command, width = 10, height = 2, bg = 'silver', text = 'START', font=font(15))
        self.Start_Button.place(x=380, y=580)
        self.Stop_Button = tkinter.Button(self.Start_Frame, command = self.Stop_Command, width = 10, height = 2, bg = 'silver', text = 'STOP', font=font(15))
        self.Stop_Button.place(x=530, y=580)
        self.Manual_Button = tkinter.Button(self.Start_Frame, command = self.Manual_Command, width = 10, height = 2, bg = 'silver', text = 'MANUAL', font=font(15))
        self.Manual_Button.place(x=680, y=580)
        self.Auto_Button = tkinter.Button(self.Start_Frame, command = self.Auto_Command, width = 10, height = 2, bg = 'silver', text = 'AUTO', font=font(15))
        self.Auto_Button.place(x = 830, y=580)
        self.my_Image1 = Label(self.Start_Frame)
        self.my_Image1.place(x = 12, y=175)
        self.Timer = time.time()
        self.Wrapper1 = Frame(self.Start_Frame, width = 480, height = 240)
        self.Wrapper1.place(x=450, y=220)
        self.Treeview_List = ttk.Treeview(self.Wrapper1, columns=("1"))
        self.style = ttk.Style(self.Treeview_List)
        self.style.configure("Treeview", rowheight = 24, height = 10)
        self.Treeview_List.pack(side ='left')
        self.Treeview_List.heading("#0", text = "Thời gian")
        self.Treeview_List.column("#0", minwidth = 240, width = 240, anchor = "center")
        self.Treeview_List.heading("#1", text="Phân loại lỗi")
        self.Treeview_List.column("#1", minwidth=240, width=240, anchor="center")
        self.Treeview_List.bind('<Motion>', 'break')
        self.From_Date_Entry_Variable = StringVar()
        self.From_Date_Entry = Entry(self.Start_Frame, justify = "center", textvariable=self.From_Date_Entry_Variable, font=font(10), width=18).place(x=450, y=500)
        self.To_Date_Entry_Variable = StringVar()
        self.To_Date_Entry = Entry(self.Start_Frame, justify = "center", textvariable=self.To_Date_Entry_Variable, font=font(10), width=18).place(x=600, y=500)
        self.Options_List = ["Tất cả", "Nắp", "Thể tích", "Nhãn"]
        self.Option_Value = StringVar()
        self.Option_Value.set("Tất cả")
        self.Option_Menu = OptionMenu(self.Start_Frame, self.Option_Value, *self.Options_List)
        self.Option_Menu.place(x=750, y=495)
        self.Search_Button = tkinter.Button(self.Start_Frame, command = self.Search_Command, height = 1, bg = 'silver', text = 'tìm kiếm', font=font(13))
        self.Search_Button.place(x=850, y=495)
        self.Workbook = openpyxl.load_workbook("/home/pi/Desktop/Project/Data.xlsx")
        self.Sheet = self.Workbook.active
        self.Run()
    def Search_Command(self):
        # Filter the logged results by date range and by the selected error type.
        try:
            # The entry fields use a two-digit year, the log sheet a four-digit year.
            self.From_Date = datetime.strptime(self.From_Date_Entry_Variable.get(),
                                               '%d/%m/%y %H:%M:%S')
            self.To_Date = datetime.strptime(self.To_Date_Entry_Variable.get(),
                                             '%d/%m/%y %H:%M:%S')
            for i in self.Treeview_List.get_children():
                self.Treeview_List.delete(i)
            self.Workbook = openpyxl.load_workbook("/home/pi/Desktop/Project/Data.xlsx")
            self.Sheet = self.Workbook.active
            # "Tất cả" (all), "Nắp" (lid), "Thể tích" (volume), "Nhãn" (label).
            keyword = self.Option_Value.get()
            for i in range(2, 99):
                if self.Sheet.cell(row=i, column=1).value is None:
                    break
                self.Current_Date = datetime.strptime(self.Sheet.cell(row=i, column=1).value,
                                                      '%d/%m/%Y %H:%M:%S')
                if (self.Current_Date - self.From_Date).days >= 0 and \
                   (self.To_Date - self.Current_Date).days >= 0:
                    if keyword == "Tất cả" or keyword in self.Sheet.cell(row=i, column=2).value:
                        self.Treeview_List.insert("", "end",
                                                  value=(str(self.Sheet.cell(row=i, column=2).value),),
                                                  text=self.Current_Date)
        except Exception:
            traceback.print_exc()
    def Manual_Command(self):
        # Switch the PLC to manual mode.
        WriteMemory(plc, 10, 4, S7WLBit, 1)
        WriteMemory(plc, 15, 0, S7WLBit, 0)
        self.Auto_Button.configure(bg="silver")
        self.Manual_Button.configure(bg="green")

    def Auto_Command(self):
        # Switch the PLC to automatic mode.
        WriteMemory(plc, 10, 4, S7WLBit, 0)
        WriteMemory(plc, 15, 0, S7WLBit, 1)
        self.Auto_Button.configure(bg="green")
        self.Manual_Button.configure(bg="silver")

    def Start_Command(self):
        # Start the line and clear the manual-control bits.
        WriteMemory(plc, 12, 0, S7WLBit, 1)
        WriteMemory(plc, 12, 1, S7WLBit, 0)
        WriteMemory(plc, 10, 0, S7WLBit, 0)
        WriteMemory(plc, 10, 1, S7WLBit, 0)
        WriteMemory(plc, 10, 2, S7WLBit, 0)
        WriteMemory(plc, 10, 3, S7WLBit, 0)
        self.Start_Button.configure(bg="green")

    def Stop_Command(self):
        # Stop the line and clear the remaining output bits.
        WriteMemory(plc, 12, 0, S7WLBit, 0)
        WriteMemory(plc, 12, 1, S7WLBit, 1)
        WriteMemory(plc, 10, 5, S7WLBit, 0)
        WriteMemory(plc, 10, 6, S7WLBit, 0)
        WriteMemory(plc, 10, 7, S7WLBit, 0)
        WriteMemory(plc, 11, 0, S7WLBit, 0)
        self.Start_Button.configure(bg="silver")
    def Run(self):
        # Grab a camera frame, detect lid, label, and liquid level by HSV
        # thresholding, update the status indicators, and log the result when
        # the PLC signals that a bottle is at the inspection station (M11.6).
        try:
            self.Contours_X = []
            self.Contours_Y = []
            self.Contours_W = []
            self.Contours = []
            self.Liquid_State = 0
            self.Lid_State = 0
            self.Label_State = 0
            _, self.frame = camera.read()
            self.frame = cv2.cvtColor(self.frame, cv2.COLOR_BGR2RGB)
            self.frame = cv2.resize(self.frame, (480, 320))
            self.frame = cv2.rotate(self.frame, cv2.ROTATE_90_COUNTERCLOCKWISE)
            img_hsv = cv2.cvtColor(self.frame, cv2.COLOR_BGR2HSV)
            # Saturated, mid-brightness regions: candidate lid and label areas.
            lower = np.array([0, 50, 50])
            upper = np.array([179, 255, 200])
            mask = cv2.inRange(img_hsv, lower, upper)
            result = cv2.bitwise_and(self.frame, self.frame, mask=mask)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            if len(contours) > 0:
                for i in range(0, len(contours)):
                    x, y, w, h = cv2.boundingRect(contours[i])
                    if h < 50 or w < 50:
                        mask[y:y + h, x:x + w] = 0      # discard small noise regions
                    else:
                        cv2.rectangle(self.frame, (x, y), (x + w, y + h), (4, 200, 4), 2)
                        self.Contours_X.append(x)
                        self.Contours_Y.append(y)
                        self.Contours_W.append(w)
                        self.Contours.append((x, y, w, h))
                for i in range(0, len(self.Contours)):
                    # Narrow regions near the top of the frame are the lid;
                    # the remaining regions are the label.
                    if self.Contours[i][2] < 100 and self.Contours[i][1] < 250:
                        cv2.putText(self.frame, "Lid", (self.Contours[i][0], self.Contours[i][1]),
                                    cv2.FONT_HERSHEY_SIMPLEX, 0.75, (255, 0, 0), 2)
                        self.Lid_State = 1
                    else:
                        cv2.putText(self.frame, "Label", (self.Contours[i][0], self.Contours[i][1]),
                                    cv2.FONT_HERSHEY_SIMPLEX, 0.75, (255, 0, 0), 2)
                        self.Label_State = 1
            # Dark (low-value) regions: the liquid inside the bottle.
            lower = np.array([0, 0, 0])
            upper = np.array([150, 190, 75])
            mask = cv2.inRange(img_hsv, lower, upper)
            result = cv2.bitwise_and(self.frame, self.frame, mask=mask)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            if len(contours) > 0:
                # Mask out the lid/label regions already found.
                for i in range(0, len(self.Contours)):
                    mask[self.Contours[i][1]:self.Contours[i][1] + self.Contours[i][3],
                         self.Contours[i][0]:self.Contours[i][0] + self.Contours[i][2]] = 0
                for i in range(0, len(contours)):
                    x, y, w, h = cv2.boundingRect(contours[i])
                    if abs(w - 160) < 10 and y > 50 and y < 250:
                        cv2.rectangle(self.frame, (x, y - 20), (x + w, y + h - 20), (0, 255, 0), 2)
                        cv2.putText(self.frame, "Liquid", (x, y),
                                    cv2.FONT_HERSHEY_SIMPLEX, 0.75, (255, 0, 0), 2)
                        # The liquid surface must sit low enough in the frame,
                        # i.e. the bottle is filled to the target level.
                        if y > 170:
                            self.Liquid_State = 1
                        else:
                            self.Liquid_State = 0
                    else:
                        mask[y:y + h, x:x + w] = 0
            # Update the pass/fail indicator images.
            if self.Liquid_State == 1:
                self.V_Label.configure(image=self.Good_Photo)
            else:
                self.V_Label.configure(image=self.Fail_Photo)
            if self.Lid_State == 1:
                self.Lit_Label.configure(image=self.Good_Photo)
            else:
                self.Lit_Label.configure(image=self.Fail_Photo)
            if self.Label_State == 1:
                self.Coca_Label.configure(image=self.Good_Photo)
            else:
                self.Coca_Label.configure(image=self.Fail_Photo)
            if ReadMemory(plc, 11, 6, S7WLBit) == True:
                if time.time() - self.Timer > 3:
                    # Build the result string: "Nhãn" (label), "Nắp" (lid),
                    # "Thể tích" (volume) for each failed check; "Đạt" (pass)
                    # when every check succeeded.
                    self.Result = ""
                    if self.Label_State == 0:
                        self.Result = self.Result + "Nhãn"
                    if self.Lid_State == 0:
                        self.Result = self.Result + " Nắp"
                    if self.Liquid_State == 0:
                        self.Result = self.Result + " Thể tích"
                    if self.Label_State == 0 or self.Lid_State == 0 or self.Liquid_State == 0:
                        WriteMemory(plc, 10, 5, S7WLBit, 1)     # reject the bottle
                    if self.Liquid_State == 1 and self.Lid_State == 1 and self.Label_State == 1:
                        self.Result = self.Result + "Đạt"
                    self.Treeview_List.insert("", "end", value=(str(self.Result),),
                                              text=Current_Time())
                    WriteMemory(plc, 11, 6, S7WLBit, 0)
                    # Append the result to the Excel log; cell B1 holds the last row index.
                    self.row = int(self.Sheet.cell(row=1, column=2).value) + 1
                    self.Sheet.cell(row=1, column=2).value = self.row
                    self.Sheet.cell(row=self.row, column=1).value = Current_Time()
                    self.Sheet.cell(row=self.row, column=2).value = self.Result
                    self.Workbook.save("/home/pi/Desktop/Project/Data.xlsx")
            else:
                self.Timer = time.time()
            # Show the annotated frame in the GUI.
            self.Display = Image.fromarray(self.frame)
            self.Display = ImageTk.PhotoImage(self.Display)
            self.my_Image1.configure(image=self.Display)
            self.my_Image1.image = self.Display
        except Exception:
            traceback.print_exc()
        root.after(1, self.Run)

root = Tk()
app = myApp(root)
root.mainloop()
