Research Article
Real-Time Analysis of Athletes’ Physical Condition in Training
Based on Video Monitoring Technology of Optical Flow Equation
Cuijuan Wang
Department of Public Physical Education, Huanghe Jiaotong University, Jiaozuo, Henan 454950, China
Received 5 November 2021; Revised 25 November 2021; Accepted 26 November 2021; Published 13 December 2021
Copyright © 2021 Cuijuan Wang. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This article studies video motion segmentation algorithms based on the optical flow equation. First, several mainstream segmentation algorithms are reviewed, and on this basis a spectral-clustering segmentation algorithm for analyzing athletes’ physical condition in training is proposed. In contrast to algorithms that process only a single frame of the video, this article analyzes consecutive frames and tracks densely sampled points across them with the Lucas-Kanade optical flow method. The densely sampled feature points capture as much of the motion information in the video as possible; this motion information is then expressed through trajectory descriptions, and segmentation of the moving targets is finally achieved by clustering the motion trajectories. The basic concepts of image segmentation and video moving-target segmentation are also described, and the classification criteria of different video motion segmentation algorithms, together with their respective advantages and disadvantages, are analyzed. The experiment determines the initial template by comparing the gray-scale variance of the image, uses the characteristic optical flow to estimate the search area of the initial template in the next frame so as to reduce matching time, judges template similarity according to the Hausdorff distance, and applies an adaptive weighted template update to templates with large deviations. The simulation results show that the algorithm achieves long-term stable tracking of moving targets in the mine and can continuously track partially occluded moving targets.
tasks, space satellite tracking systems based on motion analysis, surface-to-air missile fire control systems, and automatic aircraft landing [6, 7, 9]. In the process of exploring ways to solve this ill-posed problem, many algorithms have emerged. For example, Du et al. [8] observed that the optical flow field caused by the same moving object should be continuous and smooth; that is, the motion over the same object should be continuous and smooth: the speeds of neighboring points are similar, so the change of the optical flow projected onto the image should also be smooth. A method that imposes an additional constraint on the optical flow field, namely the global smoothness constraint, is proposed to compute the optical flow, and the calculation of the flow field is transformed into a variational problem. Singh et al. [9] considered that the basic equation itself already constrains the optical flow field along the direction of the gray-level gradient at each point; they proposed that the additional smoothness constraint should make the rate of change of the optical flow field smallest in the direction perpendicular to its gradient, and a new iterative algorithm is derived on this basis. Dong et al. [10] regard optical flow field calculation as a kind of differential problem. The Snake model was proposed by Ge et al. [11] and was first applied to lip-reading machine recognition. Its basic idea is to regard the boundary of the moving target as a dynamic contour line and then introduce an energy function; the process of minimizing the energy function is the process of finding the outline of the target object. The C-V model proposed by Allaoui et al. [12] is another common model. It mainly uses the global information in the image and also achieves a good segmentation effect. In addition, video segmentation algorithms can be divided into semiautomatic and fully automatic segmentation. Fully automatic segmentation does not require manual participation and is mostly used in places with high security requirements, such as bank surveillance and military surveillance [13–15]. Semiautomatic segmentation algorithms are mostly used with multiple targets and complex motion backgrounds. For complex motion backgrounds, the target’s position, contour, and other information must be specified manually, which facilitates tracking of the target in subsequent frames and improves the segmentation effect; however, this approach is complicated to operate and has poor real-time performance [16–19].
In order to include enough motion information, this article densely samples the video frames and filters the resulting spatio-temporal feature points, removing points that are not distinctive and are difficult to track from their structural information, and then tracks the remaining sampling points with the optical flow method. To describe the motion information of a moving object, the distance between two trajectories is defined within a specific time window. At the same time, a video motion segmentation algorithm based on spectral clustering is proposed: it exploits the observation that the trajectories belonging to the same moving object are similar, performs clustering analysis on the trajectories of the densely sampled points, and thereby segments the moving objects. The similarity matrix is constructed from the distances between trajectories. In addition, it is proposed to apply the classic clustering algorithm to the similarity matrix while adding the structural information of the moving objects in the video to improve the clustering effect. The feasibility of the algorithm is verified through experiments.
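As a concrete illustration of the dense-trajectory idea summarized above, the sketch below (Python with OpenCV, the library also used for the experiments in Section 3) samples points on a regular grid and tracks them with pyramidal Lucas-Kanade optical flow over a short window of frames. The video path, grid step, and window length are illustrative assumptions, not values taken from the paper.

```python
import cv2
import numpy as np

# Illustrative parameters (assumptions, not values reported in the paper).
VIDEO_PATH = "training_session.avi"   # hypothetical input video
GRID_STEP = 10                        # dense sampling stride in pixels
TRACK_LEN = 15                        # frames per trajectory (cf. Section 2.4)

cap = cv2.VideoCapture(VIDEO_PATH)
ok, first = cap.read()
if not ok:
    raise SystemExit("cannot read video")
prev_gray = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)

# Densely sample starting points on a regular grid.
h, w = prev_gray.shape
ys, xs = np.mgrid[GRID_STEP // 2:h:GRID_STEP, GRID_STEP // 2:w:GRID_STEP]
pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32).reshape(-1, 1, 2)
trajectories = [[p.ravel().copy()] for p in pts]

for _ in range(TRACK_LEN):
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal Lucas-Kanade flow from the previous frame to the current one.
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                                 winSize=(15, 15), maxLevel=2)
    for traj, p, st in zip(trajectories, nxt, status.ravel()):
        if st:                        # keep only points tracked successfully
            traj.append(p.ravel().copy())
    pts, prev_gray = nxt, gray

# Trajectories that survived the whole window; their pairwise distances are
# later used to build the similarity matrix for spectral clustering.
full_tracks = np.array([t for t in trajectories if len(t) == TRACK_LEN + 1])
print("tracked trajectories:", len(full_tracks))
```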
2. Real-Time Model of Athletes’ Physical Condition in Training Based on Video Monitoring Technology of Optical Flow Equation
2.1. Distribution of the Solution Set of the Optical Flow Equation. The method based on the optical flow equation and order-statistics theory is a nonlinear signal-processing method that suppresses noise. It is suitable for images that do not contain many obvious details and edges. Figure 1 shows the spatial distribution of the solution set of the optical flow equation. For images with more prominent points, more obvious edge lines, and similar information, median filtering is not a good choice, because the filtered image loses a lot of detail [20–24].
The Canny operator was developed on the basis of other edge operators. Its main idea is to introduce two thresholds to determine whether a pixel lies on a contour: a low threshold detects many edges in the image, but many of them are not needed, so a high threshold is introduced to filter out the insignificant ones.
\[ R = \int_{0}^{+\infty}\Bigl(R_{FR} + R_{F1} - \frac{(R_{FR}+R_{F1})(R_{FR}-R_{F1})}{R_{FR}\,R_{F1}}\Bigr)\,dt, \qquad F[\Theta(\xi)] = \min\{\delta\Gamma_{FR} + \Gamma_{F1} + \phi\xi\}. \tag{1} \]
The idea of a smoothing template is to remove sudden changes and filter out a certain amount of noise by averaging a point with its 8 surrounding points. Although it takes neighboring points into account, it ignores the influence of each point’s position: all 9 points are weighted equally, so the smoothing effect is not ideal. The Gaussian template, by contrast, assumes that points closer to the center should have greater influence and introduces weighting coefficients accordingly, which makes the smoothing effect better.
\[ \mathrm{idf}(w_i) = \ln\frac{|D|}{\left|\{D_i : D_j \in w_j\}\right| - 1}, \qquad \mathrm{tf}(w_i, D_i) = N_{w_1,D_j} \times \sum_{n=1}^{K} N_{w_n,D_j} \times N. \tag{2} \]
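A minimal sketch of the two smoothing templates discussed above: the 3 × 3 box template weights all nine points equally, whereas the Gaussian template gives points closer to the centre larger weights. The kernel size, sigma, and synthetic test image are illustrative choices.

```python
import cv2
import numpy as np

# Synthetic noisy test image (illustrative stand-in for a video frame).
rng = np.random.default_rng(0)
img = rng.normal(128, 30, (120, 160)).clip(0, 255).astype(np.uint8)

# 3x3 box template: the centre point and its 8 neighbours are averaged with
# equal weights, so the position of each point is ignored.
box_smoothed = cv2.blur(img, (3, 3))

# 3x3 Gaussian template: closer points receive larger weights.
gauss_smoothed = cv2.GaussianBlur(img, (3, 3), 1.0)

# The underlying weights can be inspected directly.
g = cv2.getGaussianKernel(3, 1.0)
print("box weights:\n", np.full((3, 3), 1.0 / 9.0))
print("Gaussian weights:\n", g @ g.T)
```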
perform clustering analysis on the trajectories of densely sam- has been averaged or integrated. Therefore, performing
pled points to realize the segmentation of moving objects. The inverse operations (such as differential operations) on the
similarity matrix is constructed by using the distance between image can make it clear. The commonly used method is gradi-
the trajectories. In addition, it is proposed to use the classic ent sharpening.
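The gradient-sharpening idea can be sketched as follows: a gradient magnitude (approximated here with Sobel operators) is added back to the smoothed image so that edges and contours stand out again. The weighting factor and the synthetic test image are illustrative assumptions.

```python
import cv2
import numpy as np

# Synthetic test image: a bright square on a dark background, then smoothed.
img = np.zeros((120, 160), np.float32)
img[40:80, 60:110] = 200.0
blurred = cv2.GaussianBlur(img, (7, 7), 2.0)

# Approximate the gradient with Sobel derivatives in x and y.
gx = cv2.Sobel(blurred, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(blurred, cv2.CV_32F, 0, 1, ksize=3)
grad_mag = cv2.magnitude(gx, gy)

# Gradient sharpening: add a fraction of the gradient magnitude back to the
# smoothed image so that contours and details become clearer again.
alpha = 0.5                                  # illustrative weighting factor
sharpened = np.clip(blurred + alpha * grad_mag, 0, 255).astype(np.uint8)
cv2.imwrite("sharpened.png", sharpened)
```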
Figure 1: The spatial distribution of the solution set of the optical flow equation (Stages 1–5; Intervals 1–4).
To ensure the accuracy of the detection results, the interval between model updates should in general be as small as possible. Commonly used background modeling methods in the background-difference approach include the statistical average background model, the Gaussian mixture background model, the W4 background model, and background modeling based on color information. The Gaussian mixture model has a relatively simple structure, is easy to implement, and yields a very reliable background model; it is also robust in complex scenes and is therefore widely used.
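As a concrete illustration, OpenCV ships a mixture-of-Gaussians background model (MOG2) of the kind listed above; the following sketch applies it frame by frame. The video path, history length, and threshold are illustrative assumptions, not parameters reported in the paper.

```python
import cv2

cap = cv2.VideoCapture("surveillance.avi")             # hypothetical input video
# Mixture-of-Gaussians background model: every pixel is described by several
# Gaussian components that are updated as new frames arrive (MOG2 in OpenCV).
mog2 = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16,
                                          detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = mog2.apply(frame)                        # 255 = foreground, 0 = background
    # Remove small speckles in the foreground mask with a morphological opening.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)
    print("moving pixels:", int(cv2.countNonZero(fg_mask)))
cap.release()
```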
The detection procedure of the Canny operator is generally divided into the following steps: first, a Gaussian filter is used to denoise the image; then the gradient magnitude and direction of the pixels of the original image are computed according to the template; and finally the edges of the image are detected with dual thresholds. Image segmentation algorithms based on edge detection are mostly applied to images with obvious linear features, but they have clear shortcomings: for images with uneven lighting and complex edges, these operators produce blurred, discontinuous, and weak edges.
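The listed steps map directly onto a few OpenCV calls; the threshold pair below is an illustrative choice, since the dual thresholds normally have to be tuned per scene, and the synthetic test image simply provides something with edges.

```python
import cv2
import numpy as np

# Synthetic test image with clear contours (a filled rectangle plus noise).
rng = np.random.default_rng(0)
img = rng.normal(30, 5, (120, 160)).clip(0, 255).astype(np.uint8)
cv2.rectangle(img, (50, 30), (120, 90), 200, -1)

# Step 1: suppress noise with a Gaussian filter.
blurred = cv2.GaussianBlur(img, (5, 5), 1.4)

# Steps 2-3: cv2.Canny computes the gradient magnitude and direction and then
# applies the dual thresholds: edges below the low threshold are discarded,
# edges above the high threshold are kept, and edges in between survive only
# if they are connected to a strong edge.
low_thresh, high_thresh = 50, 150            # illustrative values
edges = cv2.Canny(blurred, low_thresh, high_thresh)
print("edge pixels:", int(np.count_nonzero(edges)))
```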
2.3. Decomposition and Clustering of Physical Condition Images. Video motion segmentation technology emerged on the basis of image segmentation technology for athletes’ physical condition. Image segmentation technology uses only the spatial information of an image, while for sports video the relationship between successive frames of the image sequence carries more of the information that is useful to people. Because of the large amount of data in sports video, the video is convenient neither for storage nor for transmission, and the moving objects in a video often carry most of the information people are interested in. Therefore, in the field of video processing, accurate segmentation of moving objects is a prerequisite for some other fields. In the field of computer vision, motion videos can be divided into four types according to the motion states of the lens and of the object being photographed. The most common are two forms: the lens does not move while the object moves, and both the lens and the object move. In video processing, the situation where both the lens and the object are moving is the most challenging. In addition, according to the number of lenses and moving objects, the problem can be divided into multiview and multitarget situations.
The idea of the contour-tracing algorithm is as follows: first search from left to right and from top to bottom; the first black point found must be the upper-left boundary point. From this boundary point, define the initial search direction as toward the bottom right; if the point at the bottom right is a black point, it is a boundary point, otherwise the search direction is rotated 45 degrees counterclockwise until the first black point is found, and this black point is taken as the new boundary point. The search direction is then rotated 90 degrees counterclockwise, and the search for the next black point continues in the same way until it returns to the original boundary point.
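The boundary-following idea can be sketched as below. This is a simplified, hand-rolled trace in the Moore-neighbour style (search the 8 neighbours in 45-degree steps, starting just after the backtrack direction) rather than a verbatim implementation of the exact 45°/90° rotation rule quoted above, and the small square test mask is purely illustrative.

```python
import numpy as np

# The 8 neighbours of a pixel in 45-degree steps, enumerated clockwise
# starting from "up" (image coordinates: y grows downwards).
DIRS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

def trace_boundary(mask):
    """Follow the outer boundary of the first object found in a binary mask
    (1 = object point, 0 = background). Simplified: tracing stops the first
    time the starting point is reached again."""
    h, w = mask.shape
    start = None
    for y in range(h):                  # scan top to bottom, left to right
        for x in range(w):
            if mask[y, x]:
                start = (y, x)
                break
        if start:
            break
    if start is None:
        return []

    boundary = [start]
    p, b = start, (start[0], start[1] - 1)   # backtrack: background pixel to the left
    for _ in range(4 * mask.size):           # hard cap as a simple safeguard
        i = DIRS.index((b[0] - p[0], b[1] - p[1]))
        for k in range(1, 9):                # rotate around p, starting just after b
            j = (i + k) % 8
            q = (p[0] + DIRS[j][0], p[1] + DIRS[j][1])
            if 0 <= q[0] < h and 0 <= q[1] < w and mask[q]:
                b = (p[0] + DIRS[(j - 1) % 8][0], p[1] + DIRS[(j - 1) % 8][1])
                p = q
                break
        else:
            break                            # isolated point, nothing to trace
        if p == start:
            break
        boundary.append(p)
    return boundary

mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:6, 2:6] = 1                           # a small square object
print(trace_boundary(mask))                  # the 12 boundary points, clockwise
```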
The ultimate goal of filtering is to remove noise while damaging the image quality as little as possible. Here, a new filtering method based on median filtering is proposed. Viewed over the whole filtering process, it is a median filter that takes the maximum or the minimum value; because the window is time-varying during the filtering process, it is also called dynamic filtering. For an input image of size M × N, the filter starts from a 2 × 2
window and proceeds until the entire image has been traversed, and the resulting average-filtered image is expressed by the corresponding expression. Figure 3 shows the decomposition and clustering process of physical condition images.
In addition to contour-tracing algorithms, edge detection can use the Laplacian and Sobel operators to quantify the rate of change of the gray level and to determine the direction from the gradient field in each neighborhood.
In practical applications, because of the distortion of the optical lens, a nonlinear model is introduced in order to reflect the projection process more faithfully. For the most complex global motion, that is, when both the lens and the object are moving, motion compensation for the lens motion is required before segmentation. Motion estimation technology is therefore of great significance to video segmentation, and accurate estimation of the motion of objects in the video is the basis for a good segmentation result. To estimate the actual motion in space from motion in the image plane, some motion assumptions need to be made because the available constraints are insufficient. There are generally three assumptions: temporal continuity, spatial continuity, and brightness invariance. In addition, some motion estimation methods impose further constraints on the actual motion, such as the optical flow method and the block matching method.
2.4. Real-Time Analysis Model Factor Normalization. At present, there are many real-time analysis and detection methods for feature points in the field of computer vision, and the motion information in a video is mostly described by feature points, so the selection of feature points is a key step.
If the number of feature points is too small, they cannot provide the required motion information and motion segmentation fails, but too many feature points reduce the computational efficiency of the algorithm. By extracting the trajectory information of densely sampled points, a similarity matrix between trajectories is constructed, and K-means clustering is used to segment the moving objects. The feature points are tracked by the optical flow method; in order to avoid drift during tracking, only 15 frames are tracked. In addition, the concept of distance between trajectories is defined, and the similarity matrix is constructed on this basis. Figure 4 shows the normalization of the real-time analysis model factors.
When high-dimensional data sets are classified with a spectral clustering algorithm, the Euclidean distance between sample points is generally used to construct the similarity matrix. In the experimental data of this article, however, some moving objects have complex motion patterns and the sample-point trajectories are intricate, so the Euclidean distance cannot meet the actual needs. After analyzing the trajectory information of the sampling points, the following approach is adopted: to construct the similarity matrix between the sample points, the distance between the trajectories is first defined. For scenes where the light intensity is constant or changes only slowly, the single-Gaussian background model can represent the background image effectively through automatic updates of the background model. In a complex scene, however, the pixel value at the same position in the image is affected not only by changes in lighting but possibly also by various complex interference factors such as dynamic background elements.
There may be several Gaussian distributions of pixel values at one position over a period of time; in this case it is difficult to predict the true background with a single Gaussian model, and the processing ability of the single-Gaussian background method drops sharply in complex scenes. In view of this, background modeling with a mixture-of-Gaussians model suitable for complex scenes is proposed on the basis of single-Gaussian background modeling. Then, we obtain the first k eigenvalues and corresponding eigenvectors of the regularized Laplacian matrix and use the K-means clustering algorithm to cluster these eigenvectors, realizing the segmentation of moving objects according to the similarity between the trajectories.
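The eigenvector-plus-K-means step just described is standard spectral clustering. The sketch below assumes a pairwise trajectory-distance matrix has already been computed (for example from the dense tracks above); the Gaussian scale sigma, the number of clusters k, and the toy data are illustrative assumptions.

```python
import cv2
import numpy as np

def spectral_cluster_trajectories(dist, k, sigma=1.0):
    """Cluster trajectories from a pairwise distance matrix `dist` (n x n):
    Gaussian similarity -> normalised Laplacian -> first k eigenvectors ->
    K-means on the spectral embedding."""
    # Similarity (affinity) matrix from trajectory distances.
    w = np.exp(-dist ** 2 / (2.0 * sigma ** 2))
    np.fill_diagonal(w, 0.0)

    # Symmetrically normalised Laplacian  L = I - D^{-1/2} W D^{-1/2}.
    d = w.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    l_sym = np.eye(len(w)) - (d_inv_sqrt[:, None] * w) * d_inv_sqrt[None, :]

    # Eigenvectors belonging to the k smallest eigenvalues.
    eigvals, eigvecs = np.linalg.eigh(l_sym)
    features = eigvecs[:, :k].astype(np.float32)
    features /= np.linalg.norm(features, axis=1, keepdims=True) + 1e-12

    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-4)
    _compact, labels, _centers = cv2.kmeans(features, k, None, criteria,
                                            10, cv2.KMEANS_PP_CENTERS)
    return labels.ravel()

# Toy example: two well-separated groups standing in for trajectory distances.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(3, 0.1, (10, 2))])
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print(spectral_cluster_trajectories(dist, k=2))
```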
3. Application and Analysis of the Real-Time Model of Athletes’ Physical Fitness in Training Based on Optical Flow Equation-Based Video Monitoring Technology
3.1. Feature Extraction of Video Monitoring Data. This paper uses experiments to verify the feasibility of the above algorithm. The software environment of the experiment is VS2010 and OpenCV, and the hardware is an Intel(R) Core(TM) i3-3240 at 3.40 GHz with 4 GB of memory. Preprocessing mainly simplifies the original vector data into data suitable for processing. In the actual measurement process, most of the generated data are complex, huge in volume, and without regularity; these data are called original vector data. Preprocessing generally uses data filtering and selects only the data that we are interested in and that is suitable for subsequent processing.
Generally, shapes and textures are selected to represent the characteristics of the data. The quality of the data mapping determines the effect of the visualization. Drawing the flow field mostly relies on computer graphics theory, and the mapped data are drawn into an image that is easier for the observer to understand. Figure 5 shows the error fitting distribution of the video monitoring data.
It can be seen that the illumination of the background wall does not change much between the first and second frames, and the optical flow of the moving objects can be detected effectively. Between the third and fourth frames, however, pedestrians pass by and illuminate the background wall, and as the speed of pedestrian movement between frames increases, the accuracy of the optical flow calculation decreases. When the background illumination of the image sequence changes greatly, the optical flow computed by the Lucas-Kanade algorithm for the motion is not continuous enough; nevertheless, the Lucas-Kanade algorithm adapts to the environment better than the Horn-Schunck algorithm, that is, its “anti-noise” performance is better. In the block motion displacement matching algorithm, the current frame image is divided into many M × M blocks, and it is assumed that the motion displacement of all pixels within each block is the same.
Figure 3: The decomposition and clustering process of physical condition images (flow diagram over lens type, exposure, processing, update, video, and movement nodes).
Figure 4: Normalization of real-time analysis model factors (%) by node number (legend: Intervals 1–4).
We then search the adjacent frame for the pixel block that best matches each macro block of the current frame image; the relative displacement between the current pixel block in the current frame and its matching block is the relative motion displacement vector of that pixel block.
Figure 5: The error fitting distribution of the video monitoring data (video monitoring data error weight index (%) versus indicator value).
The motion vector of the background part of the image is smaller than that of the foreground moving target, so the background component of the block-motion-vector (BMV) image can be partially eliminated by thresholding. Here, the OTSU adaptive threshold method is used to determine a segmentation threshold, and the BMV image is binarized according to it: the value of a point greater than the threshold is set to 0 and otherwise to 255, yielding a new binary image.
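The block-matching step can be sketched as follows: every M × M block of the current frame is compared, by exhaustive search, with nearby blocks of the adjacent frame; the magnitudes of the resulting block motion vectors (BMV) form an image that is binarized with Otsu's threshold. Block size, search range, and the synthetic frame pair are illustrative assumptions.

```python
import cv2
import numpy as np

def block_motion_field(prev, cur, block=16, search=8):
    """Exhaustive block matching: for every block x block patch of `cur`, find
    the best SAD match in `prev` within +/-`search` pixels and return the
    magnitudes of the block motion vectors (BMV)."""
    h, w = cur.shape
    mag = np.zeros((h // block, w // block), np.float32)
    for by in range(h // block):
        for bx in range(w // block):
            y0, x0 = by * block, bx * block
            patch = cur[y0:y0 + block, x0:x0 + block].astype(np.float32)
            best, best_v = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y1, x1 = y0 + dy, x0 + dx
                    if y1 < 0 or x1 < 0 or y1 + block > h or x1 + block > w:
                        continue
                    cand = prev[y1:y1 + block, x1:x1 + block].astype(np.float32)
                    sad = np.abs(patch - cand).sum()   # sum of absolute differences
                    if best is None or sad < best:
                        best, best_v = sad, (dx, dy)
            mag[by, bx] = np.hypot(*best_v)
    return mag

# Synthetic frame pair: a textured square that moves 6 pixels to the right.
rng = np.random.default_rng(0)
background = rng.integers(0, 40, (96, 128)).astype(np.uint8)
square = rng.integers(150, 255, (32, 32)).astype(np.uint8)
prev = background.copy(); prev[32:64, 32:64] = square
cur = background.copy(); cur[32:64, 38:70] = square

bmv = block_motion_field(prev, cur)

# Otsu's method picks the segmentation threshold automatically; following the
# convention in the text, blocks above the threshold become 0 and the rest 255.
bmv_u8 = cv2.normalize(bmv, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
thr, seg = cv2.threshold(bmv_u8, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
print("Otsu threshold:", thr)
print(seg)
```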
3.2. Real-Time Model Simulation of Athletes’ Physical Fitness. It can be seen from the experiment that the principle of the method is simple and the algorithm is easy to implement. In addition, different threshold values can be set for different situations, or the Otsu method (OTSU) can be used directly to determine the threshold. As a result, the obtained result can directly reflect the size, shape, and position of the moving target, and the accuracy of the algorithm is relatively high. However, it is difficult to extract the background image, and the algorithm is susceptible to external conditions such as light and weather; these conditions change the gray values of the original background image, which requires the background image to be updated in real time.
When the scale of the search window is larger than the scale of the gray-level changes, multiple minima appear on the SSD surface, the peak is “offset” by the weighted least squares method, the speed estimate becomes incorrect, and the resulting confidence measure is also very low. The value of the confidence measure reflects the reliability of the speed estimate; for the value selected after the regularization parameter, when the response in the search window is very small, the final speed estimate is very sensitive to this later selected value. Figure 6 shows the confidence measurement curve of athletes’ physical condition.
Generally, the process of using image processing technology to eliminate jitter in video images can be divided into three major parts according to function: motion estimation, motion filtering, and motion compensation. First, the global motion vector between two adjacent frames is calculated; the motion vectors are then filtered, and finally motion compensation is applied to the second frame to complete the elimination of image jitter.
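A hedged sketch of the estimation-and-compensation part of that pipeline, using corner tracking and a partial-affine global motion model from OpenCV; the trajectory-smoothing (motion filtering) stage across many frames is omitted for brevity, and the parameter values are illustrative.

```python
import cv2

def compensate_jitter(prev_gray, cur_gray, cur_color):
    """Estimate the global motion between two adjacent frames and warp the
    second frame back onto the first (motion estimation + compensation)."""
    # Motion estimation: track corners with pyramidal Lucas-Kanade flow.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                 qualityLevel=0.01, minDistance=10)
    p1, st, _err = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, p0, None)
    good0, good1 = p0[st.ravel() == 1], p1[st.ravel() == 1]

    # Global motion vector: a similarity transform (rotation, translation,
    # scale) fitted robustly to the tracked corner pairs.
    m, _inliers = cv2.estimateAffinePartial2D(good1, good0)
    if m is None:                      # estimation failed, return frame unchanged
        return cur_color

    # Motion compensation: warp the current frame so that the global jitter
    # relative to the previous frame is removed.
    h, w = cur_gray.shape
    return cv2.warpAffine(cur_color, m, (w, h))
```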
Edge detection is one of the most basic operations for detecting significant changes in part of an image; in one dimension, a step edge corresponds to a local peak of the first derivative of the image, and the gradient is usually used as a measure of function change. We can regard an image as an array of sampling points of a continuous function of image intensity, so, just as in the one-dimensional case, a discrete approximation of the gradient can be used to detect significant changes in the gray values of the image. The feature-based method is not affected by overall changes in the image gray level, but the quality of the extracted features often depends on the settings of other operators and parameters and is easily disturbed by noise during extraction, so required features may be missed or spurious features added, which makes the matching relationship difficult to solve and causes mismatches. These problems can be alleviated by improving the algorithm. Figure 7 compares the evaluation errors of athletes’ physical condition.
Although the jitter of the background has been eliminated in the compensated image, some motion remains in interference areas. We know that the motion area of the target is connected, so an image segmentation algorithm based on connected-domain analysis is used to segment the target motion area, and the segmented moving-target image is obtained.
Figure 6: The confidence measurement curve of athletes’ physical condition (confidence measure (%) over the training set).
Because the SUSAN algorithm uses low-level image processing operations to detect feature points with small USAN (core) values, it has great advantages in noise resistance and computing speed and is suitable for high-noise locations such as coal mines.
The choice of the algorithm scale is related to the size of the input image. When the size of the image sequence is fixed, a suitable scale can be determined so that the Retinex algorithm achieves a better enhancement effect. In practical applications, the scale parameter can be set according to our own needs; for example, when more detailed image information is wanted, the scale parameter only needs to be set to a small value. A single-scale Retinex algorithm cannot guarantee that detail enhancement and color fidelity both achieve good results at the same time.
3.3. Example Application and Analysis. In the actual shooting and acquisition of the video image sequence used in the experiment, the complexity of the imaging environment introduces random, disorderly noise into the sequence. Therefore, for the accuracy of the experiment, the video image sequence should be filtered before further image processing. In this paper, a Gaussian filter with a 5 × 5 window is used to preprocess the video image sequence (3 × 3 and 7 × 7 windows can also be selected). The Gaussian filter is very effective at suppressing noise that obeys a normal distribution, it is a very effective low-pass filter in both the spatial and the frequency domain, and its smooth system function avoids ringing.
In this paper, different detection methods are used to process video sequences in different scenes, but whether it is background subtraction or the interframe difference method in static scenes, or the optical flow method in dynamic scenes, all of them operate on the gray values of the image pixels, so the collected color images must be converted into gray images before the experiment. Figure 8 shows the result of Gaussian filtering and gray-scale processing for video monitoring.
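Both the gray-scale conversion and the Gaussian preprocessing mentioned above reduce to two OpenCV calls; the 5 × 5 window follows the choice in the text, while the file names are hypothetical placeholders and sigma is left for OpenCV to derive from the window size.

```python
import cv2

frame = cv2.imread("raw_frame.png")                 # hypothetical colour frame
assert frame is not None, "expected a captured video frame on disk"

# All of the detection methods operate on gray values, so convert first.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# 5x5 Gaussian filter as in the text (3x3 or 7x7 windows are also possible);
# sigma = 0 lets OpenCV derive it from the window size.
denoised = cv2.GaussianBlur(gray, (5, 5), 0)
cv2.imwrite("preprocessed_frame.png", denoised)
```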
To estimate the optical flow of the corner points in the next frame, a corner matching algorithm is also needed to find the matching points in that frame. Among corner matching algorithms, template matching has the advantages of easy template selection, simple calculation, easy implementation, and high accuracy, and it is widely used in image registration; normalized cross-correlation (NCC) is its most outstanding representative.
When the corner points are matched, the correlation coefficient is first calculated between a corner point in image 1 and each corner point in a rectangular window in image 2, and the corner point with the largest NCC value is taken as the matching point, giving a matching point set; if the same matching point pair is found in both matching point sets, that corner point pair is regarded as a match. Figure 9 compares the video monitoring Gaussian filtering correlation coefficients.
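Corner matching by normalised cross-correlation can be sketched as follows: for each corner of image 1, NCC scores are evaluated over a local search window of image 2 and the position with the largest score is taken as the match. Patch size, search radius, and the synthetic shifted test pair are illustrative assumptions.

```python
import cv2
import numpy as np

def match_corner_ncc(img1, img2, corner, patch=7, search=15):
    """Match one corner (x, y) from img1 into img2 by maximising the
    normalised cross-correlation over a local search window."""
    x, y = int(corner[0]), int(corner[1])
    r, s = patch // 2, search // 2
    tmpl = img1[y - r:y + r + 1, x - r:x + r + 1]
    win = img2[y - r - s:y + r + s + 1, x - r - s:x + r + s + 1]
    scores = cv2.matchTemplate(win, tmpl, cv2.TM_CCORR_NORMED)   # NCC scores
    _minv, best, _minloc, (bx, by) = cv2.minMaxLoc(scores)
    return (x - s + bx, y - s + by), best

# Synthetic test pair: a textured image and a copy shifted 3 px right, 2 px down.
rng = np.random.default_rng(0)
img1 = rng.integers(0, 256, (120, 160)).astype(np.uint8)
img2 = np.roll(np.roll(img1, 2, axis=0), 3, axis=1)

corners = cv2.goodFeaturesToTrack(img1, maxCorners=50, qualityLevel=0.01,
                                  minDistance=10)
margin = 20                      # keep corners away from the image border
for c in corners.reshape(-1, 2):
    if margin < c[0] < 160 - margin and margin < c[1] < 120 - margin:
        match, score = match_corner_ncc(img1, img2, c)
        print(tuple(c), "->", match, "NCC =", round(float(score), 3))
```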
Figure 7: Comparison of the errors in the evaluation of athletes‘ physical condition (%) across samples, groups, types, and databases.
We select a video image sequence collected by an indoor surveillance camera, that is, a sequence with a static background, for the experiment. The frame immediately before the first frame in which the moving target appears is selected as the background image; the video is then read through the software, and each frame is collected in real time, subtracted from the background image, and thresholded. Here, we randomly select three frames from the video image sequence, subtract each selected frame from the background image, and apply threshold processing.
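The background-difference experiment just described reduces to an absolute difference against the stored background frame followed by thresholding; a minimal sketch, assuming a hypothetical video path and an illustrative fixed threshold:

```python
import cv2

cap = cv2.VideoCapture("indoor_surveillance.avi")      # hypothetical video path
ok, bg = cap.read()                                    # frame before the target appears
assert ok, "expected a readable surveillance video"
background = cv2.cvtColor(bg, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, background)               # subtract the background image
    _t, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)   # threshold processing
    print("foreground pixels:", int(cv2.countNonZero(mask)))
cap.release()
```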
The final area obtained is the moving target area. (1) If the MBD point obtained in Step 1 lies on the central 3 × 3 rectangular box, search the 3 × 3 rectangular box centered on this MBD point to obtain the next MBD point,
Figure 9: Comparison of video monitoring Gaussian filtering correlation coefficients (video monitoring filter correlation coefficient).
[7] S. Kanagamalliga and S. Vasuki, “Contour-based object tracking in video scenes through optical flow and gabor features,” Optik, vol. 157, pp. 787–797, 2018.
[8] B. Du, S. Cai, and C. Wu, “Object tracking in satellite videos based on a multiframe optical flow tracker,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 12, no. 8, pp. 3043–3055, 2019.
[9] G. Singh, A. Khosla, and R. Kapoor, “Crowd escape event detection via pooling features of optical flow for intelligent video surveillance systems,” International Journal of Image, Graphics and Signal Processing, vol. 11, no. 10, pp. 40–49, 2019.
[10] C. Z. Dong, O. Celik, F. N. Catbas, E. J. O’Brien, and S. Taylor, “Structural displacement monitoring using deep learning-based full field optical flow methods,” Structure and Infrastructure Engineering, vol. 16, no. 1, pp. 51–71, 2020.
[11] Z. Ge, F. Chang, and H. Liu, “Multi-target tracking based on Kalman filtering and optical flow histogram,” in 2017 Chinese Automation Congress (CAC), pp. 2540–2545, Jinan, China, 2017.
[12] R. Allaoui, H. H. Mouane, Z. Asrih, S. Mars, and I. El Hajjouji, “FPGA-based implementation of optical flow algorithm,” in 2017 International Conference on Electrical and Information Technologies (ICEIT), pp. 3–5, Rabat, Morocco, 2017.
[13] Y. Ma, “An object tracking algorithm based on optical flow and temporal–spatial context,” Cluster Computing, vol. 22, no. S3, pp. 5739–5747, 2019.
[14] G. Deng, Z. Zhou, S. Shao, X. Chu, and C. Jian, “A novel dense full-field displacement monitoring method based on image sequences and optical flow algorithm,” Applied Sciences, vol. 10, no. 6, p. 2118, 2020.
[15] A. Ullah, K. Muhammad, J. Del Ser, S. W. Baik, and V. H. C. de Albuquerque, “Activity recognition using temporal optical flow convolutional features and multilayer LSTM,” IEEE Transactions on Industrial Electronics, vol. 66, no. 12, pp. 9692–9702, 2018.
[16] J. Zhou and C. Kwan, “Anomaly detection in low quality traffic monitoring videos using optical flow,” in Pattern Recognition and Tracking XXIX, International Society for Optics and Photonics, Orlando, Florida, United States, 2018.
[17] S. S. Sengar and S. Mukhopadhyay, “Detection of moving objects based on enhancement of optical flow,” Optik, vol. 145, pp. 130–141, 2017.
[18] A. Ladjailia, I. Bouchrika, H. F. Merouani, N. Harrati, and Z. Mahfouf, “Human activity recognition via optical flow: decomposing activities into basic actions,” Neural Computing and Applications, vol. 32, no. 21, pp. 16387–16400, 2020.
[19] J. Javh, J. Slavič, and M. Boltežar, “Experimental modal analysis on full-field DSLR camera footage using spectral optical flow imaging,” Journal of Sound and Vibration, vol. 434, pp. 213–220, 2018.
[20] D. O. Yimin, C. Fudong, L. I. Jinping, and C. Wei, “Abnormal behavior detection based on optical flow trajectory of human joint points,” in 2019 Chinese Control And Decision Conference (CCDC), pp. 653–658, Nanchang, China, 2019.
[21] H. Chen, S. Ye, O. V. Nedzvedz, and S. V. Ablameyko, “Application of integral optical flow for determining crowd movement from video images obtained using video surveillance systems,” Journal of Applied Spectroscopy, vol. 85, no. 1, p. 23, 2018.
[22] N. A. Mohamed and M. A. Zulkifley, “Moving object detection via TV-L1 optical flow in fall-down videos,” Bulletin of Electrical Engineering and Informatics, vol. 8, no. 3, pp. 839–846, 2019.
[23] B. Jiang, X. Yin, and H. Song, “Single-stream long-term optical flow convolution network for action recognition of lameness dairy cow,” Computers and Electronics in Agriculture, vol. 175, article 105536, 2020.
[24] O. I. Al-Sanjary, A. A. Ahmed, A. A. Jaharadak, M. A. Ali, and H. M. Zangana, “Detection clone an object movement using an optical flow approach,” in 2018 IEEE Symposium on Computer Applications & Industrial Electronics (ISCAIE), pp. 388–394, Penang, Malaysia, 2019.