
Rainfall Prediction Using Machine Learning Models: Literature Survey

Chapter · January 2022
DOI: 10.1007/978-3-030-92245-0_4



Rainfall Prediction Using Machine Learning
Models: Literature Survey

Eslam A. Hussein1⋆, Mehrdad Ghaziasgar1, Christopher Thron2, Mattia Vaccari3, and Yahlieel Jafta1

1 Department of Computer Science, University of the Western Cape, Cape Town, 7535, South Africa
[email protected] (E.A.H.), [email protected] (M.G.), [email protected] (Y.J.)
2 Department of Science and Mathematics, Texas A&M University–Central Texas, Killeen, TX 76549, USA
[email protected]
3 Department of Physics and Astronomy, University of the Western Cape, Cape Town, 7535, South Africa
[email protected]

Abstract. Research on rainfall prediction contributes to different fields that have a huge impact on our daily life. With the advancement of computer technology, machine learning has been extensively used in the area of rainfall prediction. However, some papers suggest that applications of machine learning in different fields are deficient in some respects. This chapter reviews 66 research papers that use machine learning tools to predict rainfall. The papers are examined in terms of the source of the data, output objective, input features, pre-processing, model used, and the results. The review shows questionable aspects present in many studies. In particular, many studies lack a baseline predictor for comparison. Also, many references do not provide error bars for prediction errors, so the significance of differences between prediction methods cannot be determined. In addition, some references utilize practices that permit data leakage, leading to overestimates of predictive accuracy.

Keywords: forecasting, short and long term data, geophysical, deep learning, sequence prediction, data leakage, baselining, error bars, shuffling, seasonality.

1 Introduction

Natural processes on Earth can be classified into several categories, including hydrological processes like storm waves and groundwater; biological processes like forest growth; atmospheric processes like thunderstorms and rainfall; human processes like urban development; and geological processes like earthquakes. The field of physical geography seeks to investigate the distribution of the different features/parameters that describe the landscape and functioning of the Earth by analyzing the processes that shape it. These features/parameters have been referred to as geophysical parameters in the literature [38].

⋆ E.A.H. acknowledges financial support from the South African National Research Foundation (NRF CSUR Grant Number 121291 for the HIPPO project) and from the Telkom-Openserve-Aria Technologies Center of Excellence at the Department of Computer Science of the University of the Western Cape.
Rainfall is a key geophysical parameter that is essential for many applications
in water resource management, especially in the agriculture sector. Predicting
rainfall can help managers in various sectors to make decisions regarding a range
of important activities such as crop planting, traffic control, the operation of
sewer systems, and managing disasters like droughts and floods [32]. A number
of countries such as Malaysia and India depend on the agriculture sector as a
major contributor to the economy [32, 59] and as a source of food security. Hence,
an accurate prediction of rainfall is needed to make better future decisions to
help manage activities such as the ones mentioned above.
Rainfall is considered to be one of the most complicated parameters to fore-
cast in the hydrological cycle [32, 34, 53]. This is due to the dynamic nature of
environmental factors and random variations, both spatially and temporally, in
these factors [32]. Therefore, to address random variations in rainfall, several ma-
chine learning (ML) tools including artificial neural networks (ANN), k-nearest
neighbours (KNNs), decision trees (DT), etc. are used in the literature to learn
patterns in the data to forecast rainfall. In this chapter, a review of past work
in the area of rainfall prediction using ML models is carried out.
A number of related review papers exist as follows. The authors in [52] fo-
cused on reviewing studies that use ML for flood prediction, which closely resem-
bles rainfall prediction. The authors in [71] focused on the use of ML for generic
spatiotemporal sequence forecasting. Finally, the authors in [59] conducted a
survey on the use of ML for rainfall prediction; however, the study was limited to rainfall prediction in India.
This chapter serves as an addition to the field by surveying recent relevant
studies focusing on the use of ML in rainfall prediction in a variety of geographic
locations from 2016–2020. After detailing the methods used to forecast rainfall,
one of the important contributions of this chapter is to demonstrate various
pitfalls that lead to an overestimation of the performance of ML models in various papers. This in turn leads to unrealistic hype and expectations surrounding ML in the current literature. It also leads to an unrealistic understanding of the advancements in, and gains by, ML research in this field. It is
therefore important to clearly state and demonstrate these pitfalls in order to
help researchers avoid them.
The rest of this review is organized as follows: Section 2 discusses the method-
ology used to survey and review the literature which defines the discussion
framework used in all subsequent sections; Section 3 describes the data sets
used; Section 4 provides a description of the output objective in the various
papers; Sections 5 – 7 describe the input features used, common methods of pre-
processing and the ML models used; Section 8 summarizes the results obtained
in various studies; and Section 9 then provides a discussion of the procedures
used, specifically pointing out the previously mentioned pitfalls that lead to over-estimated and unrealistic results. The section that follows concludes the
paper.

2 Methodology

This chapter carries out an in-depth review of relevant literature to reveal the
different practices authors take to predict rainfall. The review covers several
aspects which relate to the input into, output from, and methods used in the
various systems devised in the literature for this purpose. The review specif-
ically focuses on studies that use supervised learning for both regression and
classification problems.
Google Scholar was used to collect papers from 2016 to 2020, with the following key words: (“machine learning” OR “deep learning”) AND (“precipitation prediction” OR “rainfall prediction” OR “precipitation nowcasting”). Almost 1,240 results were obtained, and of these only supervised rainfall prediction papers that used meteorological data from e.g. radar, satellites and stations were
selected, while papers that used data from normal cameras e.g. photographs
were excluded. Even though this review focuses on the prediction of rainfall, the
methods used to achieve this can be extended and applied to other geophysical
parameters like temperature and wind. Hence, the conclusions and discussions
of this chapter can be adapted to other parameters.
The total number of reviewed papers is 66, comprising conference and journal papers published from 2016–2020, except for one paper [69] which was published in 2015 and is a seminal work in this field. Figure 1 shows the reviewed studies per year. Tables that summarize the reviewed papers can be found in Appendices A and B.

Fig. 1. Pie chart showing proportions by publication year for papers in this review.
Figure 2 shows the generic structure of supervised ML models. This structure
was used as a guideline to construct a set of questions used to systematically
categorize and analyze the 66 papers. The questions are as follows:

1. What data sets are used and where are they sourced?
2. What is the output objective in the various papers, in terms of the goal of
prediction/forecasting?
3. What input features are extracted from the data set(s) to be used to achieve
the output objective?
4. What pre-processing methods are used prior to classification/regression?
5. What ML models are used to achieve classification/regression towards the
output objective?
6. What results were obtained from the above-mentioned steps, and how were
they reported?

Fig. 2. Basic flow for building machine learning (ML) models [52]

These questions provide the framework for the rest of this paper. Sections
3–8 address questions 1–6 in sequence. Section 9 discusses the findings in the
previous six sections, and Section 10 provides conclusions.

3 Data Sets

This section provides a breakdown of the data sets used in the 66 studies sur-
veyed, based on the sources of the data sets, availability, and geographical loca-
tions where the data sets were collected.
Figure 3 (top) provides a breakdown of the studies based on the source/availability
of the data sets used in those studies. About 75% of the studies used private data,
sourced from meteorological stations of their respective countries [61, 62, 85, 86,
83, 72, 56, 7, 27, 47, 39, 69, 73, 37, 6, 18, 76, 68, 79, 77, 70, 26, 16, 87, 28, 23, 20, 41, 63,
74, 4, 1, 64, 67, 8, 49, 10, 55, 2, 40, 31, 29, 81, 5, 19, 11, 14, 48, 80, 30, 50]. Most of these
data sets are not readily available for use. Only 10% of the studies use data
sourced from freely available sources such as Kaggle (www.kaggle.com), and the
National Oceanic and Atmospheric Administration (NOAA) [65, 15, 57, 84, 60,
22, 5]. The remaining 13% of studies in this review use data from both private
and publicly available sources [78, 82, 25, 17, 33, 12, 13, 42, 3].
Figure 3 (bottom) summarizes the geographical regions included in this review.
The continent of Asia accounts for around 68% of all studies [62, 82, 83, 47, 39, 69,
65, 70, 12, 13, 74, 4, 8, 49, 22, 42, 10, 29, 19, 11, 48, 80, 86, 17, 27, 33, 37, 18, 76, 68, 15,
79, 77, 26, 28, 81, 30, 7, 23, 41, 63, 64, 40, 50]. Of these, studies that focus on China
and India make up almost one quarter and one tenth respectively of all studies in
this review. The remaining Asian studies focus on countries such as Iran, South
Korea and Japan.
The rest of the chart is distributed as follows: the Americas make up 12.1%
of studies [78, 72, 73, 57, 84, 16, 87, 14]; Europe accounts for 9.1% [25, 6, 20, 67, 55,
3]; Australia comprises 6.1% [56, 1, 2, 31]; and the remaining 4.5% either involve
multiple regions, or involve the use of the whole global map [61, 60, 5].

Fig. 3. Pie chart of the percentage of data sets in this survey in terms of
source/availability (top) and geographical region (bottom).
4 Output Objectives

The output objectives of rainfall forecasting studies can be analyzed in terms


of three factors: the forecasting time frame of the output; whether the output
is continuous or discrete; and the dimensionality of the output. The forecast-
ing time frame of the output specifies the time span of the forecast made, i.e.
hourly, daily, monthly etc. The output can also be discrete (e.g. classification into
“Rain”/“No Rain” classes), or continuous (e.g. predicting the quantity of rain),
or both. Finally, the output can be 1-dimensional (1D) in the form of a single
number or label representing a rainfall measure or category, or 2-dimensional
(2D) in the form of a geospatial map of rainfall measures or categories on a grid
of the geographical location under study.
In terms of the forecasting time frame, the studies can be broken down into
those that make long-term predictions and those that focus on making short-
term predictions. In this review, long-term prediction is defined as predictions
of one month up to a year ahead, while short-term prediction can be a few
minutes ahead (e.g. 5–15 minutes), up to one or more days ahead. Figure 4 (top)
shows the distribution of papers’ forecasting time frames. Of the 66 reviewed
papers, 30 papers (45%) make long-term predictions, the majority of which focus
on monthly forecasting [20, 41, 63, 74, 4, 1, 64, 67, 8, 49, 22, 42, 10, 55, 2, 40, 31, 29,
81, 5, 14, 11, 19, 48, 80, 50, 3]. Only two studies focus on seasonal forecasting [28,
23], while a single study aims towards yearly forecasting [30]. As for studies that
focus on short-term prediction, these are broken down nearly evenly between
daily [61, 62, 83, 25, 72, 56, 7, 33, 15, 57, 87, 12, 13], hourly [78, 85, 86, 82, 17, 27, 26,
84, 16, 60], and one or more minutes ahead [47, 39, 69, 73, 37, 65, 6, 18, 76, 68, 79,
77, 70].
In terms of the type of output, i.e. discrete (classification) or continuous (regression), Figure 4 (bottom) shows the distribution between the different output
types. The majority carried out regressions to obtain continuous output [28, 23,
20, 41, 63, 74, 4, 1, 64, 67, 8, 49, 22, 42, 10, 55, 2, 40, 31, 29, 81, 5, 14, 11, 19, 48, 80, 61,
62, 78, 85, 86, 82, 76, 79, 70, 26], while slightly more than one third carried out
classification into discrete classes [83, 25, 72, 56, 7, 17, 27, 47, 39, 33, 69, 73, 37, 65,
18, 68, 16, 60, 87, 12, 13, 30, 50, 3]. Only 3 studies applied both classification and
regression [6, 77, 84].
Of the studies that applied classification, most carried out binary classification
[83, 25, 72, 56, 7, 17, 27, 47, 39, 33, 69, 73, 37, 65, 18, 68, 60, 87, 12], with the majority of these classified into “Rain”/“No Rain” classes. Relatively fewer studies
carry out classification into multiple classes [33, 16, 13, 30, 50], varying
from three to five classes.
Finally, for the dimensionality of the output, 54 out of 66 studies produce 1D
output [28, 23, 8, 49, 22, 42, 10, 55, 2, 40, 31, 30, 50, 85, 86, 82, 83, 25, 72, 56, 3, 61, 62,
78, 7, 17, 27, 47, 39, 33, 57, 26, 84, 16, 60, 87, 12, 13], with the remaining 12 studies
producing a series of 2D images as output [69, 73, 37, 65, 6, 18, 76, 68, 15, 79, 77, 70].
Of the studies with 2D output, all except one [15] involve short-term prediction
intervals of 10 minutes or less.
Fig. 4. Pie chart of the percentage of data sets in this survey in terms of forecasting
time frame (top) and the discrete (classification)/continuous (regression) nature of the
prediction output (bottom).
A connection between forecasting time frame and the discrete/continuous
nature of the output can be observed. In general, studies involving longer-term
predictions tend to make use of regression which produces continuous output,
whereas on short-term time frames, studies tend towards using classification that
gives discrete output. Specifically, 27 out of the 30 papers that focus on long-
term prediction carry out regression [28, 23, 20, 41, 63, 74, 4, 1, 64, 67, 8, 49, 22, 42,
10, 55, 2, 40, 31, 29, 81, 5, 14, 11, 19, 48, 80], and 23 of the 36 papers that focus on
short-term prediction carry out classification [83, 25, 72, 56, 7, 17, 27, 47, 39, 33,
69, 73, 37, 65, 6, 18, 68, 84, 16, 60, 87, 12, 13]. This relation may be explained by
the fact that longer-term studies usually aim at predicting averages over several
days (up to a month), while short-term studies predict instantaneous conditions.
Multi-day averaged data assumes a continuous range of values, while in instan-
taneous rainfall datasets most values are null. It follows that classification into
rain/no rain is useful for short term, but not for long-term prediction.

5 Input Features

In order to make future predictions, studies make use of data from one or more
time steps (called “lags” or “time lags”) as input features to predict one or more
future lags. For example, to predict rainfall at lag T , two previous time lags
(T − 1) and (T − 2) may be used.
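As a minimal illustration of this lag construction (the function name and toy series below are hypothetical, not taken from any surveyed paper), a rainfall series can be turned into supervised-learning pairs in which previous lags predict the next value:

```python
import numpy as np

def make_lag_features(series, n_lags):
    """Turn a 1D series into (X, y) pairs where each row of X holds
    the previous n_lags values and y is the value to predict."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(series[t - n_lags:t])  # lags T-n_lags .. T-1
        y.append(series[t])             # target at lag T
    return np.array(X), np.array(y)

rain = np.array([0.0, 1.2, 0.0, 3.4, 0.5, 0.0])
X, y = make_lag_features(rain, n_lags=2)
# X[0] = [0.0, 1.2] is used to predict y[0] = 0.0, and so on.
```

Any regression or classification model can then be fit on (X, y); the choice of `n_lags` corresponds to the sequence length discussed below.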
The actual input features in each lag vary across studies. In general, the
input features used in the studies in this review were found to be of two types:
1D input features in which each time lag in the data set represents one or a set
of geophysical parameters that have been collected at static known locations i.e.
meteorological stations; and 2D input features in which each time lag in the data
set is a 2D spatial map of values representing rainfall in the geographical area
under review, usually collected by satellite or radar.
1D input features used include geophysical parameters such as tempera-
ture, humidity, wind speed and air pressure [63, 4, 8, 22, 3, 56, 47, 39, 44, 81]. In
a smaller number of cases, climatic indices such as the Pacific Decadal Oscillation may also be used [28, 1, 42, 30, 78]. Studies that use 1D input features tend
to use a relatively small number of overall input features, ranging from 2–12
features used for prediction.
With 2D input features, one or more images are taken as input features,
depending on the number of time lags used as input e.g. two time lags used as
input implies that two images are used as input. The number of time lags used
as input is henceforth referred to as the “sequence length”.
There is no rule of thumb for how many time lags should be used as input, and
this is mostly selected arbitrarily, and in fewer cases via trial and error. The vast
majority of the studies under review select a fixed sequence length. The sequence
length can be viewed as a hyper-parameter that affects the prediction outcome,
but the optimization of this hyper-parameter is not investigated in the studies
under review. These studies were found to be more focused on the machine learning component, mostly on devising new deep learning architectures, than on selecting and tuning other aspects of their systems.
The most common sequence lengths used are 5 frames [73, 37, 65, 79] and 10
frames [73, 37, 65, 79]. Other sequence lengths are also used, such as 2 [73], 4 [65],
7 [79] and 20 [37].
Studies that use 2D input features tend to use a relatively large number
of input features. This can be attributed to the fact that the feature vectors
produced are associated with one or more 2D images, resulting in vectors of
size (Image width × Image height × Sequence length). Overall, the number of
features can grow as high as several thousands.
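The size formula above can be checked with a small sketch (the 64 × 64 image size and sequence length of 5 are illustrative values only, not figures from any surveyed study):

```python
import numpy as np

width, height, seq_len = 64, 64, 5           # illustrative values
frames = np.zeros((seq_len, height, width))  # e.g. 5 radar frames
flat = frames.reshape(-1)                    # one input feature vector
# Image width x Image height x Sequence length = 64 * 64 * 5 = 20480
assert flat.size == width * height * seq_len
```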
Typically, 1D or 2D inputs are used to predict 1D or 2D outputs, respec-
tively. As noted in the previous section, longer-term predictions tend to make
1D predictions, so it follows these studies also tend to use 1D data [28, 23, 20,
41, 63, 74, 4, 1, 64, 8, 49, 22, 42, 10, 55, 2, 40, 29, 81, 14, 11, 19, 48, 50, 3], while those
that make shorter-term predictions tend towards the use of 2D data [69, 73, 37,
65, 6, 18, 76, 68, 15, 79, 77, 70, 13]

6 Input Data Pre-processing

Before ML tools are applied to make predictions on the available data, the in-
put data is usually pre-processed to reformat the data into a form that will
make training of, and prediction by, the ML tool(s) easier and faster. The pre-
processing techniques usually applied in geophysical parameter forecasting can
be broken down into three broad categories, namely data imputation; feature
selection/reduction; and data preparation for classification. The following sub-
sections describe these categories, as well as their application in the papers in
this review.

6.1 Data Imputation

Data sets are regularly found to have missing data entries, which are caused by a range of factors such as data corruption and sensor malfunction. This
is a serious issue faced by researchers in data mining or analysis, and needs to
be addressed as part of pre-processing before feature selection/preparation and
training.
The techniques used to infer and substitute missing data are collectively
referred to as data imputation techniques. Data imputation is challenging and
is an on-going research area. In the papers in this review, it was found that
very little focus was placed on this problem, with most of the studies making
use of simple statistical techniques such as averaging to interpolate missing data
entries [74, 31, 14, 11, 83, 56]. While not used in the papers in this review, more
advanced data imputation techniques exist beyond the use of simple statistics,
such as the use of ML to impute the data. The interested reader may refer to
[75, 66, 58].
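As a sketch of the simple statistical imputation described above (the function name and toy values are our own, not from the surveyed papers):

```python
import numpy as np

def impute_mean(values):
    """Replace NaN entries with the mean of the observed values --
    the kind of simple statistical imputation most surveyed papers use."""
    vals = np.asarray(values, dtype=float)
    observed_mean = np.nanmean(vals)          # mean over non-missing entries
    return np.where(np.isnan(vals), observed_mean, vals)

daily_rain = [2.0, float("nan"), 4.0, float("nan"), 6.0]
filled = impute_mean(daily_rain)              # -> [2. 4. 4. 4. 6.]
```

More careful variants interpolate between neighbouring time steps rather than using a global mean, which better respects temporal structure.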
6.2 Feature Selection/Reduction

Feature selection/reduction aims to determine and use salient features in the


data, and disregard irrelevant features in the data. This helps to reduce train-
ing time, decrease the model complexity and increase its performance. In the
papers in this review, it is observed that feature selection is carried out either
automatically or manually.
For automatic feature selection, various algorithms are used to determine the
most salient features in the data. The most common method used in the papers
in this review involved the use of deep learning techniques such as ANNs and
convolutional neural networks (CNNs), to select/reduce features automatically,
most especially when high-dimensional data such as radar and satellite images
was used [69, 73, 37, 65, 18, 68, 15, 79, 77, 70, 60, 87, 12, 13, 31, 80]. The use of deep
learning techniques was found to be much more common with short-term data
sets which are generally much larger, therefore making it possible to achieve
convergence on deep networks. Another category of ML tools used for automatic
feature selection includes ensemble methods like random forests (RFs) which
automatically order features in terms of importance, as used in [22, 11, 3, 78, 82,
83, 25, 72, 7]. Finally, principal component analysis (PCA) has also been used to reduce features in [28, 61, 86, 25, 27, 57].
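A minimal PCA-style feature reduction can be sketched with an SVD (this is an illustrative implementation, not the exact procedure of any particular surveyed study, which would typically use a library PCA):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project samples onto their top principal components."""
    Xc = X - X.mean(axis=0)                     # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T             # reduced feature matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 12))                  # e.g. 12 raw station features
X_small = pca_reduce(X, n_components=3)         # 3 retained components
```

Components come out ordered by explained variance, so truncating to the first few keeps the most informative directions.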
As regards manual feature selection, researchers may either use prior experience and trial and error to manually select relevant features, as in [23, 20,
41, 74, 64, 67, 8, 55, 31, 29, 14], or use correlation analysis methods such as
auto correlation to indirectly inform the manual feature selection process as in
[28, 63, 1, 49, 22, 42, 2, 40, 30]. Where images are used, image cropping and resiz-
ing is applied to, respectively, dispose of irrelevant/static image segments and
reduce the number of features [69, 73, 37, 65, 18, 68, 15, 79, 77, 70].
Manual feature selection is much more common with long-term data sets,
with very few long-term prediction studies in this review making use of automatic
feature selection methods. This is partly attributed to the relatively smaller
amount of data available in these sets, as mentioned before, which makes it
challenging, or even rules out, the application of e.g. deep learning methods for
automatic feature selection.
The Earth's revolution around the Sun causes many data sets to exhibit seasonal behavior on an annual basis, i.e. annual periodicity [24, 9]. This is
most prominent in long-term data sets and less prominent in shorter-term data
sets. Addressing seasonality in long-term data sets is critical when traditional
time series models are used, since these models assume stationarity [24, 54], while
seasonality and trends in general make a time series non-stationary. Converting data from a non-stationary to a stationary state is the process of generating a time series with statistical properties that do not change over time. For
further information about seasonal and non-stationary data sets and the conver-
sion of non-stationary to stationary time series, the interested reader is referred
to [54]. Another way to deal with seasonality is the inclusion of features that
exhibit seasonal behavior, such as using the value from the same month of the previous year.
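The two feature-based approaches just described (the lag T − 12 value, and the month index) can be sketched together; the function and toy data here are hypothetical, standing in for real monthly rainfall:

```python
def seasonal_features(monthly, t):
    """Feature set for predicting month t: the previous month,
    the same month last year (lag T-12), and the month index 1-12."""
    return {
        "lag_1": monthly[t - 1],
        "lag_12": monthly[t - 12],      # same month, previous year
        "month_index": (t % 12) + 1,    # 1=January .. 12=December
    }

monthly_rain = list(range(30))          # 30 months of toy data
feats = seasonal_features(monthly_rain, t=24)
# with t=0 taken as January, t=24 is a January again: month_index = 1
```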
Figure 5 shows the methodologies used in the long-term prediction studies in
this review. 11 of the 30 long-term papers (37%) did not address seasonality in
the data [28, 23, 63, 64, 8, 22, 42, 48, 50, 3, 30], while the remaining 19 papers used
some means of addressing seasonality in the data [11, 4, 67, 81, 49, 10, 14, 20, 41,
74, 1, 55, 2, 40, 31, 29, 5, 19, 80].
In the papers that addressed seasonality, four unique approaches were identi-
fied, and some were combined with others. The first approach involves including
features from lag T − 12 (same month previous year) in the feature set used
to predict rainfall at month T [49, 10, 14, 20, 41, 74, 1, 55, 2, 40, 31, 29, 5, 19, 80].
A less common approach is to use the index of the current month in the year
(1=January, . . . , 12=December) as an input feature [31, 10].
Alternative approaches include performing time series decomposition, either
using singular spectrum analysis as in [11] or wavelet transformation as in [4, 81,
67]. One paper [10] combined time series decomposition using singular spectrum
analysis with the inclusion of features from lag T − 12 in the feature set. This
has been included in the segment labelled “Combination” in Figure 5.
The final approach used to address seasonality takes the form of data de-
seasonalization by subtracting the monthly averages from the data as in [19,
49]. All of the papers in this review that used this approach combined this
subtraction with the first approach i.e. including features from lag T − 12 in the
feature set. These two papers have also been included in the segment labelled
“Combination” in Figure 5.
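The subtraction of monthly averages can be sketched as follows (an illustrative implementation, assuming the series starts in January and spans whole years):

```python
import numpy as np

def deseasonalize(monthly):
    """Subtract each calendar month's long-run average from the
    series, leaving anomalies with the annual cycle removed."""
    monthly = np.asarray(monthly, dtype=float)
    anomalies = monthly.copy()
    for m in range(12):
        anomalies[m::12] -= monthly[m::12].mean()  # per-month mean removed
    return anomalies

# On two identical years, every anomaly is exactly zero.
series = np.tile(np.arange(12, dtype=float), 2)
anoms = deseasonalize(series)
```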

Fig. 5. Methods used to account for seasonality in studies with long-term data, by
percentage.
6.3 Data Preparation for Classification

When attempting to carry out classification into discrete classes, it is either


necessary to use a data set in which the desired output variable is discrete, or to
convert a desired continuous-valued output variable into discrete classes. This
involves setting the desired number of classes, which is usually done manually and
arbitrarily, followed by determining the range of values represented by each class
i.e. determining the thresholds that divide the continuous scale into the desired
classes. Finally, where the number of instances across classes is imbalanced, it is
necessary to balance them.
In the papers in this survey that carried out classification, most made use of
data that was continuous, yet very few provide details on the process used to
convert from a continuous to a discrete scale. Select studies in this survey that
provide information about their data preparation process are described below.
In converting from continuous to discrete data, after manually specifying the
number of classes (which has been explained in Section 4), studies automate the
selection of the class thresholds using clustering tools, specifically k-means and
k-medoids [16, 50, 3]. Another approach taken is to manually determine suitable
thresholds, by performing a series of experiments to compare various threshold
values [70]. To address any resulting class imbalances, researchers may perform
random down-sampling to obtain an equal sample distribution across classes as
in [3, 56, 47].
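The discretization and balancing steps can be sketched as follows (the class thresholds of 0.1 mm and 10 mm are illustrative only, not standard values from the literature):

```python
import numpy as np

def to_classes(rain_mm, thresholds=(0.1, 10.0)):
    """Map continuous rainfall to classes 0 ('no rain'),
    1 ('light'), 2 ('heavy'); thresholds here are illustrative."""
    return np.digitize(rain_mm, thresholds)

def downsample_balance(X, y, rng):
    """Randomly down-sample every class to the minority-class count."""
    counts = np.bincount(y)
    n = counts[counts > 0].min()
    keep = np.concatenate([
        rng.choice(np.where(y == c)[0], size=n, replace=False)
        for c in np.unique(y)])
    return X[keep], y[keep]

rain = np.array([0.0, 0.05, 0.5, 2.0, 15.0, 20.0, 30.0])
labels = to_classes(rain)                  # -> [0 0 1 1 2 2 2]
Xb, yb = downsample_balance(rain.reshape(-1, 1), labels,
                            np.random.default_rng(0))
```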

7 Machine Learning Techniques Used

The studies in this survey made use of a wide range of ML techniques which can
be subdivided into two main groups: “classical” techniques such as multivariate
linear regression (MLR), KNNs, ANNs, support vector machines (SVMs), and RFs; and modern deep learning methods such as CNNs and long short-term memory (LSTM) networks. It was observed
that classical ML models tended to work with 1D data from meteorological
stations, such as in [46, 28, 23, 4, 62, 82, 63] for short-term data and [28, 20, 41,
63, 1, 67, 49, 22, 48, 30, 3] for long-term data.
Some papers use hybrid models that combine two or more approaches. A
popular hybrid approach is to combine ML with optimization tools such as
genetic algorithms and particle swarm optimization to optimize hyper-parameters [48, 10,
49, 26, 27]. Multiple ML techniques are combined in [19, 72, 61], and ML is used
with ARIMA in [62].
Deep learning models usually require huge datasets to avoid overfitting on
the data, which explains their popularity among short term data sets, especially
those using 2D data [69, 73, 37, 65, 6, 18, 76, 68, 15, 79, 77, 70, 57, 26, 84, 16, 60, 87,
12, 13]. 2D data in particular has a huge feature space, which requires authors
to implement automated feature reduction models like CNNs [84, 16, 60, 87, 12].
In order to accommodate the time dimension in the data, many researchers try to adapt time series models such as LSTMs for 1D data in [40, 29, 81, 5, 14, 85, 60]. For 2D data, models combining CNNs with LSTMs (designated as ConvLSTM models) were first used in [69] in 2015, and subsequently several variations have been implemented [73, 37, 65, 6, 18, 76, 68, 15, 79, 77, 70].

8 Reporting of Results and Accuracy Measures

Several different metrics are used in the literature to measure the performance of the ML models, according to the type of the problem. In classification problems, authors tend to use metrics such as precision, recall, and accuracy [30, 50, 3, 83, 25, 72, 56, 17, 47, 39, 33, 73, 84, 16, 60, 87, 12]. If the data is not balanced, then the F1-score is used rather than accuracy, since accuracy does not take the imbalance between the classes into account [83, 25, 72, 56, 73, 60]. For sequence classification prediction, other metrics are used, such as the critical success index (CSI) [65, 6, 18, 76, 68, 77, 70]. For continuous outputs, the mean absolute error (MAE) and the root mean squared error (RMSE) are the most commonly used metrics in the literature [61, 62, 78, 85, 86, 82, 28, 23, 20, 41, 63, 74, 4, 1, 64, 67, 8, 49, 22, 42, 10, 55, 2, 40, 31, 29, 81, 5, 14, 11, 19, 48].
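For concreteness, the common metrics above can be sketched as follows (illustrative implementations; the toy arrays are our own, not data from any surveyed paper):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error for continuous outputs."""
    d = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.sqrt(np.mean(d ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error for continuous outputs."""
    d = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.mean(np.abs(d)))

def csi(obs, pred):
    """Critical success index on binary rain/no-rain fields:
    hits / (hits + misses + false alarms)."""
    hits = np.sum(obs & pred)
    misses = np.sum(obs & ~pred)
    false_alarms = np.sum(~obs & pred)
    return hits / (hits + misses + false_alarms)

score = csi(np.array([1, 1, 0, 0], dtype=bool),
            np.array([1, 0, 1, 0], dtype=bool))   # 1 hit, 1 miss, 1 false alarm
```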
A direct comparison of these results across different papers is a nearly impossible task, since each paper uses its own models, pre-processing, metrics, data
sets and parameters. However, individual authors frequently compare multiple
algorithms, and there are a few ML algorithms that stand out as being most
frequently mentioned as better performers. ANNs and deep learning are most
frequently mentioned as best performing models, for both long-term prediction
[28, 41, 4, 1, 64, 8, 2, 80, 5, 40, 31, 29, 14, 19, 48] and especially for short-term pre-
diction [69, 73, 37, 65, 6, 18, 76, 68, 15, 79, 77, 70, 57, 26, 84, 16, 60, 87, 12, 13, 85, 86,
25, 39].
Other algorithms mentioned as best performers are SVMs in six studies [82, 47, 17, 27, 67, 49], ensemble methods in [78, 83, 63, 55, 81, 3], logistic regression in three studies [56, 7, 30], and KNNs in two studies [33, 20].

9 Discussion

The above sections clearly demonstrate that there is a robust, growing literature
on rainfall prediction, which covers an extremely wide variety of time-scales,
features used, pre-processing techniques, and ML algorithms used. From a high-
level perspective, the field can be divided into short versus long time scales (time
intervals of one day or less, versus intervals of a month or more), which tend
to have divergent characteristics.
Short term studies typically rely on huge datasets, and require deep learning
applied to large feature sets to find hidden patterns in those datasets. On the
other hand, long term studies rely more on pre-processing methods such as
feature selection, data imputation, and data balancing in order to make effective
predictions. ANNs and deep learning seem to be becoming increasingly prevalent
in long term studies as well as short term ones: since 2018, 7 of 23 papers on long-term
prediction utilized deep learning tools.
There are reasons to regard the trend towards more complicated models with
skepticism. Some recent studies have shown that much simpler models such as
KNNs can sometimes outperform advanced ML techniques like RNNs [43, 35, 45,
20, 36]. Similar findings have been reported for other ML applications, such as
the top-n recommendation problem [21].
These results underscore the importance of providing simple but statistically
well-motivated baselines to verify whether ML truly is effective in improving
predictive accuracy. However, many papers do not provide simple baselines, but
rather compare several variations or architectures of more advanced ML methods
such as SVR or MLP [51, 11, 48, 14, 61, 17, 27, 47, 26, 84, 60, 12, 13]. Overall, almost
half (48.2%) of the reviewed papers did not supply simple baselines.
Of those papers that did supply baselines, a variety of methods was used. For
short-term image data, the previous image is frequently used as an untrained
predictor for the next image [76, 68, 77, 70]. For monthly data, some papers use
MLR based on multiple previous lags [19, 28, 20, 41]; while same-month averages,
though statistically well-motivated, are used much less frequently [81].
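A minimal sketch of two such untrained baselines, persistence and same-month climatology, on a synthetic monthly rainfall series with a seasonal cycle (all values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 20-year monthly rainfall series: seasonal cycle plus noise (mm)
months = np.arange(240)
rain = 80 + 40 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 15, 240)

train, test = rain[:180], rain[180:]

# Baseline 1: persistence -- the previous month's value predicts the next
persist_pred = rain[179:239]

# Baseline 2: same-month climatology, computed from the training period only
clim = np.array([train[(months[:180] % 12) == m].mean() for m in range(12)])
clim_pred = clim[months[180:] % 12]

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

print("persistence RMSE:", rmse(test, persist_pred))
print("climatology RMSE:", rmse(test, clim_pred))
```

On strongly seasonal data the climatology baseline is the harder one to beat; a trained model that cannot outperform it adds little predictive value.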
Besides the issue of baselining, the use of error bars is essential for comparison
purposes, as it shows whether the improvements obtained by the models are
significant. Unfortunately, most of the ML literature does not provide error
bars around the measured metrics; in our reviewed literature, 88% of the papers
did not give error bars.
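One common way to obtain error bars without distributional assumptions is to bootstrap over the prediction residuals. A sketch on synthetic residuals (the residual distribution here is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

def rmse(err):
    return np.sqrt(np.mean(err ** 2))

# Hypothetical residuals (observed minus predicted rainfall, in mm)
errors = rng.normal(0, 10, size=200)

# Bootstrap: resample the residuals with replacement and recompute the
# metric; percentiles of the resampled metric give a 95% interval.
boot = np.array([rmse(rng.choice(errors, size=errors.size, replace=True))
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"RMSE = {rmse(errors):.2f} mm, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Two models whose bootstrap intervals overlap substantially should not be declared different performers on that metric alone.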
A final issue of concern is data leakage. Data leakage refers to allowing data
from the testing set to influence the training set. It typically occurs during
pre-processing of the data, and can take the following forms:
– Random shuffling, which involves choosing sequences from a common data
pool for both training and testing.
– Imputation, which involves filling missing records using statistical methods
applied to the entire data set (including both training and testing).
– De-seasonalization, which utilizes monthly averages computed from the entire data
set.
– Using current lags, e.g. using temperature at a time T to predict rainfall
at the same time T. (Depending on the application, this may or may not
constitute data leakage.)
– Combination, which uses two of the above-mentioned techniques.
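The shuffling and imputation pitfalls above can be avoided by splitting the series chronologically first and fitting all pre-processing statistics on the training period only. A minimal sketch on synthetic data (the features, coefficients, and missing-value rate are all made up):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 4))                  # 4 made-up climatic features
y = X @ np.array([2.0, -1.0, 0.5, 0.0]) + rng.normal(0, 0.5, 120)
X[rng.random(X.shape) < 0.05] = np.nan         # simulate missing records

# Chronological split -- no random shuffling of a time series
split = 96
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

# Compute imputation statistics on the TRAINING period only; taking
# column means over the full data set would leak test-set information.
col_means = np.nanmean(X_train, axis=0)
X_train = np.where(np.isnan(X_train), col_means, X_train)
X_test = np.where(np.isnan(X_test), col_means, X_test)

# Simple least-squares fit on the leakage-free features
coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
test_rmse = np.sqrt(np.mean((X_test @ coef - y_test) ** 2))
print("test RMSE (mm):", round(float(test_rmse), 3))
```

The same fit-on-train-only rule applies to normalization, de-seasonalization, and any other statistic derived from the data.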
Figure 6 shows the reviewed papers in terms of data leakage. The top chart
focuses on long term data, while the bottom focuses on short term data. We
mentioned previously that long term data often undergoes more pre-processing
than short term data. This is reflected in the figure: leakage-producing methods
are more than twice as common for long term as for short term data. Random shuffling
was performed in [28, 41, 42, 10, 29, 30, 50, 3] for long term data, and in [78, 27, 47,
57, 26, 16] for short term data. Data imputation was performed in [74, 31, 14, 11]
for long term data, and in [56] for short term data. Faulty de-seasonalization
was carried out in [49] for long term data. Use of current lags was implemented
only in [63]. Multiple leakage issues (denoted as “combination” in the
figure) were observed in [19, 83].
Fig. 6. Percentage of papers which introduced data leakage during pre-processing, for
long term data (top) and short term data (bottom).
10 Conclusions

In the area of rainfall prediction, 66 relevant papers are reviewed by examining
the data source, output objective, input features, pre-processing methods, models
used, and finally the results. The different pre-processing techniques used in the
literature, such as random shuffling, suggest that in some cases model performance
is inaccurately represented. The aim of this survey is to make researchers aware of
the different pitfalls that can lead to unrealistic model performance, which applies
not only to rainfall but to other time series data as well.

A Appendix: List of abbreviations


– ML Machine learning
– AD Author defined
– ANNs Artificial neural networks
– CNNs Convolutional neural networks
– LSTMs Long short term memory
– ConvLSTMs Convolutional layers with long short term memory
– RF Random forest
– SVMs Support vector machines
– DT Decision trees
– XGB Extreme gradient boosting
– LogReg Logistic regression
– MLR Multiple linear regression
– KNNs K-nearest neighbour
– RMSE Root mean square error
– MAE Mean absolute error
– CA Classification accuracy
– Pre Precision
– F1 F1-score
– PACF Partial autocorrelation function
– ACF Autocorrelation function
– PCA Principal component analysis
– NOAA National Oceanic and Atmospheric Administration
B Appendix: Summary Tables for References

This appendix contains four tables which summarize the findings for the reviewed papers: Tables 1 and 2 cover long term
data, and Tables 3 and 4 cover short term data. Tables 1 and 3 contain information regarding the source, period, region,
input, and output, while Tables 2 and 4 include information about the pre-processing tools, data leakage, and the ML models used.

No. Source Period Region Input Output Ref


1 China Meteorological Administration 1916-2015 China 6 climatic indices Seasonal regression [28]
(CMA)
2 Indian Institute of Tropical Meteorology 1817-2016 India 8 past lags Seasonal regression [23]
(IITM)
3 Romanian rainfall 1991-2015 Romania 12 past lags Monthly regression [20]
4 Rainfall from the India Water Portal 1901-2002 India 11 climatic parameters Monthly regression [41]
5 Tuticorin meteorological station 1980-2002 India Four climatic parameters Monthly regression [63]
6 Malaysian Department of Irrigation and 1965-2015 Malaysia 10 past lags Monthly regression [74]
Drainage
7 National Cartographic Center of Iran (NCC) 1996-2010 Iran Four climatic parameters Monthly regression [4]
8 Royal Netherlands Meteorological Institute 2004-2014 Australia Seven climatic indices Monthly regression [1]
Climate Explorer
9 Indian water portal 1901-2000 India four climatic parameters Monthly regression [64]
10 Serbian meteorological stations 1946-2012 Serbia past rainfall lags Monthly regression [67]
11 Iran meteorological department 2000-2010 Iran Two Climatic parameters Monthly regression [8]
12 Iran meteorological department 1990-2014 Iran four past lags Monthly regression [49]
13 CHIRPS, and NCEP-NCAR Reanalysis 1918-2001 Indus basin 5 climatic features Monthly regression [22]
14 World AgroMeteorological Information Ser- 1966-2017 South Korea 11 climatic indices Monthly regression [42]
vice (WAMIS) and NOAA
15 Malaysian Department of Irrigation and 1950-2010 Malaysia 6 past lags and time stamp Monthly regression [10]
Drainage
16 Turkish stations 2007-2016 Turkey 3 rainfall lags Monthly regression [55]
17 Australian stations 1885-2014 Australia 10 climatic indices and parame- Monthly regression [2]
ters
18 Indian Meteorological Department 1871-2016 India 12 past lags Monthly regression [40]
19 Bureau of Meteorology (BOM), Royal 1908-2012 Australia 43 climatic indices and parame- Monthly regression [31]
Netherlands Meteorological Institute Cli- ters
mate, more
20 Vietnam’s hydrological gauging 1971-2010 Vietnam 12 features Monthly regression [29]
21 Global Precipitation Climatology Center 1901-2013 China 6-9 climatic indices and param- Monthly regression [81]
(GPCC) eters
22 Precipitation from NCEP 1979-2018 GLOBAL 164 past lags Monthly regression [5]
23 National Center of Hydrology and Meteorol- 1997-2015 Bhutan 6 climate parameters Monthly regression [14]
ogy Department (NCHM)
24 Taiwan Water Resource Bureau 1958–2018 Taiwan 3 past lags Monthly regression [11]
25 Instituto de Hidrologı́a, Meterologı́a y Estu- 1983-2016 Colombia 6 past lags Monthly regression [19]
dios Ambientales (IDEAM) of Colombia
26 Islamic Republic of Iran Meteorological Or- 1981- 2012 Iran 5 past lags Monthly regression [48]
ganization (IRIMO)
27 Pluak Daeng Station in Thailand 1991-2016 Thailand 346 climatic indices and param- Monthly regression [80]
eters
28 National Climate Center of China Meteoro- 1952-2012 China 84 climatic indices Yearly Classification [30]
logical Administration (NCC-CMA)
29 The Department of Agricultural Meteorol- 2011-2013 India five climatic parameters Monthly Classification [50]
ogy Indira
30 meteorological stations of the island of 1976 - 2016 Tenerife Is- 12 climatic indices and parame- Monthly Classification [3]
Tenerife and NOAA databases land ters
Table 1: Data sources, spatio-temporal coverage, inputs and out-
puts, and references for long-term predictive studies
No. Pre-processing Data leakage ML used Ref
1 Normalization, Random shuffling, feature Random shuffling PCA-ANN, PCA-MLR [28]
correlation
2 Normalization no Knns, ANNs, ELM [23]
3 Windowing no Knns, ARIMA, ANNs [20]
4 windowing, random shuffling Random shuffling ANN, ARMA, LR [41]
5 Data imputation, noise removal, correlation Using current lags DT, ANNs [63]
analysis
6 Normalization, and data imputation Imputation ANNs, ARIMA [74]
7 Normalization, Decomposition no WTANN, ANNs [4]
8 features correlation no ANNs, POAMA [1]
9 Normalization no Different ANNs [64]
10 N/A no ANN, WT-SVM, GP [67]
11 Normalization, optimization no AD-MLP, AD-SVM, DT [8]
12 correlation analysis (PACF), square De-seasonalization SVR, AD-SVR, more [49]
root transformation, standardization,
de-seasonalization
13 feature correlation, random shuffling no MLP, SVR, MLR, RF, Knns [22]
14 feature correlation, random shuffling Random shuffling ANNs [42]
15 Decomposition Random shuffling AD-MLP [10]
16 normalization no Ensemble method, SVM, [55]
ANNS, more
17 Feature selection no ANNs, POAMA [2]
18 Feature correlation, windowing N/A LSTM, RNN [40]
19 Data imputation, normalization Imputation 1D-CNN, MLP, baseline [31]
(ACCESS-S1)
20 random shuffling Random shuffling MLP, LSTM, SNN [29]
21 Normalization, Wavelet no MLR, MLP, LSTM, SVMs, [81]
ConvLSTMs, ensemble meth-
ods
22 greyscale, windowing no LSTM, ConvNet [5]
23 Normalization, data imputation Imputation MLR , AD-LSTM, LSTM, MLP [14]
24 Decomposition Imputation UD-RF, RF, UD-SVR, SVR [11]
25 Imputation, de-seasonlization Imputation, de- 3 AD-ANNs models [19]
seasonalization
26 Normalization no ANNs, AD-ANNs, AD-gene ex- [48]
pression programming
27 N/A no DNNs [80]
28 feature correlation, feature reduction Random shuffling MLogR [30]
29 Clustering Random shuffling GPR, DT, NB [50]
30 Random Down sampling, feature correlation Random shuffling XGB, RF, more [3]
Table 2: Pre-processing, data leakage characteristics, machine
learning algorithms used, and reference numbers for long term pre-
dictive studies

No. Source Period Region Input Output Ref


1 Indian Statistical Institute 1989-1995 Multiple re- 10 climatic parameters Daily Regression [61]
gions
2 Vietnamese stations 1978-2016 Vietnam previous lags Daily Regression [62]
3 Meteoblue Data , MODIS, and more 2012-2014 Colombia 12 climatic indices and parame- Hourly Regression [78]
ters
4 Central Meteorological Observatory of 2015-2017 China 24 climatic parameters Hourly Regression [85]
Shanghai
5 China Meteorological Administration 2015-2017 China 13 climatic parameters Hourly Regression [86]
6 Taiwan and the National Severe Storms Lab- 2012-2015 Taiwan 3-4 parameters Hourly Regression [82]
oratory and NOAA
7 Meteorological Drainage and the Irrigation 2010-2014 Malaysia 4 parameters Daily Classification [83]
departments in Malaysia
8 The water planning and managing agency for 1979-2015 Spain 1800 parameters Daily Classification [25]
Tenerife Island, and NOAA
9 U.S. Government’s open data 2010-2017 US 25 parameters Daily Classification [72]
10 Kaggle and the australian government 2008-2017 Australia 23 parameters Daily Classification [56]
11 Indian Meteorological Department 2008-2017 India 8 parameters Daily Classification [7]
12 satellite imagery data are from FY-2G, and 2015 China 8 parameters Hourly Classification [17]
meteorological station located in Shenzhen
13 Data from the Nanjing Station N/A China 6 parameters Hourly Classification [27]
14 Singapore related weather stations 2012-2015 Singapore 15 climatic parameters Min Classification [47]
15 Japan Meteorological Agency 2000-2012 Japan 8 features Min Classification [39]
16 NCEP-NCAR and Beijing Meteorological 1990-2012 China 6 climatic indices and parame- Daily Classification [33]
station ters
17 Radar images collected in Hong Kong 2011-2013 Hong Kong 5 frames Min Classification [69]
18 Radar images from USA from 2008-2015 2008-2015 US 10 frames Min Classification [73]
19 Radar images from National Meteorological 2016-2017 China 10 frames Min Classification [37]
Information Center
20 Radar images are retrieved using Yahoo! 2013-2017 Japan 10 frames Min Classification [65]
Static Map API
21 Radar images from the German Weather 2006-2017 Germany 2 frames Min Both [6]
Service (DWD)
22 Weather Surveillance Radar-1988 Doppler 2015-2018 China 20 frames Min Classification [18]
Radar (WSR-88D)
23 CIKM AnalytiCup 2017 competition N/A China 5 frames Min Regression [76]
24 CINRAD-SA type Doppler weather radar 2016 China 4 frames Min Classification [68]
25 CHIRPS 1918-2019 China 5 frames Daily Regression [15]
26 Radar images collected in Hong Kong 2011-2013 China 10 frames Min Regression [79]
27 CIKM AnalytiCup 2017 competition N/A China 7 frames Min Both [77]
28 dataset from HKO-7 2009-2015 Hong Kong 5 frames Min Regression [70]
29 NCEP, and NOAA 1979-2017 US A tensor of 8 × 4 × 25 × 25 Daily Regression [57]
30 China meteorological data network N/A China 7 climatic parameters Hourly Regression [26]
31 NOAA 1800-2017 US 30 climatic parameters Hourly Both [84]
32 Large Ensemble (LENS) community project 1920-2005 US 3 × 28 × 28 × 3 Hourly Classification [16]
33 Kaggle 2012-2017 US and In- 120 climatic lags Hourly Classification [60]
dia
34 Iowa state 1948-2010 USA 9 climatic parameters Daily Classification [87]
35 Meteorological Department of Thailand and 2017-2017 Thailand one image Daily Classification [12]
the Petroleum Authority of Thailand
36 Meteorological Department of Thailand and 2017-2018 Thailand one and batch of images Daily Classification [13]
the Petroleum Authority of Thailand
Table 3: Data sources, spatio-temporal coverage, inputs and out-
puts, and references for short-term predictive studies
No. Pre-processing Data leakage ML used Ref
1 normalization, cross validation, feature re- no AD-ELM [61]
duction (PCA)
2 normalization, feature correlation no ARIMA-MLP, ARIMA-SVM, [62]
ARIMA-HW, ARIMA-NF,
more
3 data imputation, data shuffling Random shuffling RF, Cubist [78]
4 feature selection, correlation analysis, Inter- no LSTM, MLR, SVMs, ECM- [85]
polation, clustering FWF
5 normalization, feature reduction (PCA) no DRCF, ARIMA, more [86]
6 N/A no RF, SVM [82]
7 Normalization, data imputation, shuffling Data imputation, Ran- SVM, RF, DT, NB, ANN [83]
dom shuffling
8 Feature reduction (PCA) no ANNs, RF, Knns, LogR [25]
9 Feature selection (RF), k-fold cross valida- no RF, AD [ANNs, Adaboost, [72]
tion SVM, KNN]
10 Feature selection, Feature correlation, data Imputation LogR, DT, Knns, more [56]
imputation, over, and down sampling
11 N/A no LogReg, DT, RF, more [7]
12 Radiometric, and geometric correction, and no SVM [17]
windowing
13 Normalization, random shuffling Random shuffling AD-SVMs [27]
14 Down-sampling, feature correlation Random shuffling SVM [47]
15 outliers removal, normalization no MLP, RBFN [39]
16 Normalization no Knns [33]
17 Feature reduction, noise removal, windowing no ConvLSTM, FC-LSTM, more [69]
18 Resizing, windowing no Eulerian persistence, AD-Conv- [73]
RNN, Conv-LSTM
19 Feature reduction, windowing no MLC-LSTM, ConvLSTM, more [37]
20 Feature reduction, windowing no SDPredNet, TrajGRU, more [65]
21 logarithmic transformation no Optical flow, Dozhdya.Net [6]
22 Noise removal, remove corrupted images, no COTREC, ConvLSTM, AD- [18]
windowing, Normalization ConvLSTM, more
23 Normalization, windowing no Last frame, TrajGRU, ConvL- [76]
STM, AD-TrajGRU, more
24 Windowing, greyscale transformation no Last input, COTREC, AD- [68]
CNN
25 Windowing, grey-scale, resizing no ConvLSTM, AD-ConvLSTMs [15]
26 Windowing, grey-scale, resizing no ConvLSTM, PredRNN, VPN- [79]
baseline
27 Windowing, grey-scale, resizing, data aug- no ConvLSTM, ConvGRU, Traj- [77]
mentation GRU, PredRNN, PredRNN++,
last frame
28 Windowing, grey-scale, noise removal, nor- no 2D CNN, 3D CNN, ConvGRU, [70]
malization TrajGRU, last frame, more
29 Normalization, random shuffling Random shuffling LR, CNNs, base model (NARR) [57]
30 Random shuffling and Normalization Random shuffling DBN, GA-SVM, more [26]
31 N/A no CNN, LPBoost, more [84]
32 clustering, down-sampling, random shuffling Random shuffling CNN, LogReg [16]
33 normalization no CNN, LSTM [60]
34 cropping no CNN [87]
35 cropping N/A CNN [12]
36 cropping N/A CNN [13]
Table 4: Pre-processing, data leakage characteristics, machine
learning algorithms used, and reference numbers for short term
predictive studies
References
1. John Abbot and Jennifer Marohasy. Forecasting monthly rainfall in the western
australian wheat-belt up to 18-months in advance using artificial neural networks.
In Australasian Joint Conference on Artificial Intelligence, pages 71–87. Springer,
2016.
2. John Abbot and Jennifer Marohasy. Application of artificial neural networks to
forecasting monthly rainfall one year in advance for locations within the murray
darling basin, australia. International Journal of Sustainable Development and
Planning, 12(8):1282–1298, 2017.
3. Ricardo Aguasca-Colomo, Dagoberto Castellanos-Nieves, and Máximo Méndez.
Comparative analysis of rainfall prediction models using machine learning in is-
lands with complex orography: Tenerife island. Applied Sciences, 9(22):4931, 2019.
4. Mohammad Arab Amiri, Yazdan Amerian, and Mohammad Saadi Mesgari. Spatial
and temporal monthly precipitation forecasting using wavelet transform and neural
networks, qara-qum catchment, iran. Arabian Journal of Geosciences, 9(5):421,
2016.
5. S Aswin, P Geetha, and R Vinayakumar. Deep learning models for the predic-
tion of rainfall. In 2018 International Conference on Communication and Signal
Processing (ICCSP), pages 0657–0661. IEEE, 2018.
6. G Ayzel, M Heistermann, A Sorokin, O Nikitin, and O Lukyanova. All convolu-
tional neural networks for radar-based precipitation nowcasting. Procedia Com-
puter Science, 150:186–192, 2019.
7. MS Balamurugan and R Manojkumar. Study of short term rain forecasting using
machine learning based approach. Wireless Networks, pages 1–6, 2019.
8. Fatemeh Barzegari Banadkooki, Mohammad Ehteram, Ali Najah Ahmed,
Chow Ming Fai, Haitham Abdulmohsin Afan, Wani M Ridwam, Ahmed Sefelnasr,
and Ahmed El-Shafie. Precipitation forecasting using multilayer neural network
and support vector machine optimization based on flow regime algorithm taking
into account uncertainties of soft computing models. Sustainability, 11(23):6681,
2019.
9. Adrian G Barnett, Peter Baker, and Annette Dobson. Analysing seasonal data. R
Journal, 4(1):5–10, 2012.
10. Zahra Beheshti, Morteza Firouzi, Siti Mariyam Shamsuddin, Masoumeh Zibarzani,
and Zulkifli Yusop. A new rainfall forecasting model using the capso algorithm and
an artificial neural network. Neural Computing and Applications, 27(8):2551–2565,
2016.
11. Pa Ousman Bojang, Tao-Chang Yang, Quoc Bao Pham, and Pao-Shan Yu. Linking
singular spectrum analysis and machine learning for monthly rainfall forecasting.
Applied Sciences, 10(9):3224, 2020.
12. Kitinan Boonyuen, Phisan Kaewprapha, and Patchanok Srivihok. Daily rainfall
forecast model from satellite image using convolution neural network. In 2018
IEEE International Conference on Information Technology, pages 1–7, 2018.
13. Kitinan Boonyuen, Phisan Kaewprapha, Uruya Weesakul, and Patchanok Srivi-
hok. Convolutional neural network inception-v3: A machine learning approach for
leveling short-range rainfall forecast model from satellite image. In International
Conference on Swarm Intelligence, pages 105–115. Springer, 2019.
14. Teresita Canchala, Wilfredo Alfonso-Morales, Yesid Carvajal-Escobar, Wilmar L
Cerón, and Eduardo Caicedo-Bravo. Monthly rainfall anomalies forecasting
for southwestern colombia using artificial neural networks approaches. Water,
12(9):2628, 2020.
15. Rafaela Castro, Yania M Souto, Eduardo Ogasawara, Fabio Porto, and Eduardo
Bezerra. Stconvs2s: Spatiotemporal convolutional sequence to sequence network
for weather forecasting. Neurocomputing, 2020.
16. Ashesh Chattopadhyay, Pedram Hassanzadeh, and Saba Pasha. Predicting clus-
tered weather patterns: A test case for applications of convolutional neural net-
works to spatio-temporal climate data. Scientific Reports, 10(1):1–13, 2020.
17. Kai Chen, Jun Liu, Shanxin Guo, Jinsong Chen, Ping Liu, Jing Qian, Huijuan
Chen, and Bo Sun. Short-term precipitation occurrence prediction for strong con-
vective weather using fy2-g satellite data: a case study of shenzhen, south china.
The International Archives of Photogrammetry, Remote Sensing and Spatial In-
formation Sciences, 41:215, 2016.
18. Lei Chen, Yuan Cao, Leiming Ma, and Junping Zhang. A deep learning based
methodology for precipitation nowcasting with radar. Earth and Space Science,
page e2019EA000812, 2020.
19. Manoj Chhetri, Sudhanshu Kumar, Partha Pratim Roy, and Byung-Gyu Kim.
Deep blstm-gru model for monthly rainfall prediction: A case study of simtokha,
bhutan. Remote Sensing, 12(19):3174, 2020.
20. Marinoiu Cristian et al. Average monthly rainfall forecast in romania by using k-
nearest neighbors regression. Analele Universităţii Constantin Brâncuşi din Târgu
Jiu: Seria Economie, 1(4):5–12, 2018.
21. Maurizio Ferrari Dacrema, Paolo Cremonesi, and Dietmar Jannach. Are we really
making much progress? a worrying analysis of recent neural recommendation ap-
proaches. In Proceedings of the 13th ACM Conference on Recommender Systems,
pages 101–109, 2019.
22. Hamidreza Ghasemi Damavandi and Reepal Shah. A learning framework for an
accurate prediction of rainfall rates. arXiv preprint arXiv:1901.05885, 2019.
23. Yajnaseni Dash, Saroj K Mishra, and Bijaya K Panigrahi. Rainfall prediction
for the kerala state of india using artificial intelligence approaches. Computers &
Electrical Engineering, 70:66–73, 2018.
24. Jacques W Delleur and M Levent Kavvas. Stochastic models for monthly rainfall
forecasting and synthetic generation. Journal of Applied Meteorology, 17(10):1528–
1536, 1978.
25. Javier Diez-Sierra and Manuel del Jesus. Long-term rainfall prediction using at-
mospheric synoptic patterns in semi-arid climates with statistical and machine
learning methods. Journal of Hydrology, page 124789, 2020.
26. Jinglin Du, Yayun Liu, and Zhijun Liu. Study of precipitation forecast based on
deep belief networks. Algorithms, 11(9):132, 2018.
27. Jinglin Du, Yayun Liu, Yanan Yu, and Weilan Yan. A prediction of precipitation
data based on support vector machine and particle swarm optimization (pso-svm)
algorithms. Algorithms, 10(2):57, 2017.
28. Yiheng Du, Ronny Berndtsson, Dong An, Linus Zhang, Feifei Yuan, Cintia Bertac-
chi Uvo, and Zhenchun Hao. Multi-space seasonal precipitation prediction model
applied to the source region of the yangtze river, china. Water, 11(12):2440, 2019.
29. Tran Anh Duong, Minh Duc Bui, and Peter Rutschmann. A comparative
study of three different models to predict monthly rainfall in ca mau, viet-
nam. In Wasserbau-Symposium Graz 2018. Wasserwirtschaft–Innovation aus Tra-
dition. Tagungsband. Beiträge zum 19. Gemeinschafts-Symposium der Wasserbau-
Institute TU München, TU Graz und ETH Zürich, pages Paper–G5, 2018.
30. Lihao Gao, Fengying Wei, Zhongwei Yan, Jin Ma, and Jiangjiang Xia. A study
of objective prediction for summer precipitation patterns over eastern china based
on a multinomial logistic regression model. Atmosphere, 10(4):213, 2019.
31. Ali Haidar and Brijesh Verma. Monthly rainfall forecasting using one-dimensional
deep convolutional neural network. IEEE Access, 6:69053–69063, 2018.
32. Kyaw Kyaw Htike and Othman O Khalifa. Rainfall forecasting models using fo-
cused time-delay neural networks. In International Conference on Computer and
Communication Engineering (ICCCE’10), pages 1–6. IEEE, 2010.
33. Mingming Huang, Runsheng Lin, Shuai Huang, and Tengfei Xing. A novel ap-
proach for precipitation forecast via improved k-nearest neighbor algorithm. Ad-
vanced Engineering Informatics, 33:89–95, 2017.
34. Nguyen Q Hung, Mukand S Babel, S Weesakul, and NK Tripathi. An artificial
neural network model for rainfall forecasting in bangkok, thailand. Hydrology and
Earth System Sciences, 13(8):1413–1425, 2009.
35. Eslam Hussein, Mehrdad Ghaziasgar, and Christopher Thron. Regional rainfall
prediction using support vector machine classification of large-scale precipitation
maps. In 2020 IEEE 23rd International Conference on Information Fusion (FU-
SION), pages 1–8. IEEE, 2020.
36. Eslam A. Hussein, Mehrdad Ghaziasgar, Christopher Thron, Mattia Vaccari, and
Antoine Bagula. Basic statistical estimation outperforms machine learning in
monthly prediction of seasonal climatic parameters. Atmosphere, 12(5), 2021.
37. Jinrui Jing, Qian Li, and Xuan Peng. Mlc-lstm: Exploiting the spatiotemporal cor-
relation between multi-level weather radar echoes for echo sequence extrapolation.
Sensors, 19(18):3988, 2019.
38. Hassan A Karimi. Big Data: techniques and technologies in geoinformatics. Crc
Press, 2014.
39. Tomoaki Kashiwao, Koichi Nakayama, Shin Ando, Kenji Ikeda, Moonyong Lee,
and Alireza Bahadori. A neural network-based local rainfall prediction system
using meteorological data on the internet: A case study using data from the japan
meteorological agency. Applied Soft Computing, 56:317–330, 2017.
40. Deepak Kumar, Anshuman Singh, Pijush Samui, and Rishi Kumar Jha. Forecasting
monthly precipitation using sequential modelling. Hydrological sciences journal,
64(6):690–700, 2019.
41. K Lakshmaiah, S Murali Krishna, and B Eswara Reddy. Application of referen-
tial ensemble learning techniques to predict the density of rainfall. In 2016 In-
ternational Conference on Electrical, Electronics, Communication, Computer and
Optimization Techniques (ICEECCOT), pages 233–237. IEEE, 2016.
42. Jeongwoo Lee, Chul-Gyum Kim, Jeong Eun Lee, Nam Won Kim, and Hyeonjun
Kim. Application of artificial neural networks to rainfall forecasting in the geum
river basin, korea. Water, 10(10):1448, 2018.
43. Jimmy Lin. The neural hype and comparisons against weak baselines. In ACM
SIGIR Forum, volume 52, pages 40–51. ACM New York, NY, USA, 2019.
44. Jing Lu, Wei Hu, and Xiakun Zhang. Precipitation data assimilation system based
on a neural network and case-based reasoning system. Information, 9(5):106, 2018.
45. Malte Ludewig and Dietmar Jannach. Evaluation of session-based recommendation
algorithms. User Modeling and User-Adapted Interaction, 28(4-5):331–390, 2018.
46. M Mallika and M Nirmala. Chennai annual rainfall prediction using k-nearest
neighbour technique. International Journal of Pure and Applied Mathematics,
109(8):115–120, 2016.
47. Shilpa Manandhar, Soumyabrata Dev, Yee Hui Lee, Yu Song Meng, and Stefan
Winkler. A data-driven approach for accurate rainfall prediction. IEEE Transac-
tions on Geoscience and Remote Sensing, 57(11):9323–9331, 2019.
48. Saeid Mehdizadeh, Javad Behmanesh, and Keivan Khalili. New approaches for
estimation of monthly rainfall based on gep-arch and ann-arch hybrid models.
Water resources management, 32(2):527–545, 2018.
49. A Danandeh Mehr, Vahid Nourani, V Karimi Khosrowshahi, and Moahmmad Ali
Ghorbani. A hybrid support vector regression–firefly model for monthly rain-
fall forecasting. International Journal of Environmental Science and Technology,
16(1):335–346, 2019.
50. Niharika Mishra and Ajay Kushwaha. Rainfall prediction using gaussian process
regression classifier. International Journal of Advanced Research in Computer En-
gineering & Technology (IJARCET), 8(8), 2019.
51. S Mohamadi, M Ehteram, and A El-Shafie. Accuracy enhancement for monthly
evaporation predicting model utilizing evolutionary machine learning methods. In-
ternational Journal of Environmental Science and Technology, pages 1–24, 2020.
52. Amir Mosavi, Pinar Ozturk, and Kwok-wing Chau. Flood prediction using machine
learning models: Literature review. Water, 10(11):1536, 2018.
53. Mohsen Nasseri, Keyvan Asghari, and MJ Abedini. Optimized scenario for rainfall
forecasting using genetic algorithm coupled with artificial neural network. Expert
systems with applications, 35(3):1415–1421, 2008.
54. Aileen Nielsen. Practical time series analysis: prediction with statistics and ma-
chine learning. O’Reilly, 2020.
55. Vahid Nourani, Selin Uzelaltinbulat, Fahreddin Sadikoglu, and Nazanin Behfar.
Artificial intelligence based ensemble modeling for multi-station prediction of pre-
cipitation. Atmosphere, 10(2):80, 2019.
56. Nikhil Oswal. Predicting rainfall using machine learning techniques. arXiv preprint
arXiv:1910.13827, 2019.
57. Baoxiang Pan, Kuolin Hsu, Amir AghaKouchak, and Soroosh Sorooshian. Improv-
ing precipitation estimation using convolutional neural network. Water Resources
Research, 55(3):2301–2321, 2019.
58. Adam Pantanowitz and Tshilidzi Marwala. Missing data imputation through the
use of the random forest algorithm. In Advances in Computational Intelligence,
pages 53–62. Springer, 2009.
59. Aakash Parmar, Kinjal Mistree, and Mithila Sompura. Machine learning techniques
for rainfall prediction: A review. In International Conference on Innovations in
information Embedded and Communication Systems, 2017.
60. Maitreya Patel, Anery Patel, Dr Ghosh, et al. Precipitation nowcasting: Leveraging
bidirectional lstm and 1d cnn. arXiv preprint arXiv:1810.10485, 2018.
61. Yuzhong Peng, Huasheng Zhao, Hao Zhang, Wenwei Li, Xiao Qin, Jianping Liao,
Zhiping Liu, and Jie Li. An extreme learning machine and gene expression
programming-based hybrid model for daily precipitation prediction. International
Journal of Computational Intelligence Systems, 12(2):1512–1525, 2019.
62. Quoc Bao Pham, Sani Isah Abba, Abdullahi Garba Usman, Nguyen Thi Thuy
Linh, Vivek Gupta, Anurag Malik, Romulus Costache, Ngoc Duong Vo, and
Doan Quang Tri. Potential of hybrid data-intelligence algorithms for multi-station
modelling of rainfall. Water Resources Management, 33(15):5067–5087, 2019.
63. N Ramsundram, S Sathya, and S Karthikeyan. Comparison of decision tree based
rainfall prediction model with data driven model considering climatic variables.
Irrigation and Drainage Systems Engineering, 2016.
64. Kaushik D Sardeshpande and Vijaya R Thool. Rainfall prediction: A comparative
study of neural network architectures. In Emerging Technologies in Data Mining
and Information Security, pages 19–28. Springer, 2019.
65. Ryoma Sato, Hisashi Kashima, and Takehiro Yamamoto. Short-term precipitation
prediction with skip-connected prednet. In International Conference on Artificial
Neural Networks, pages 373–382. Springer, 2018.
66. Anoop D Shah, Jonathan W Bartlett, James Carpenter, Owen Nicholas, and Harry
Hemingway. Comparison of random forest and parametric imputation models for
imputing missing data using mice: a caliber study. American journal of epidemi-
ology, 179(6):764–774, 2014.
67. Mohamed Shenify, Amir Seyed Danesh, Milan Gocić, Ros Surya Taher, Ainuddin
Wahid Abdul Wahab, Abdullah Gani, Shahaboddin Shamshirband, and Dalibor
Petković. Precipitation estimation using support vector machine with discrete
wavelet transform. Water resources management, 30(2):641–652, 2016.
68. En Shi, Qian Li, Daquan Gu, and Zhangming Zhao. Convolutional neural networks
applied on weather radar echo extrapolation. DEStech Transactions on Computer
Science and Engineering, (csae), 2017.
69. Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-kin Wong, and
Wang-chun Woo. Convolutional LSTM network: A machine learning approach for
precipitation nowcasting. arXiv preprint arXiv:1506.04214, 2015.
70. Xingjian Shi, Zhihan Gao, Leonard Lausen, Hao Wang, Dit-Yan Yeung, Wai-kin
Wong, and Wang-chun Woo. Deep learning for precipitation nowcasting: A bench-
mark and a new model. In Advances in Neural Information Processing Systems,
pages 5617–5627, 2017.
71. Xingjian Shi and Dit-Yan Yeung. Machine learning for spatiotemporal sequence
forecasting: A survey. arXiv preprint arXiv:1808.06865, 2018.
72. Gurpreet Singh and Deepak Kumar. Hybrid prediction models for rainfall fore-
casting. In 2019 9th International Conference on Cloud Computing, Data Science
& Engineering (Confluence), pages 392–396. IEEE, 2019.
73. Sonam Singh, Sudeshna Sarkar, and Pabitra Mitra. Leveraging convolutions in re-
current neural networks for doppler weather radar echo prediction. In International
Symposium on Neural Networks, pages 310–317. Springer, 2017.
74. Junaida Sulaiman and Siti Hajar Wahab. Heavy rainfall forecasting model using
artificial neural network for flood prone area. In IT Convergence and Security
2017, pages 68–76. Springer, 2018.
75. Fei Tang and Hemant Ishwaran. Random forest missing data algorithms. Statistical
Analysis and Data Mining: The ASA Data Science Journal, 10(6):363–377, 2017.
76. Quang-Khai Tran and Sa-kwang Song. Computer vision in precipitation nowcast-
ing: Applying image quality assessment metrics for training deep neural networks.
Atmosphere, 10(5):244, 2019.
77. Quang-Khai Tran and Sa-kwang Song. Multi-channel weather radar echo extrapo-
lation with convolutional recurrent neural networks. Remote Sensing, 11(19):2303,
2019.
78. Cristian Valencia-Payan and Juan Carlos Corrales. A rainfall prediction tool for
sustainable agriculture using random forest. In Mexican International Conference
on Artificial Intelligence, pages 315–326. Springer, 2018.
79. Yunbo Wang, Mingsheng Long, Jianmin Wang, Zhifeng Gao, and S Yu Philip.
Predrnn: Recurrent neural networks for predictive learning using spatiotemporal
lstms. In Advances in Neural Information Processing Systems, pages 879–888,
2017.
80. Uruya Weesakul, Phisan Kaewprapha, Kitinan Boonyuen, and Ole Mark. Deep
learning neural network: A machine learning approach for monthly rainfall forecast,
case study in eastern region of thailand. Engineering and Applied Science Research,
45(3):203–211, 2018.
81. Lei Xu, Nengcheng Chen, Xiang Zhang, and Zeqiang Chen. A data-driven multi-
model ensemble for deterministic and probabilistic precipitation forecasting at sea-
sonal scale. Climate Dynamics, pages 1–20, 2020.
82. Pao-Shan Yu, Tao-Chang Yang, Szu-Yin Chen, Chen-Min Kuo, and Hung-Wei
Tseng. Comparison of random forests and support vector machine for real-time
radar-derived rainfall forecasting. Journal of Hydrology, 552:92–104, 2017.
83. Suhaila Zainudin, Dalia Sami Jasim, and Azuraliza Abu Bakar. Comparative anal-
ysis of data mining techniques for malaysian rainfall prediction. International
Journal on Advanced Science, Engineering and Information Technology, 6(6):1148–
1153, 2016.
84. Choujun Zhan, Fujian Wu, Zhengdong Wu, and Chi K Tse. Daily rainfall data
construction and application to weather prediction. In 2019 IEEE International
Symposium on Circuits and Systems (ISCAS), pages 1–5. IEEE, 2019.
85. Chang-Jiang Zhang, Jing Zeng, Hui-Yuan Wang, Lei-Ming Ma, and Hai Chu. Cor-
rection model for rainfall forecasts using the lstm with multiple meteorological
factors. Meteorological Applications, 27(1):e1852, 2020.
86. Pengcheng Zhang, Yangyang Jia, Jerry Gao, Wei Song, and Hareton KN Leung.
Short-term rainfall forecasting using multi-layer perceptron. IEEE Transactions
on Big Data, 2018.
87. WY Zhuang and Wei Ding. Long-lead prediction of extreme precipitation clus-
ter via a spatiotemporal convolutional neural network. In Proceedings of the 6th
International Workshop on Climate Informatics: CI, 2016.