Artificial Neural Networks

Ivan Nunes da Silva
Danilo Hernane Spatti
Rogerio Andrade Flauzino
Luisa Helena Bartocci Liboni
Silas Franco dos Reis Alves

Artificial Neural Networks


A Practical Course

Ivan Nunes da Silva
Department of Electrical and Computer Engineering, USP/EESC/SEL
University of São Paulo
São Carlos, São Paulo, Brazil

Danilo Hernane Spatti
Department of Electrical and Computer Engineering, USP/EESC/SEL
University of São Paulo
São Carlos, São Paulo, Brazil

Rogerio Andrade Flauzino
Department of Electrical and Computer Engineering, USP/EESC/SEL
University of São Paulo
São Carlos, São Paulo, Brazil

Luisa Helena Bartocci Liboni
Department of Electrical and Computer Engineering, USP/EESC/SEL
University of São Paulo
São Carlos, São Paulo, Brazil

Silas Franco dos Reis Alves
Department of Electrical and Computer Engineering, USP/EESC/SEL
University of São Paulo
São Carlos, São Paulo, Brazil

ISBN 978-3-319-43161-1
ISBN 978-3-319-43162-8 (eBook)


DOI 10.1007/978-3-319-43162-8

Library of Congress Control Number: 2016947754

© Springer International Publishing Switzerland 2017


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained herein or
for any errors or omissions that may have been made.

Printed on acid-free paper

This Springer imprint is published by Springer Nature


The registered company is Springer International Publishing AG Switzerland
To Sileide, Karla, Solange,
Raphael and Elen
Preface

What are artificial neural networks? What is their purpose? What are their potential
practical applications? What kind of problems can they solve?
With these questions in mind, this book was written with the primary concern of
answering readers with different profiles, from those interested in acquiring
knowledge about artificial neural network architectures to those motivated by their
many practical applications for solving real-world problems.
This book's audience is multidisciplinary, as confirmed by the numerous
exercises and examples addressed here. It explores different knowledge areas, such
as engineering, computer science, mathematics, physics, economics, finance,
statistics, and neurosciences. Additionally, this book should also interest readers
from many other areas in which artificial neural networks have found application,
such as medicine, psychology, chemistry, pharmaceutical sciences, biology,
ecology, geology, and so on.
Regarding the academic approach of this book and its audience, the chapters
were tailored to discuss, step by step, the thematic concepts, covering a broad
range of technical and theoretical information. Therefore, besides meeting the
professional audience's desire to begin or deepen their study of artificial neural
networks and their potential applications, this book is intended to be used as a
textbook for undergraduate and graduate courses that address the subject of
artificial neural networks in their syllabus.
Furthermore, the text was composed in accessible language so that it can be
read by professionals, students, researchers, and autodidacts, as a straightforward
and independent guide for learning basic and advanced subjects related to artificial
neural networks. To this end, the prerequisites for understanding this book's content
are basic, requiring only elementary knowledge of linear algebra, algorithms,
and calculus.
The first part of this book (Chaps. 1–10), which is intended for those readers
who want to begin or deepen their theoretical investigation of artificial neural
networks, addresses the fundamental architectures that can be implemented in
several application scenarios.


The second part of this book (Chaps. 11–20) was created particularly to present
solutions that employ artificial neural networks for solving practical problems
from different knowledge areas. It describes several development details considered
in achieving the described results. This aspect helps to mature and improve the
reader's knowledge of the techniques for specifying the most appropriate
artificial neural network architecture for a given application.

São Carlos, Brazil
Ivan Nunes da Silva
Danilo Hernane Spatti
Rogerio Andrade Flauzino
Luisa Helena Bartocci Liboni
Silas Franco dos Reis Alves
Organization

This book was carefully created with the mission of presenting an objective,
friendly, accessible, and illustrative text whose fundamental concern is, in fact, its
didactic format. The book's organization in two parts, along with more than
200 figures, eases knowledge building for different reader profiles. The bibliography,
composed of more than 170 references, is the foundation for the themes covered in
this book, and the subjects are also placed in an up-to-date context.
The first part of the book (Part I), divided into ten chapters, covers the theoretical
features of the main artificial neural architectures, including the Perceptron,
Adaptive Linear Element (ADALINE), Multilayer Perceptron, Radial Basis
Function (RBF), Hopfield, Kohonen, Learning Vector Quantization (LVQ),
Counter-Propagation, and Adaptive Resonance Theory (ART). In each of these
chapters from Part I, a section with exercises was inserted so that the reader can
progressively evaluate the knowledge acquired from the numerous subjects
addressed in this book.
Furthermore, the chapters that address the different architectures of neural
networks also provide sections discussing hands-on projects, whose content
assists the reader with experimental aspects of problems that use artificial
neural networks. Such activities also contribute to the development of practical
knowledge, which may help in specifying, parameterizing, and tuning neural
architectures.
The second part (Part II) addresses several applications, covering different
knowledge areas, whose solutions and implementations come from the neural
network architectures explored in Part I. The different applications addressed in this
part of the book reflect the potential applicability of neural networks to the solution
of problems from engineering and applied sciences. These applications intend to
guide the reader through different modeling and mapping strategies based on
artificial neural networks for solving problems of a diverse nature.


Several didactic materials related to this book, including figures, exercise tips,
and datasets for training neural networks (in table format), are also available at the
following Website:
http://laips.sel.eesc.usp.br/
In summary, this book will hopefully provide a pleasant and enjoyable reading
experience, thus contributing to the development of a wide range of both theoretical
and practical knowledge concerning the area of artificial neural networks.
Acknowledgments

The authors are immensely grateful to the many colleagues who contributed to the
realization of this compilation, providing precious suggestions that promptly helped
us in this important and noble work.
In particular, we would like to express our thanks to the following colleagues:
Alexandre C.N. de Oliveira, Anderson da Silva Soares, André L.V. da Silva,
Antonio V. Ortega, Débora M.B.S. de Souza, Edison A. Goes, Ednaldo J. Ferreira,
Eduardo A. Speranza, Fabiana C. Bertoni, José Alfredo C. Ulson, Juliano C.
Miranda, Lucia Valéria R. de Arruda, Marcelo Suetake, Matheus G. Pires,
Michelle M. Mendonça, Ricardo A.L. Rabêlo, Valmir Ziolkowski, Wagner C.
Amaral, Washington L.B. Melo and Wesley F. Usida.

Contents

Part I Architectures of Artificial Neural Networks and Their Theoretical Aspects
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1 Fundamental Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1.1 Key Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1.2 Historical Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.1.3 Potential Application Areas . . . . . . . . . . . . . . . . . . . . . . 8
1.2 Biological Neuron . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3 Artificial Neuron . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.3.1 Partially Differentiable Activation Functions . . . . . . . . . 13
1.3.2 Fully Differentiable Activation Functions . . . . . . . . . . . 15
1.4 Performance Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.5 Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2 Artificial Neural Network Architectures
and Training Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.2 Main Architectures of Artificial Neural Networks . . . . . . . . . . . . 21
2.2.1 Single-Layer Feedforward Architecture . . . . . . . . . . . . . 22
2.2.2 Multiple-Layer Feedforward Architectures . . . . . . . . . . 23
2.2.3 Recurrent or Feedback Architecture . . . . . . . . . . . . . . . 24
2.2.4 Mesh Architectures . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.3 Training Processes and Properties of Learning . . . . . . . . . . . . . . 25
2.3.1 Supervised Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.3.2 Unsupervised Learning . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.3.3 Reinforcement Learning . . . . . . . . . . . . . . . . . . . . . . . . 26
2.3.4 Offline Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.3.5 Online Learning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.4 Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28


3 The Perceptron Network. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29


3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.2 Operating Principle of the Perceptron . . . . . . . . . . . . . . . . . . . . . 30
3.3 Mathematical Analysis of the Perceptron . . . . . . . . . . . . . . . . . . 32
3.4 Training Process of the Perceptron . . . . . . . . . . . . . . . . . . . . . . . 33
3.5 Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.6 Practical Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4 The ADALINE Network and Delta Rule . . . . . . . . . . . . . . . . . . . . . . 41
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.2 Operating Principle of the ADALINE . . . . . . . . . . . . . . . . . . . . . 42
4.3 Training Process of the ADALINE . . . . . . . . . . . . . . . . . . . . . . . 44
4.4 Comparison Between the Training Processes of the Perceptron
and the ADALINE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.5 Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.6 Practical Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
5 Multilayer Perceptron Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
5.2 Operating Principle of the Multilayer Perceptron . . . . . . . . . . . . 56
5.3 Training Process of the Multilayer Perceptron . . . . . . . . . . . . . . 57
5.3.1 Deriving the Backpropagation Algorithm . . . . . . . . . . . 58
5.3.2 Implementing the Backpropagation Algorithm. . . . . . . . 69
5.3.3 Optimized Versions of the Backpropagation
Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5.4 Multilayer Perceptron Applications . . . . . . . . . . . . . . . . . . . . . . . 78
5.4.1 Problems of Pattern Classification . . . . . . . . . . . . . . . . . 78
5.4.2 Functional Approximation Problems
(Curve Fitting) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
5.4.3 Problems Involving Time-Variant Systems . . . . . . . . . . 90
5.5 Aspects of Topological Specifications for MLP Networks . . . . . 97
5.5.1 Aspects of Cross-Validation Methods . . . . . . . . . . . . . . 97
5.5.2 Aspects of the Training and Test Subsets . . . . . . . . . . . 101
5.5.3 Aspects of Overfitting and Underfitting Scenarios . . . . . 101
5.5.4 Aspects of Early Stopping . . . . . . . . . . . . . . . . . . . . . . . 103
5.5.5 Aspects of Convergence to Local Minima . . . . . . . . . . . 104
5.6 Implementation Aspects of Multilayer Perceptron Networks . . . . 105
5.7 Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
5.8 Practical Work 1 (Function Approximation) . . . . . . . . . . . . . . . . 110
5.9 Practical Work 2 (Pattern Classification). . . . . . . . . . . . . . . . . . . 112
5.10 Practical Work 3 (Time-Variant Systems) . . . . . . . . . . . . . . . . . . 114

6 Radial Basis Function Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
6.2 Training Process of the RBF Network . . . . . . . . . . . . . . . . . . . . . . 117
6.2.1 Adjustment of the Neurons from the Intermediate
Layer (Stage I) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
6.2.2 Adjustment of Neurons of the Output Layer
(Stage II) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
6.3 Applications of RBF Networks . . . . . . . . . . . . . . . . . . . . . . . . . . 125
6.4 Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
6.5 Practical Work 1 (Pattern Classification). . . . . . . . . . . . . . . . . . . 133
6.6 Practical Work 2 (Function Approximation) . . . . . . . . . . . . . . . . 135
7 Recurrent Hopfield Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
7.2 Operating Principles of the Hopfield Network . . . . . . . . . . . . . . 140
7.3 Stability Conditions of the Hopfield Network . . . . . . . . . . . . . . . 142
7.4 Associative Memories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
7.4.1 Outer Product Method . . . . . . . . . . . . . . . . . . . . . . . . . . 146
7.4.2 Pseudoinverse Matrix Method . . . . . . . . . . . . . . . . . . . . 147
7.4.3 Storage Capacity of Memories. . . . . . . . . . . . . . . . . . . . 148
7.5 Design Aspects of the Hopfield Network . . . . . . . . . . . . . . . . . . 150
7.6 Hardware Implementation Aspects . . . . . . . . . . . . . . . . . . . . . . . 151
7.7 Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
7.8 Practical Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
8 Self-Organizing Kohonen Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
8.2 Competitive Learning Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
8.3 Kohonen Self-Organizing Maps (SOM) . . . . . . . . . . . . . . . . . . . . . 163
8.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
8.5 Practical Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
9 LVQ and Counter-Propagation Networks . . . . . . . . . . . . . . . . . . . . . 173
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
9.2 Vector Quantization Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
9.3 LVQ Networks (Learning Vector Quantization) . . . . . . . . . . . . . 176
9.3.1 LVQ-1 Training Algorithm . . . . . . . . . . . . . . . . . . . . . . 177
9.3.2 LVQ-2 Training Algorithm . . . . . . . . . . . . . . . . . . . . . . 180
9.4 Counter-Propagation Networks . . . . . . . . . . . . . . . . . . . . . . . . . . 182
9.4.1 Aspects of the Outstar Layer . . . . . . . . . . . . . . . . . . . . . 183
9.4.2 Training Algorithm of the Counter-Propagation
Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
9.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
9.6 Practical Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186

10 ART (Adaptive Resonance Theory) Networks. . . . . . . . . . . . . . . . . . 189


10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
10.2 Topological Structure of the ART-1 Network . . . . . . . . . . . . . . . 190
10.3 Adaptive Resonance Principle. . . . . . . . . . . . . . . . . . . . . . . . . . . 192
10.4 Learning Aspects of the ART-1 Network . . . . . . . . . . . . . . . . . . 193
10.5 Training Algorithm of the ART-1 Network . . . . . . . . . . . . . . . . 201
10.6 Aspects of the ART-1 Original Version . . . . . . . . . . . . . . . . . . . 202
10.7 Exercises. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
10.8 Practical Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205

Part II Application of Artificial Neural Networks in Engineering and Applied Science Problems
11 Coffee Global Quality Estimation Using Multilayer Perceptron . . . 209
11.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
11.2 MLP Network Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
11.3 Computational Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
12 Computer Network Traffic Analysis Using SNMP Protocol
and LVQ Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
12.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
12.2 LVQ Network Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
12.3 Computational Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
13 Forecast of Stock Market Trends Using Recurrent Networks . . . . . 221
13.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
13.2 Recurrent Network Characteristics . . . . . . . . . . . . . . . . . . . . . . . 222
13.3 Computational Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
14 Disease Diagnostic System Using ART Networks . . . . . . . . . . . . . . . 229
14.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
14.2 ART Network Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
14.3 Computational Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
15 Pattern Identification of Adulterants in Coffee Powder
Using Kohonen Self-organizing Map . . . . . . . . . . . . . . . . . . . . . . . . . 235
15.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
15.2 Characteristics of the Kohonen Network . . . . . . . . . . . . . . . . . . . 236
15.3 Computational Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
16 Recognition of Disturbances Related to Electric Power Quality
Using MLP Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
16.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
16.2 Characteristics of the MLP Network . . . . . . . . . . . . . . . . . . . . . . 243
16.3 Computational Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244

17 Trajectory Control of Mobile Robot Using Fuzzy Systems
and MLP Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
17.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
17.2 Characteristics of the MLP Network . . . . . . . . . . . . . . . . . . . . . . 249
17.3 Computational Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
18 Method for Classifying Tomatoes Using Computer Vision
and MLP Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
18.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
18.2 Characteristics of the Neural Network . . . . . . . . . . . . . . . . . . . . 254
18.3 Computational Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
19 Performance Analysis of RBF and MLP Networks in Pattern
Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
19.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
19.2 Characteristics of the MLP and RBF Networks
Under Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
19.3 Computational Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
20 Solution of Constrained Optimization Problems
Using Hopfield Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
20.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
20.2 Characteristics of the Hopfield Network . . . . . . . . . . . . . . . . . . . . . 268
20.3 Mapping an Optimization Problem
Using a Hopfield Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
20.4 Computational Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
Appendix A . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
Appendix B . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
Appendix C . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
Appendix D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Appendix E . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
About the Authors

Ivan Nunes da Silva was born in São José do Rio Preto, Brazil, in 1967. He
graduated in computer science and electrical engineering from the Federal University
of Uberlândia, Brazil, in 1991 and 1992, respectively. He received both M.Sc. and
Ph.D. degrees in Electrical Engineering from the State University of Campinas
(UNICAMP), Brazil, in 1995 and 1997, respectively. Currently, he is an Associate
Professor at the University of São Paulo (USP). His research interests are within the
fields of artificial neural networks, fuzzy inference systems, power system
automation, and robotics. He is also an associate editor of the International Journal on
Power System Optimization and Editor-in-Chief of the Journal of Control,
Automation and Electrical Systems. He has published more than 400 papers in
conference proceedings, international journals, and book chapters.
Danilo Hernane Spatti was born in Araras, Brazil, in 1981. He graduated in
electrical engineering from the São Paulo State University (UNESP), Brazil, in
2005. He received both M.Sc. and Ph.D. degrees in Electrical Engineering from the
University of São Paulo (USP), Brazil, in 2007 and 2009, respectively. Currently, he
is a Senior Researcher at the University of São Paulo. His research interests are
artificial neural networks, computational complexity, systems optimization, and
intelligent systems.
Rogerio Andrade Flauzino was born in Franca, Brazil, in 1978. He graduated in
electrical engineering and received his M.Sc. degree in electrical engineering from
the São Paulo State University (UNESP), Brazil, in 2001 and 2004, respectively.
He received his Ph.D. degree in Electrical Engineering from the University of São
Paulo (USP), Brazil, in 2007. Currently, he is an Associate Professor at the University
of São Paulo. His research interests include artificial neural networks, computational
intelligence, fuzzy inference systems, and power systems.
Luisa Helena Bartocci Liboni was born in Sertãozinho, Brazil, in 1986. She
graduated in electrical engineering from the Polytechnic School of the University of
São Paulo (USP), Brazil, in 2010. She received her Ph.D. degree in Electrical
Engineering from the University of São Paulo (USP), Brazil, in 2016. Currently,
she is a Senior Researcher at the University of São Paulo. Her research interests
include artificial neural networks, intelligent systems, signal processing, and
nonlinear optimization.
Silas Franco dos Reis Alves was born in Marília, Brazil, in 1987. He graduated in
information systems from the São Paulo State University (UNESP). He received his
M.Sc. degree in Mechanical Engineering from the State University of Campinas
(UNICAMP) and his Ph.D. degree in Electrical Engineering from the University of
São Paulo (USP), Brazil, in 2011 and 2016, respectively. Currently, he is a Senior
Researcher at the University of São Paulo. His research interests include robotics,
artificial neural networks, machine learning, intelligent systems, signal processing,
and nonlinear optimization.
