
Efficient evolution of asymmetric recurrent neural networks using a PDGP-inspired two-dimensional representation

  • Conference paper
Genetic Programming (EuroGP 1998)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1391)

Abstract

Recurrent neural networks are particularly useful for processing time sequences and simulating dynamical systems. However, methods for building recurrent architectures have been hindered by the fact that available training algorithms are considerably more complex than those for feedforward networks. In this paper, we present a new method to build recurrent neural networks based on evolutionary computation, which combines a linear chromosome with a two-dimensional representation inspired by Parallel Distributed Genetic Programming (a form of genetic programming for the evolution of graph-like programs) to evolve the architecture and the weights simultaneously. Our method can evolve general asymmetric recurrent architectures as well as specialized recurrent architectures. This paper describes the method and reports on results of its application.
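The abstract only summarizes the encoding, so the following is a minimal sketch of the general idea of a two-dimensional, PDGP-style genotype in which each grid cell holds a neuron together with its incoming weights, and an absent weight marks an absent connection, so that topology and weights are evolved in the same structure. All names, grid dimensions, and probabilities here (Node, random_genotype, mutate, link_prob) are illustrative assumptions, not the paper's actual representation or operators.

```python
import random
from dataclasses import dataclass, field
from typing import List, Optional

GRID_ROWS, GRID_COLS = 4, 3      # grid size is an assumption, not taken from the paper
N_NODES = GRID_ROWS * GRID_COLS

@dataclass
class Node:
    # weights[j] holds the weight of the incoming link from grid cell j;
    # None means "no link", so architecture and weights share one genotype.
    weights: List[Optional[float]] = field(default_factory=list)
    bias: float = 0.0

def random_genotype(link_prob: float = 0.3) -> List[Node]:
    """Build a sparse, fully general (asymmetric) recurrent wiring over the grid."""
    genotype = []
    for _ in range(N_NODES):
        weights = [random.uniform(-1.0, 1.0) if random.random() < link_prob else None
                   for _ in range(N_NODES)]
        genotype.append(Node(weights=weights, bias=random.uniform(-1.0, 1.0)))
    return genotype

def mutate(genotype: List[Node], p_weight: float = 0.1, p_topology: float = 0.02) -> None:
    """Jitter existing weights and occasionally add or prune a link, in place."""
    for node in genotype:
        for j, w in enumerate(node.weights):
            if w is not None and random.random() < p_weight:
                node.weights[j] = w + random.gauss(0.0, 0.1)
            elif random.random() < p_topology:
                # toggle the connection: create a new link or prune an existing one
                node.weights[j] = random.uniform(-1.0, 1.0) if w is None else None
```

A PDGP-inspired crossover would typically exchange sub-regions of such grids between two parents; the combined crossover operator actually used by the authors may differ from this sketch.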

Editor information

Wolfgang Banzhaf, Riccardo Poli, Marc Schoenauer, Terence C. Fogarty

Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Pujol, J.C.F., Poli, R. (1998). Efficient evolution of asymmetric recurrent neural networks using a PDGP-inspired two-dimensional representation. In: Banzhaf, W., Poli, R., Schoenauer, M., Fogarty, T.C. (eds) Genetic Programming. EuroGP 1998. Lecture Notes in Computer Science, vol 1391. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0055933

  • DOI: https://doi.org/10.1007/BFb0055933

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-64360-9

  • Online ISBN: 978-3-540-69758-9

  • eBook Packages: Springer Book Archive
