Evolving artificial neural networks with feedback

Neural Netw. 2020 Mar:123:153-162. doi: 10.1016/j.neunet.2019.12.004. Epub 2019 Dec 14.

Abstract

Neural networks in the brain are dominated by feedback connections, which sometimes account for more than 60% of all connections and most often have small synaptic weights. In contrast, little is known about how to introduce feedback into artificial neural networks. Here we use transfer entropy in the feed-forward paths of deep networks to identify feedback candidates between the convolutional layers and determine their final synaptic weights using genetic programming. This adds about 70% more connections to these layers, all with very small weights. Nonetheless, performance improves substantially on different standard benchmark tasks and in different networks. To verify that this effect is generic, we use 36,000 configurations of small (2-10 hidden layer) conventional neural networks in a non-linear classification task and select the best-performing feed-forward nets. We then show that feedback reduces total entropy in these networks, always leading to a performance increase. This method may, thus, supplement standard techniques (e.g., error backpropagation), adding a new quality to network learning.
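To make the transfer-entropy step concrete, below is a minimal sketch of a binned transfer-entropy estimate between two activation traces, which could be used to rank unit pairs along the feed-forward direction as feedback candidates. The function name, bin count, thresholding idea, and trace construction are assumptions for illustration, not the implementation reported in the paper.

    import numpy as np

    def transfer_entropy(x, y, bins=8, eps=1e-12):
        """Binned estimate of TE(X -> Y) from two equal-length activation traces.

        TE(X -> Y) = sum p(y_{t+1}, y_t, x_t)
                     * log[ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ]
        """
        y_next, y_now, x_now = y[1:], y[:-1], x[:-1]
        # Joint histogram over (y_{t+1}, y_t, x_t); axes = (0, 1, 2)
        joint, _ = np.histogramdd(np.column_stack([y_next, y_now, x_now]), bins=bins)
        p_xyz = joint / joint.sum()
        p_yx  = p_xyz.sum(axis=0)        # p(y_t, x_t)
        p_y1y = p_xyz.sum(axis=2)        # p(y_{t+1}, y_t)
        p_y   = p_xyz.sum(axis=(0, 2))   # p(y_t)
        # TE = sum p(y1,y,x) * log[ p(y1,y,x) * p(y) / (p(y,x) * p(y1,y)) ]
        num = p_xyz * p_y[None, :, None]
        den = p_yx[None, :, :] * p_y1y[:, :, None]
        mask = p_xyz > 0
        return float(np.sum(p_xyz[mask] * np.log(num[mask] / (den[mask] + eps))))

    # Hypothetical use: score a pre/post unit pair between two layers; higher TE
    # along the feed-forward path marks the pair as a stronger feedback candidate.
    rng = np.random.default_rng(0)
    pre  = rng.normal(size=500)                                  # unit in layer L
    post = 0.6 * np.roll(pre, 1) + 0.4 * rng.normal(size=500)    # unit in layer L+1
    print(transfer_entropy(pre, post))

How candidate pairs are selected from these scores and how genetic programming then sets the final feedback weights is not specified by the code above; the sketch only illustrates the entropy measure itself.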

Keywords: Convolutional neural network; Deep learning; Feedback; Transfer entropy.

MeSH terms

  • Deep Learning / standards*
  • Feedback*
  • Practice Guidelines as Topic