A neural-network learning theory and a polynomial time RBF algorithm

IEEE Trans Neural Netw. 1997;8(6):1301-13. doi: 10.1109/72.641453.

Abstract

This paper presents a new learning theory (a set of principles for brain-like learning) and a corresponding algorithm for the neural-network field. The learning theory defines computational characteristics that are much more brain-like than those of classical connectionist learning. Robust and reliable learning algorithms would result if these learning principles were followed rigorously when developing neural-network algorithms. This paper also presents a new algorithm for generating radial basis function (RBF) nets for function approximation. The design of the algorithm is based on the proposed set of learning principles. The net generated by this algorithm is not a typical RBF net, but a combination of "truncated" RBF and other types of hidden units. The algorithm uses random clustering and linear programming (LP) to design and train this "mixed" RBF net. Polynomial time complexity of the algorithm is proven, and computational results are provided for the well-known Mackey-Glass chaotic time series problem, the logistic map prediction problem, various neuro-control problems, and several time series forecasting problems. The algorithm can also be implemented in an online adaptive form.
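The abstract outlines the core mechanics only at a high level: hidden-layer centers chosen by random clustering, "truncated" RBF activations, and output weights fitted by linear programming. The following is a minimal sketch of that pipeline, not the paper's actual algorithm: the cutoff-based truncation, the random selection of training points as centers, and the Chebyshev (minimize-maximum-error) LP objective are all assumptions made for illustration, since the paper's exact formulation is not given in the abstract.

```python
import numpy as np
from scipy.optimize import linprog

def truncated_rbf(X, centers, width, cutoff=1e-3):
    # Gaussian RBF activations, "truncated" to exactly zero below a small
    # cutoff (assumed interpretation of the paper's truncated units).
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    phi = np.exp(-d2 / (2.0 * width ** 2))
    phi[phi < cutoff] = 0.0
    return phi

def fit_weights_lp(Phi, y):
    # LP fit of output weights w: minimize e subject to
    #   -e <= Phi @ w - y <= e
    # i.e., minimize the maximum absolute training error (an assumed
    # objective; any linear loss would also be LP-solvable).
    n, m = Phi.shape
    c = np.zeros(m + 1)
    c[-1] = 1.0  # decision variables: [w_1..w_m, e]
    A_ub = np.vstack([
        np.hstack([Phi, -np.ones((n, 1))]),   #  Phi w - e <= y
        np.hstack([-Phi, -np.ones((n, 1))]),  # -Phi w - e <= -y
    ])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * m + [(0, None)]  # weights free, error >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:m], res.x[-1]

# Toy usage: approximate sin(x) on [0, 2*pi].
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0 * np.pi, size=(200, 1))
y = np.sin(X).ravel()
# Stand-in for "random clustering": centers drawn at random from the data.
centers = X[rng.choice(len(X), size=15, replace=False)]
Phi = truncated_rbf(X, centers, width=0.6)
w, max_err = fit_weights_lp(Phi, y)
print("max training error:", max_err)
```

Since the LP has one variable per hidden unit plus one error variable and two constraints per training point, each fit runs in polynomial time, which is consistent with (though not a proof of) the complexity claim made in the abstract.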