NEURAL PROCESSING LETTERS, vol. 55, no. 1, pp. 857-872, 2023 (SCI-Expanded)
The extreme learning machine (ELM), a learning algorithm for single hidden layer feedforward neural networks (SLFNs), has drawn the interest of a large number of researchers, especially for its training speed and good generalization performance compared to established machine learning methods. Instead of iteratively adjusting the network parameters (weights and biases), as the backpropagation neural network model does with the backpropagation algorithm and gradient descent learning, the ELM model randomly generates the input weights and biases and then solves a linear optimization problem for the hidden layer output weights. However, random determination of the input weights and hidden layer biases can result in non-optimal parameters that adversely affect the final results or require a larger number of hidden nodes in the network. In this study, a new hybrid method is proposed to overcome the drawbacks caused by non-optimal input weights and hidden biases. This hybrid method, called the CPN-ELM algorithm, uses the counter propagation network (CPN) model to systematically optimize the input weights, hidden layer neurons, and hidden biases. The performance of CPN-ELM was compared with that of the traditional ELM method on three benchmark regression datasets, and the proposed model produced higher accuracy on each dataset.
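To make the baseline concrete, the standard ELM training scheme described above (random input weights and biases, followed by a closed-form least-squares solve for the output weights) can be sketched as follows. This is a minimal illustration of plain ELM regression only; it does not include the CPN optimization step that constitutes the paper's contribution, and the function names, activation choice (tanh), and hidden layer size are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def elm_fit(X, T, n_hidden, rng):
    """Train a basic ELM regressor (illustrative sketch).

    Input weights W and hidden biases b are drawn at random and never
    updated; only the output weights beta are fitted, via the
    Moore-Penrose pseudoinverse of the hidden layer output matrix H.
    """
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden layer output matrix
    beta = np.linalg.pinv(H) @ T                     # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass: hidden activations times fitted output weights."""
    return np.tanh(X @ W + b) @ beta

# Toy 1-D regression problem to exercise the sketch.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (200, 1))
T = np.sin(3.0 * X).ravel()

W, b, beta = elm_fit(X, T, n_hidden=50, rng=rng)
pred = elm_predict(X, W, b, beta)
mse = np.mean((pred - T) ** 2)
```

Because the hidden parameters are frozen, training reduces to a single linear solve, which is the source of ELM's speed; the paper's point is that an unlucky random draw of W and b can still hurt accuracy or force a larger hidden layer, which is what the CPN-based initialization is designed to address.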