FNNS: An Effective Feedforward Neural Network Scheme with Random Weights for Processing Large-Scale Datasets

The rest of this article is organized as follows. Section 1 introduces the traditional random-weight feed-forward neural network learning algorithm. Section 2 reviews the work related to this paper. Section 3 describes in detail the optimized random-weight feed-forward neural network learning algorithm proposed in this paper. Section 4 presents the experimental simulation results, analyzes and evaluates the performance of the algorithm, and discusses the prospects for engineering applications. Section 5 concludes the paper and outlines future work.

This study examines a feed-forward neural network model with random weights for large-scale datasets that is based on a decomposition approach. The data are randomly divided into small subsets of equal size, and each subset is then used to build the associated submodel. In a feed-forward neural network with random weights, the weights and biases of the hidden nodes that determine the nonlinear feature mapping are set randomly and are not learned, so choosing the right interval for these weights and biases is crucial; this issue has not been fully addressed in many studies. The method used in this paper calculates the optimal range of the input weights and biases according to the activation function, and each submodel is initialized with the same input weights and biases drawn from this optimal range. At the same time, an iterative scheme is adopted to evaluate the output weights.
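To make this scheme concrete, the following is a minimal sketch in Python/NumPy, not the authors' exact algorithm: the data are randomly split into equal-sized subsets, every submodel shares the same input weights and biases drawn from a chosen interval (the constant `weight_range` below is a placeholder for the activation-dependent optimal range computed by the paper's method), and each submodel's output weights are estimated with a simple iterative update.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_decomposed_fnnrw(X, y, n_hidden=50, n_subsets=4,
                           weight_range=1.0, lr=0.01, n_iters=200, seed=0):
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]

    # Shared random input weights and biases, drawn once from [-r, r];
    # "weight_range" stands in for the activation-dependent optimal interval.
    W = rng.uniform(-weight_range, weight_range, size=(n_features, n_hidden))
    b = rng.uniform(-weight_range, weight_range, size=n_hidden)

    # Split the data into equal-sized subsets at random.
    idx = rng.permutation(len(X))
    subsets = np.array_split(idx, n_subsets)

    betas = []
    for sub in subsets:
        H = sigmoid(X[sub] @ W + b)          # nonlinear feature mapping of this subset
        beta = np.zeros((n_hidden, 1))
        t = y[sub].reshape(-1, 1)
        for _ in range(n_iters):             # simple iterative update of the output weights
            beta -= lr * H.T @ (H @ beta - t) / len(sub)
        betas.append(beta)
    return W, b, betas

def predict(X, W, b, betas):
    H = sigmoid(X @ W + b)
    # Aggregate the submodels by averaging their outputs.
    return np.mean([H @ beta for beta in betas], axis=0).ravel()
```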

In addition, Ref. [12] provided a complete discussion of randomization methods for neural networks. To guarantee the universal approximation property, constraints on the random weights and biases have driven the development of neural network randomization methods [13]. However, with the ongoing advancement of information technology, data are expanding rapidly. The resulting problem is that the number of data samples or hidden-layer nodes in the NNRW model becomes very large, so computing the output weights is very time-consuming. In response to this problem, there have been many studies on large-scale data modeling over the past few decades. Ref. [14] explained how to efficiently train language models with neural networks on big datasets. Ref. [15] provided a kernel framework for energy-efficient large-scale classification and data modeling. Ref. [16] examined a population density method that makes large-scale neural network modeling possible. Ref. [17] presented a parallel computing framework for training massive neural networks. Ref. [18] presented a multiprocessor computer for simulating neural networks on a huge scale. Reducing the size of the dataset by subsampling is perhaps the easiest way of dealing with enormous datasets. Osuna, Freund, and Girosi were the first to suggest such a decomposition technique [19,20]. Bao-Liang Lu and Ito (1999) also proposed a method of this kind [21] for solving pattern classification problems. However, the approach to large-scale data processing in this paper is closer to the Bayesian committee SVM proposed by Tresp et al. [22,23]. In that approach, the dataset is split into equal-sized parts, and a model is generated from each subset. Each submodel is trained independently, and the individual results are aggregated to obtain the final decision.

Theoretically, it is clear that the random assignment of input weights and biases cannot by itself guarantee the universal approximation capability [5,6]. For this reason, many randomized learning algorithms have been proposed. Ref. [7] suggested a feed-forward neural network learning method with random weights. Ref. [8] studied the sparse algorithm of random weight networks and its applications. Ref. [9] presented a study on metaheuristic optimization of random single-hidden-layer feed-forward neural networks. The authors of [10] carried out a study on distributed learning of random vector functional-link networks. Ref. [11] proposed a probability learning algorithm based on a random weight neural network for robust modeling.

Feed-forward neural networks (FNNs) have gained increasing attention in recent years because of their flexible structural design and strong representational capacity. Feed-forward neural networks [1], which have adaptive characteristics and universal approximation properties, have been widely used in regression and classification. In addition, they offer models for studying a wide range of natural and artificial processes and have been applied in numerous technical and scientific domains [2]. In traditional neural network theory, all the parameters of an FNN, such as the input weights, biases, and output weights, need to be adjusted under specific conditions. The hierarchical network structure, however, makes this process complex and inefficient. The usual approach is a gradient-based optimization method, such as the BP algorithm, but such methods typically suffer from problems such as local minima, slow convergence, and sensitivity to the learning rate. In addition, some parameters, such as the number of hidden nodes or the parameters of the learning algorithm, need to be tuned manually. To address this series of problems, Schmidt, Kraaijveld, and Duin first proposed the FNNRW in 1992 [3]. Since the input weights and biases are drawn randomly from a uniform distribution on [−1, 1], the output weights can be estimated using the well-known least-squares approach. Many simulation results in the literature show that the randomized model achieves higher performance than the fully adaptive model while providing a simpler implementation and faster training [4].
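As an illustration of the FNNRW idea described above, the following is a minimal sketch (assuming a single hidden layer and a sigmoid activation; the function names are illustrative): the input weights and biases are drawn uniformly from [−1, 1] and kept fixed, and only the output weights are computed, here via the least-squares solution.

```python
import numpy as np

def fit_fnnrw(X, y, n_hidden=100, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))  # fixed random input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                 # fixed random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                    # hidden-layer output matrix
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)              # least-squares output weights
    return W, b, beta

def predict_fnnrw(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```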

2. Related Work

This section introduces the development of feed-forward neural networks (FNNs) and related work. It first reviews the origin of artificial neural networks (ANNs) and the practical applications of feed-forward neural networks, then discusses random-weight feed-forward neural networks and their optimization, and finally presents our optimization scheme.

An artificial neural network (ANN), also referred to as a neural network (NN), is a mathematical model for hierarchically distributed information processing that imitates the behavioral characteristics of the neural networks of animal brains [24]; it achieves data processing mainly by adjusting the connections among a large number of internal nodes, that is, the relationships between neurons [25]. The logician W. Pitts and the psychologist W.S. McCulloch created the mathematical MP neuron model in 1943. The era of artificial neural network research began when they proposed a formal mathematical description of the neuron and a network structure approach through the MP model, and demonstrated that a single neuron can carry out logical functions [26]. Artificial neural networks have many model structures, and feed-forward neural networks are only one of them [27].

Frank Rosenblatt created the perceptron, an artificial neural network, in 1957. The perceptron is a simple neural network in which the neurons are arranged in layers and each neuron is connected only to the previous layer: the output of the previous layer is received and passed on to the next layer, and there is no feedback between neurons within a layer. This is the earliest form of a feed-forward neural network (FNN). The feed-forward neural network is one of the most popular and rapidly evolving artificial neural networks due to its straightforward construction. The study of feed-forward neural networks started in the 1960s, and both theoretical and practical advances have been made. The FNN can be regarded as a multilayer perceptron; with full connections between adjacent layers, it is a typical deep learning model. With large data samples it performs very well and can solve problems that some traditional machine learning models cannot handle; however, deep learning models trained on small data samples are complex and difficult to interpret. The FNN shares these characteristics, so it is mainly used in scenarios with large datasets. A feed-forward neural network-based approach for generating rocket trajectories online is presented in [28], where the trajectory is approximated using the neural network's nonlinear mapping capability. In [29], deep feed-forward neural networks are used to study source term inversion for nuclear accidents, and the Bayesian MCMC technique is used to examine the DFNN's prediction uncertainty when the input parameters are uncertain. In [30], with the chaotic encryption of the polarization division multiplexing OFDM/OQAM system, a feed-forward neural network is used to increase data transmission security and realize a huge key space. In [31], a feed-forward backpropagation artificial neural network is used to predict the force response of linear structures, which helps researchers understand the mechanical response properties of complex joints with special nonlinearities. In [32], an initial weight approach and a construction algorithm are combined to form a novel feed-forward neural network multi-class classification method that achieves high success rates. In [33], a hybrid of MVO and FNN was improved for fault identification of WSN cluster head data. It is clear that feed-forward neural networks are used in a wide variety of industries.


Feed-forward neural networks demonstrate the superiority of mathematical models in a variety of applications, but as the size of datasets keeps growing, the original performance of the feed-forward neural network cannot keep up with the demands of engineering. As a result, many researchers have focused on developing improved feed-forward neural networks to deal with large-scale datasets. The feed-forward neural network is optimized primarily from two aspects: on the one hand, the interval from which the random weights are selected is optimized to enhance the performance of the algorithm, as mentioned in [33,37]; on the other hand, large datasets are processed using sample selection methods, as described in [38,39], in order to enhance the algorithm's performance. Random weight optimization is the approach most commonly used by scholars. Because they offer effective learning capabilities, feed-forward neural networks are frequently employed in mathematical modeling [34]. Recently, some advanced stochastic learning algorithms have gradually been developed. A feed-forward neural network with random hidden nodes was proposed in [34], in which the weights and biases are generated randomly depending on the input data and the type of activation function, allowing the model's level of generalization to be controlled. In [35], an iterative training solution for large-scale datasets is developed, and a regularization model is used to generate an initial learning model with improved generalization ability; good applicability and effectiveness are achieved when dealing with large-scale datasets. In [36], a distributed learning algorithm for feed-forward neural networks with random weights is proposed using an event-triggered communication scheme; to reduce unnecessary transmission overhead, the method adopts a discrete-time zero-gradient-sum strategy and introduces an event-triggered communication approach. In [37], a new method for initializing the weights of feed-forward neural networks is proposed, which linearizes the whole network around the equilibrium point when setting the initial weights and biases. Ref. [40] offered a linear algebraic approach based on the Cauchy inequality that keeps the neuron outputs in the active region, speeds up convergence, and drastically decreases the number of algorithm iterations. On the sample selection side, many scholars discuss hot topics such as combining Monte Carlo simulations (MCSs) with feed-forward neural networks (FNNs) to implement efficient sampling [38]. The authors of [39] put forward an incremental learning method based on a hybrid fuzzy neural network framework, improving the performance of the algorithm from the perspective of the dataset.
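For the regularized formulation mentioned in connection with [35], a hedged sketch of a closed-form L2-regularized output-weight solution is shown below. This is a generic ridge solution over the hidden-layer output matrix H, not necessarily the exact method of [35]; the regularization term typically improves generalization when H is large or ill-conditioned.

```python
import numpy as np

def ridge_output_weights(H, T, lam=1e-2):
    """Solve min_beta ||H @ beta - T||^2 + lam * ||beta||^2 in closed form."""
    n_hidden = H.shape[1]
    A = H.T @ H + lam * np.eye(n_hidden)   # regularized normal equations
    return np.linalg.solve(A, H.T @ T)
```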

Feed-forward neural network optimization has drawn a lot of interest in the era of big data. Existing work primarily optimizes either the random weights or the sample size, so researchers have only been able to improve performance from one angle in isolation: relying on random weights alone yields poor feed-forward neural network performance on large-scale datasets, and the practical applicability of sample selection and feature extraction is not exhaustive. In order to handle enormous datasets, this research therefore suggests a random-weight feed-forward neural network scheme based on a decomposition approach, which preserves the integrity of the network while retaining the randomized character of feed-forward neural networks.