Taylor & Francis Group
The convergence of a generalized convex neural network

Dataset posted on 2023-11-26, 09:40, authored by Lei Chen, Yilin Wang, Yunhe Wu, Lixiao Zhang

The convergence of neural networks is a central research topic, essential to understanding both the universal approximation capability and the structural complexity of these systems. In this study, we investigate a generalized convex incremental iteration method. This iteration is formulated more generally than in previous studies and can accommodate a broader range of weight parameters. We also systematically establish the convergence rate of the convex iteration. Our findings extend naturally to deep convolutional neural networks, which helps explain their effectiveness in various real-world applications. Furthermore, we adopt a discrete statistical perspective to address the non-compactness of the input data and the fact that the objective function is unknown in practical settings. To support our conclusions, we propose two training algorithms, back propagation and random search, the latter of which can prevent the network from becoming stuck in a local minimum during training. Finally, we present results on several regression problems showing that our algorithms perform well and are consistent with our theoretical predictions. These results provide a more comprehensive understanding of the convergence of neural networks and of its role in the universal approximation capability of these systems.
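The convex incremental iteration and the random-search training mentioned above can be illustrated with a minimal sketch. The convex update rule f_n = (1 − β_n) f_{n−1} + β_n h_n, the step size β_n = 1/n, the sigmoid hidden units, and the toy regression target are all illustrative assumptions for this sketch, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem (hypothetical; the paper's actual datasets differ).
X = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.sin(np.pi * X[:, 0])

def unit(X, w, b):
    """One sigmoid hidden unit g(x) = sigma(w.x + b)."""
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

f = np.zeros_like(y)            # f_0 = 0
n_units, n_trials = 30, 200     # illustrative sizes

for n in range(1, n_units + 1):
    beta = 1.0 / n              # a simple convex step size beta_n = 1/n
    r = y - (1.0 - beta) * f    # residual against the shrunk previous iterate
    best_err, best_h = np.inf, None
    # Random search: sample candidate units instead of gradient descent,
    # which avoids getting stuck in a poor local minimum for the new unit.
    for _ in range(n_trials):
        w = rng.normal(size=1) * 5.0
        b = rng.normal() * 5.0
        g = unit(X, w, b)
        # Optimal output scale for this candidate by least squares on r
        a = (g @ r) / (beta * max(g @ g, 1e-12))
        err = np.sum((r - beta * a * g) ** 2)
        if err < best_err:
            best_err, best_h = err, a * g
    # Convex incremental update: f_n = (1 - beta_n) f_{n-1} + beta_n h_n
    f = (1.0 - beta) * f + beta * best_h

mse = np.mean((y - f) ** 2)
```

Each step adds one randomly searched hidden unit and mixes it into the current network through a convex combination, so the training error typically decreases as units accumulate.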
