
Evaluation of Various Neural Network Approaches to Solving Partial Differential Equations
With the rise of deep learning, a new body of literature has emerged that aims to solve partial differential equations (PDEs) with neural networks. Central to this approach is the expressive power of neural networks, which are universal approximators, used to model the solutions of PDEs. Recently, Kolmogorov-Arnold Networks (KANs), which leverage the Kolmogorov-Arnold representation theorem, have been proposed as a promising alternative to traditional multilayer perceptrons (MLPs) in machine-learning-based PDE solvers. In this presentation, we compare Physics-Informed Neural Networks (PINNs), the Deep Ritz method, and greedy algorithms such as Galerkin Neural Networks, focusing in particular on the accuracy, training cost, and parameter efficiency of these methods. As network architectures, we consider both MLPs and KANs. For optimization, we employ three methods: Adam, L-BFGS, and the more recent Energy Natural Gradient Descent (ENGD). As numerical experiments, we consider stationary elliptic test cases of increasing dimensionality; in particular, we present preliminary results for a simplified flow model in a fractured porous medium. At the time of writing, we have observed that ENGD accelerates the network’s convergence despite its higher per-iteration computational cost, and that it significantly improves the accuracy of the final approximation. Moreover, KANs tend to match the accuracy of classical MLPs while requiring considerably fewer parameters. This work is in progress, and further results will be shown in the talk.
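
To make the compared formulations concrete, the sketch below illustrates a PINN loss for a one-dimensional model Poisson problem -u'' = f on (0, 1) with homogeneous Dirichlet boundary conditions, where the strong-form residual is evaluated by automatic differentiation at random collocation points. This is a minimal illustrative example under standard assumptions, not the implementation evaluated in this work; the network size, learning rate, collocation sampling, and boundary penalty weight are placeholder choices.

import torch
import math

torch.manual_seed(0)

# Small MLP surrogate for the PDE solution u(x); sizes are arbitrary.
model = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(x):
    # Strong-form residual r(x) = -u''(x) - f(x) via autograd.
    x = x.requires_grad_(True)
    u = model(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    # Manufactured source term with exact solution u(x) = sin(pi x).
    f = math.pi ** 2 * torch.sin(math.pi * x)
    return -d2u - f

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x_bc = torch.tensor([[0.0], [1.0]])  # boundary collocation points

for step in range(5000):
    optimizer.zero_grad()
    x_int = torch.rand(128, 1)  # interior collocation points
    loss_pde = pde_residual(x_int).pow(2).mean()
    loss_bc = model(x_bc).pow(2).mean()  # enforce u(0) = u(1) = 0
    loss = loss_pde + 100.0 * loss_bc    # assumed penalty weight
    loss.backward()
    optimizer.step()

A Deep Ritz variant of the same sketch would replace the residual term with a Monte Carlo estimate of the variational energy of the problem, i.e. the integral over the domain of (1/2)|u'|^2 - f u, while an ENGD step would precondition the gradient with a Gram matrix derived from the energy functional instead of using Adam; swapping the MLP for a KAN changes only the definition of model.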