Distributed learning machines for solving forward and inverse problems in partial differential equations.

Published in "Neurocomputing"
Vikas Dwivedi, Nishant Parashar, Balaji Srinivasan

We conceptualize Distributed Learning Machines (DLMs), a novel machine learning approach that integrates existing machine learning algorithms with traditional mesh-based numerical methods for solving forward and inverse problems in nonlinear partial differential equations (PDEs). In conventional numerical methods such as the finite element method (FEM), discretization of the computational domain is a standard technique to reduce the representation load on the basis functions. Along the same lines, we propose a distributed neural network architecture that facilitates the simultaneous deployment of several localized neural networks to solve PDEs in a unified manner. The most critical requirement of DLMs is the synchronization of the distributed neural networks. For this, we introduce a new physics-based interface regularization term to the cost function of existing learning machines such as the Physics Informed Neural Network (PINN) and the Physics Informed Extreme Learning Machine (PIELM). To evaluate the efficacy of this approach, we develop three distinct variants of DLM, namely time-marching Distributed PIELM (DPIELM), Distributed PINN (DPINN), and time-marching DPINN. We show that the ideas of linearization and time-marching allow DPIELM to solve nonlinear PDEs to some extent. Next, we show that DPINNs have potential advantages over existing PINNs for solving inverse problems in heterogeneous media. Finally, we propose a rapid, time-marching version of DPINN that leverages transfer learning to accelerate training. Collectively, this framework points towards the promise of hybrid Neural Network-FVM or Neural Network-FEM schemes in the future.
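
To make the interface coupling described in the abstract concrete, the following is a minimal sketch of a DPINN-style loss, assuming PyTorch, the 1D linear advection equation u_t + c u_x = 0, and two subdomains split at x = 0.5. The equation, network sizes, collocation counts, equal loss weights, and the omission of boundary terms are illustrative assumptions for this sketch, not the configurations used in the paper.

    # Minimal DPINN-style sketch (illustrative assumptions: 1D advection,
    # two subdomains, equal loss weights; not the paper's exact setup).
    import math
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    c = 1.0  # advection speed (illustrative choice)

    def make_net():
        # Small fully connected network mapping (x, t) -> u
        return nn.Sequential(nn.Linear(2, 20), nn.Tanh(),
                             nn.Linear(20, 20), nn.Tanh(),
                             nn.Linear(20, 1))

    net_left, net_right = make_net(), make_net()  # one localized network per subdomain

    def pde_residual(net, x, t):
        # Residual of u_t + c*u_x computed by automatic differentiation
        x = x.clone().requires_grad_(True)
        t = t.clone().requires_grad_(True)
        u = net(torch.cat([x, t], dim=1))
        u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
        u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
        return u_t + c * u_x

    def loss_fn():
        # PDE residual at collocation points inside each subdomain
        xl, tl = 0.5 * torch.rand(200, 1), torch.rand(200, 1)
        xr, tr = 0.5 + 0.5 * torch.rand(200, 1), torch.rand(200, 1)
        pde = pde_residual(net_left, xl, tl).pow(2).mean() \
            + pde_residual(net_right, xr, tr).pow(2).mean()

        # Initial condition u(x, 0) = sin(2*pi*x), enforced per subdomain
        # (boundary terms are omitted here to keep the sketch short)
        x0l, x0r = 0.5 * torch.rand(100, 1), 0.5 + 0.5 * torch.rand(100, 1)
        ic = (net_left(torch.cat([x0l, torch.zeros_like(x0l)], 1))
              - torch.sin(2 * math.pi * x0l)).pow(2).mean() \
           + (net_right(torch.cat([x0r, torch.zeros_like(x0r)], 1))
              - torch.sin(2 * math.pi * x0r)).pow(2).mean()

        # Interface regularization: penalize mismatch of u and u_x predicted
        # by the two neighbouring networks on the shared interface x = 0.5
        ti = torch.rand(100, 1)
        xi_l = (0.5 * torch.ones_like(ti)).requires_grad_(True)
        xi_r = (0.5 * torch.ones_like(ti)).requires_grad_(True)
        ul = net_left(torch.cat([xi_l, ti], 1))
        ur = net_right(torch.cat([xi_r, ti], 1))
        ul_x = torch.autograd.grad(ul.sum(), xi_l, create_graph=True)[0]
        ur_x = torch.autograd.grad(ur.sum(), xi_r, create_graph=True)[0]
        interface = (ul - ur).pow(2).mean() + (ul_x - ur_x).pow(2).mean()

        return pde + ic + interface

    params = list(net_left.parameters()) + list(net_right.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)
    for step in range(2000):
        opt.zero_grad()
        loss = loss_fn()
        loss.backward()
        opt.step()

In this sketch each localized network sees only collocation points from its own subdomain; the interface term is what synchronizes the otherwise independent networks, mirroring the physics-based interface regularization the abstract refers to.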