Boundary regression loss

To enforce the Dirichlet condition \(u|_{\partial \Omega}=0\) required for \(u\in H_0^1(\Omega)\), we pose the boundary regression loss

\[ L_\text{reg}(\theta; B) = \sum_{B} \left[ u(x_j,y_j;\theta) \right]^2, \hspace{1em} B = \{(x_j, y_j)\}_{j} \subset \partial\Omega. \]
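
As a concrete illustration, here is a minimal PyTorch sketch of this boundary penalty. It assumes `u_theta` is a network (e.g. a `torch.nn.Module`) mapping a batch of points of shape `(N, 2)` to values of shape `(N, 1)`; the name `boundary_loss` and that interface are assumptions for the example, not fixed by the text.

```python
import torch

def boundary_loss(u_theta, boundary_pts):
    """Dirichlet penalty: sum of squared network outputs over the boundary sample B."""
    # boundary_pts: tensor of shape (|B|, 2) holding points (x_j, y_j) on the boundary
    return (u_theta(boundary_pts) ** 2).sum()
```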

The Rayleigh-Ritz quotient can be discretized as

\[ L_\text{var}(\theta; A) = \frac{\sum_{A} \left[ u_x^2(x_j,y_j;\theta) + u_y^2(x_j, y_j;\theta) \right]}{\sum_{A} u^2(x_j, y_j;\theta) }, \hspace{1em} A = \{(x_j, y_j)\}_{j} \subset \mathring{\Omega}. \]
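
The discretized quotient can be evaluated with automatic differentiation; the sketch below (same assumed `u_theta` interface as above) obtains \(u_x\) and \(u_y\) via `torch.autograd.grad`.

```python
import torch

def variational_loss(u_theta, interior_pts):
    """Discretized Rayleigh quotient: sum(u_x^2 + u_y^2) / sum(u^2) over the interior sample A."""
    pts = interior_pts.clone().requires_grad_(True)   # shape (|A|, 2)
    u = u_theta(pts).squeeze(-1)                      # shape (|A|,)
    # Per-point gradients (u_x, u_y); create_graph=True so the loss stays differentiable in theta
    grads = torch.autograd.grad(u.sum(), pts, create_graph=True)[0]
    numerator = (grads ** 2).sum()
    denominator = (u ** 2).sum()
    return numerator / denominator
```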

The total loss to minimize is \[ L_\text{total}(\theta;A\cup B) = L_\text{var}(\theta; A) + \beta\cdot L_\text{reg}(\theta; B). \]

For training to succeed, the boundary loss must dominate the variational loss, so \(\beta\) is chosen such that

\[ \beta\cdot|B| \gg |A|. \]
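
Putting the two terms together with the helpers sketched above, one possible weighting that respects \(\beta\cdot|B|\gg|A|\) is shown below; the safety factor of 100 is an arbitrary choice for illustration, not prescribed by the text.

```python
def total_loss(u_theta, interior_pts, boundary_pts, beta=None):
    """L_total = L_var + beta * L_reg, with beta large enough that the boundary term dominates."""
    if beta is None:
        # Heuristic: beta * |B| = 100 * |A|  (the factor 100 is an assumed safety margin)
        beta = 100.0 * interior_pts.shape[0] / boundary_pts.shape[0]
    return variational_loss(u_theta, interior_pts) + beta * boundary_loss(u_theta, boundary_pts)
```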

To keep the optimizer from collapsing toward the trivial solution \(u(\cdot;\theta)\approx0\), we pretrain \(\theta\) to fit a prior function

\[ \text{prior}(x,y)\propto \exp\left(-\frac{1}{\text{dist}((x,y), \partial\Omega )} \right) \]

This choice is motivated by the fact that the first eigenfunction does not change sign and can therefore be chosen to be positive.
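
A pretraining loop along these lines might look as follows. For concreteness the sketch assumes \(\Omega=(0,1)^2\), where the distance to the boundary is \(\min(x,\,1-x,\,y,\,1-y)\); the step count, batch size, and learning rate are illustrative only, and `u_theta` is again assumed to be a `torch.nn.Module`.

```python
import torch

def prior(pts):
    """prior(x, y) proportional to exp(-1 / dist((x, y), boundary)); here Omega = (0, 1)^2 is assumed."""
    x, y = pts[:, 0], pts[:, 1]
    dist = torch.minimum(torch.minimum(x, 1 - x), torch.minimum(y, 1 - y))
    return torch.exp(-1.0 / dist.clamp_min(1e-6))   # clamp avoids division by zero at the boundary

def pretrain(u_theta, n_steps=2000, n_pts=1024, lr=1e-3):
    """Fit u(.; theta) to the positive prior so optimization does not start near u = 0."""
    opt = torch.optim.Adam(u_theta.parameters(), lr=lr)
    for _ in range(n_steps):
        pts = torch.rand(n_pts, 2)                   # uniform interior sample in (0, 1)^2
        loss = ((u_theta(pts).squeeze(-1) - prior(pts)) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
```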