Local Approximation Order of Generalized Multiquadric Radial Basis Functions Interpolation
- B. V. Iyorter
- 406-415
- Aug 12, 2024
- Mathematics
ABSTRACT
INTRODUCTION
The generalized multiquadric radial basis function is \( \phi(r) = (1 + \epsilon r^2)^\beta \), where \( \epsilon > 0 \) is the shape parameter and \( \beta = \dots, -\frac{3}{2}, -\frac{1}{2}, \frac{1}{2}, \frac{3}{2}, \dots \)
Given data points \( x_j \), \( j = 1, 2, \dots, n \), the interpolant \( s(x) \) satisfies \( s(x_j) = f(x_j) \).
The generalized multiquadric RBF covers a wide range of infinitely differentiable RBFs, including Hardy's multiquadric function with \(\beta = \frac{1}{2}\), the inverse multiquadric function with \(\beta = -\frac{1}{2}\), and the inverse quadratic function with \(\beta = -1\). For convenience, we will refer to multiquadric functions with \(\beta > 0\) as generalized multiquadric functions and to those with \(\beta < 0\) as generalized inverse multiquadric (GIMQ) functions.
For \(\beta < 0\), the GIMQ function is strictly positive definite, and for \(0 < \beta < 1\), the generalized multiquadric function is conditionally positive definite of order one. In either case, the system matrix for the interpolation problem is invertible. For \(\beta > 1\), the generalized multiquadric function is conditionally positive definite of order \(\lceil \beta \rceil\) (where \(\lceil \cdot \rceil\) denotes the ceiling function), and to guarantee that the system matrix is invertible, it is necessary to augment the RBF interpolant with low-order polynomials.
In this paper, we will concentrate on generalized multiquadric RBFs of the form \(\phi(r) = \left(1 + \epsilon r^2\right)^\beta\), where \(\beta = \frac{1}{2}, \frac{3}{2}, \frac{5}{2}, \ldots\).
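To make the family concrete, the following minimal Python sketch evaluates \( \phi(r) = (1 + \epsilon r^2)^\beta \) for several exponents; the helper name `gmq` and the sample values of \( \epsilon \), \( \beta \), and \( r \) are illustrative assumptions, not part of the paper.

```python
import numpy as np

def gmq(r, eps=1.0, beta=0.5):
    """Generalized multiquadric phi(r) = (1 + eps * r^2)^beta."""
    return (1.0 + eps * r**2) ** beta

r = np.linspace(0.0, 2.0, 5)
# Hardy MQ, generalized MQ, generalized MQ, inverse MQ (GIMQ), inverse quadratic
for beta in (0.5, 1.5, 2.5, -0.5, -1.0):
    print(f"beta = {beta:+.1f}:", gmq(r, beta=beta))
```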
RADIAL BASIS FUNCTION INTERPOLATION
Suppose we are given data in the form \( (x_i, f_i) \) where \( f_i = f(x_i) \), \( i = 1, 2, \dots, n \). Our goal is to find an interpolant \( s(x) \), \( x \in \mathbb{R}^d \) satisfying
\[
s(x_i) = f_i \quad \text{for } i = 1, 2, \dots, n.
\]
For a strictly positive definite radial basis function interpolant, it is required that \( s(x) \) be a linear combination of translates of \( \phi(x) \), that is
\[
s(x) = \sum_{i=1}^n c_i \phi(x - x_i), \quad x \in \mathbb{R}^d.
\]
For a conditionally positive definite RBF interpolant, a low-order polynomial \( p \in \Pi_{m-1}(\mathbb{R}^d) \) is added to the equation above. This means that we let \( s(x) \) have the form
\[
s(x) = \sum_{i=1}^n c_i \phi(x - x_i) + \sum_{j=1}^q d_j p_j(x), \quad x \in \mathbb{R}^d,
\]
where \( \Pi_{m-1}(\mathbb{R}^d) \) is the space of polynomials from \( \mathbb{R}^d \) to \( \mathbb{R} \) of degree at most \( m-1 \), \( q = \dim \Pi_{m-1}(\mathbb{R}^d) \) and \( p_1, \dots, p_q \) is a basis of \( \Pi_{m-1}(\mathbb{R}^d) \), \( \| \cdot \| \) is the Euclidean norm, and \( \phi \) is the radial basis function, with the additional constraints
\[
\sum_{i=1}^n c_i p_j(x_i) = 0, \quad j = 1, 2, \dots, q.
\]
Adding the extra constraints and the polynomial conditions to the interpolant, we have the system of linear equations
\[
\begin{pmatrix} \Phi & P \\ P^T & 0 \end{pmatrix} \begin{pmatrix} c \\ d \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix},
\]
where \( \Phi = (\phi(x_i - x_j))_{1 \leq i, j \leq n} \), \( P = (p_j(x_i))_{1 \leq i \leq n,\, 1 \leq j \leq q} \) is the \( n \times q \) matrix associated with the polynomial terms, \( c \) and \( d \) are the coefficient vectors, and \( f \) is the vector of function values.
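As a concrete illustration of the system above, the following Python sketch assembles and solves the augmented interpolation system for one-dimensional data; the node set, the test function \( \sin(2\pi x) \), and the choice \( \beta = 3/2 \) with a linear polynomial block (order \( \lceil \beta \rceil = 2 \)) are assumptions made for the example.

```python
import numpy as np

def gmq(r, eps=1.0, beta=1.5):
    """Generalized multiquadric phi(r) = (1 + eps * r^2)^beta."""
    return (1.0 + eps * r**2) ** beta

# Scattered 1-D data (hypothetical test set)
x = np.linspace(0.0, 1.0, 9)
f = np.sin(2.0 * np.pi * x)
n = len(x)

r = np.abs(x[:, None] - x[None, :])          # pairwise distances
Phi = gmq(r)                                  # n x n RBF block
P = np.column_stack([np.ones(n), x])          # constants + linear terms, q = 2
q = P.shape[1]

# Saddle-point system [[Phi, P], [P^T, 0]] [c; d] = [f; 0]
A = np.block([[Phi, P], [P.T, np.zeros((q, q))]])
rhs = np.concatenate([f, np.zeros(q)])
cd = np.linalg.solve(A, rhs)
c, d = cd[:n], cd[n:]

def s(xe):
    """Evaluate the interpolant at the points xe."""
    re = np.abs(xe[:, None] - x[None, :])
    return gmq(re) @ c + np.column_stack([np.ones(len(xe)), xe]) @ d

print(np.max(np.abs(s(x) - f)))               # interpolation residual, near machine precision
```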
GENERALIZED MULTIQUADRIC RADIAL BASIS FUNCTION INTERPOLATION
To extend the concept of radial basis function interpolation for finite volume methods, we generalize the interpolation problem. Let \( \Omega = \{ \lambda_1, \lambda_2, \dots, \lambda_n \} \) denote a set of linear functionals, and let \( f_1, f_2, \dots, f_n \in \mathbb{R} \) be given data values. Then the RBF approximation augmented by a polynomial is given by
\[
s(x) = \sum_{i=1}^n c_i \lambda_i(y) \phi(x - y) + \sum_{j=1}^q d_j p_j(x),
\]
where \( \lambda_i(y) \) denotes the functional applied to \( \phi(x - y) \) as a function of \( y \) with \( x \) fixed. The interpolation problem is
\[
\lambda_i(s) = \lambda_i(f), \quad i = 1, 2, \dots, n,
\]
where \( \lambda_i(f) = f_i \) for \( i = 1, 2, \dots, n \). Since the problem is underdetermined, we add the constraints
\[
\sum_{i=1}^n c_i \lambda_i(p_j) = 0, \quad j = 1, 2, \dots, q.
\]
In matrix terms, the problem is to find a solution to the system of equations
\[
\begin{pmatrix} \Phi & P \\ P^T & 0 \end{pmatrix} \begin{pmatrix} c \\ d \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix}.
\]
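The following Python sketch illustrates the generalized interpolation problem in one dimension with a mix of point-evaluation and derivative functionals \( \lambda_i \), in the symmetric (Hermite-type) formulation. The inverse multiquadric \( \beta = -\tfrac{1}{2} \) is used so that \( \phi \) is strictly positive definite and no polynomial term is needed; the functional locations and the data function \( \sin \) are assumptions for the example.

```python
import numpy as np

eps, beta = 1.0, -0.5   # inverse multiquadric: strictly positive definite

def phi(t):   return (1.0 + eps * t**2) ** beta
def dphi(t):  return 2.0 * eps * beta * t * (1.0 + eps * t**2) ** (beta - 1.0)
def d2phi(t): return (2.0 * eps * beta * (1.0 + eps * t**2) ** (beta - 1.0)
                      + 4.0 * eps**2 * beta * (beta - 1.0) * t**2 * (1.0 + eps * t**2) ** (beta - 2.0))

f, df = np.sin, np.cos          # hypothetical data function and its derivative

xe = np.linspace(0.0, 1.0, 6)   # point-evaluation functionals lambda_i(u) = u(xe_i)
xd = np.array([0.25, 0.75])     # derivative functionals       lambda_i(u) = u'(xd_i)

# Symmetric collocation matrix: entries lambda_i^x lambda_j^y phi(x - y)
A11 = phi(xe[:, None] - xe[None, :])
A12 = -dphi(xe[:, None] - xd[None, :])
A21 = dphi(xd[:, None] - xe[None, :])
A22 = -d2phi(xd[:, None] - xd[None, :])
A = np.block([[A11, A12], [A21, A22]])

rhs = np.concatenate([f(xe), df(xd)])
c = np.linalg.solve(A, rhs)

def s(x):
    """Evaluate s(x) = sum_j c_j lambda_j^y phi(x - y)."""
    B = np.hstack([phi(x[:, None] - xe[None, :]), -dphi(x[:, None] - xd[None, :])])
    return B @ c

xt = np.linspace(0.0, 1.0, 101)
print(np.max(np.abs(s(xt) - f(xt))))   # approximation error on [0, 1], should be small
```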
LAGRANGE REPRESENTATION OF INTERPOLANT
Sometimes it is useful to work with the Lagrange representation of the interpolant
\[
s(x) = \sum_{i=1}^n L_i(x) f(x_i),
\]
where the Lagrange basis functions \( L_1(x), \dots, L_n(x) \) satisfy
\[
L_i(x_j) = \delta_{ij}, \quad \text{where } \delta_{ij} \text{ is the Kronecker delta}.
\]
For a fixed point \( x \in \mathbb{R}^d \), the vectors \( L(x) = [L_1(x), \dots, L_n(x)]^T \in \mathbb{R}^n \) and \( \mu(x) = [\mu_1(x), \dots, \mu_q(x)]^T \in \mathbb{R}^q \) form the unique solution of the linear system
\[
\begin{pmatrix} \Phi & P \\ P^T & 0 \end{pmatrix} \begin{pmatrix} L(x) \\ \mu(x) \end{pmatrix} = \begin{pmatrix} R(x) \\ S(x) \end{pmatrix}
\]
where \( R(x) = [\phi(x - x_j)]_{j=1, \dots, n} \in \mathbb{R}^n \) and \( S(x) = [p_1(x), \dots, p_q(x)]^T \in \mathbb{R}^q \).
We can write the above linear system as \( A \nu(x) = b(x) \), letting
\[
A = \begin{pmatrix} \Phi & P \\ P^T & 0 \end{pmatrix}, \quad \nu(x) = \begin{pmatrix} L(x) \\ \mu(x) \end{pmatrix}, \quad b(x) = \begin{pmatrix} R(x) \\ S(x) \end{pmatrix}
\]
Therefore
\[
s(x) = L(x)^T f = \nu(x)^T f(X) = \left( A^{-1} b(x) \right)^T f(X) = b(x)^T A^{-1} f(X) = b(x)^T b,
\]
where we use the symmetry of \( A \), and where \( f(X) = [f^T, 0]^T \in \mathbb{R}^{n+q} \) is the data vector padded with zeros and \( b = [c^T, d^T]^T \in \mathbb{R}^{n+q} \) is the coefficient vector solving \( A b = f(X) \).
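The Lagrange (cardinal) basis can be obtained numerically by solving \( A \nu(x) = b(x) \) for each evaluation point. The short Python sketch below does this for Hardy's multiquadric (\( \beta = \tfrac{1}{2} \), conditionally positive definite of order one, so a constant polynomial block suffices); the node set and the data \( e^x \) are assumptions for the example.

```python
import numpy as np

eps, beta = 1.0, 0.5                 # Hardy multiquadric, CPD of order 1

def phi(r):
    return (1.0 + eps * r**2) ** beta

x = np.linspace(0.0, 1.0, 7)         # nodes (assumption)
n = len(x)
Phi = phi(np.abs(x[:, None] - x[None, :]))
P = np.ones((n, 1))                  # constant polynomial block, q = 1
q = P.shape[1]
A = np.block([[Phi, P], [P.T, np.zeros((q, q))]])

def lagrange(xe):
    """L(xe) from A nu(xe) = b(xe), with b(xe) = [R(xe); S(xe)]."""
    b = np.concatenate([phi(np.abs(xe - x)), np.ones(1)])
    nu = np.linalg.solve(A, b)
    return nu[:n]                    # drop the multiplier mu(xe)

# Cardinality check L_i(x_j) = delta_ij, and evaluation via s(x) = L(x)^T f
L = np.array([lagrange(xj) for xj in x])
print(np.max(np.abs(L - np.eye(n))))
f = np.exp(x)
print(lagrange(0.4) @ f, np.exp(0.4))   # interpolant vs. target at x = 0.4
```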
LOCAL APPROXIMATION ORDER OF THE GENERALIZED MULTIQUADRIC INTERPOLANT
Following the approach of Iske (2003), we now give the convergence rate of generalized multiquadric RBF interpolation and the approximation order of the local generalized multiquadric RBF interpolant.
Regarding the approximation order of the generalized multiquadric interpolant, we consider solving, for some fixed point \( x_0 \in \mathbb{R}^d \) and any \( h > 0 \), the interpolation conditions
\[
u(x_0 + h x_i) = s_h(x_0 + h x_i), \quad 1 \leq i \leq n
\]
where \( X = \{x_1, \dots, x_n\} \subset \mathbb{R}^d \) is a \( \Pi_{m-1}(\mathbb{R}^d) \)-unisolvent point set of moderate size, that is, \( n \) is small. Moreover, \( s_h \) denotes the unique generalized multiquadric interpolant of the form
\[
s_h(x) = \sum_{i=1}^n c_i^h \phi_\beta(hx - h x_i) + \sum_{j=1}^q d_j^h p_j(hx)
\]
**Definition 2:** Let \( s_h \) denote the generalized multiquadric interpolant using \( \phi_\beta \) satisfying Eq. (3.2). We say that the approximation order of local generalized multiquadric interpolation at \( x_0 \in \mathbb{R}^d \) and with respect to the function space \( F \) is \( p \) if and only if for any \( u \in F \)
\[
u(x_0 + h x) - s_h(x_0 + h x) = O(h^p), \quad h \to 0
\]
holds for any \( x \in \mathbb{R}^d \) and any finite \( \Pi_{m-1}(\mathbb{R}^d) \)-unisolvent point set \( X \subset \mathbb{R}^d \).
We let \( x_0 = 0 \). Now \( c^h = [c_1^h, \dots, c_n^h]^T \in \mathbb{R}^n \) and \( d^h = [d_1^h, \dots, d_q^h]^T \in \mathbb{R}^q \) can be found by solving the linear system
\[
\begin{pmatrix} A_h & P_h \\ P_h^T & 0 \end{pmatrix} \begin{pmatrix} c^h \\ d^h \end{pmatrix} = \begin{pmatrix} u(hX) \\ 0 \end{pmatrix}
\]
where we let
\[
A_h = [\phi_\beta(hx_i - h x_j)]_{1 \leq i, j \leq n} \in \mathbb{R}^{n \times n}, \quad P_h = [p_j(hx_i)]_{1 \leq i \leq n,\, 1 \leq j \leq q} \in \mathbb{R}^{n \times q}, \quad u(hX) = [u(hx_i)]_{1 \leq i \leq n} \in \mathbb{R}^n
\]
We can write the above system as \( \mathbf{A}_h \, b^h = u^h \), where for notational brevity we let
\[
\mathbf{A}_h = \begin{pmatrix} A_h & P_h \\ P_h^T & 0 \end{pmatrix}, \quad b^h = \begin{pmatrix} c^h \\ d^h \end{pmatrix}, \quad u^h = \begin{pmatrix} u(hX) \\ 0 \end{pmatrix}
\]
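A minimal Python sketch of this scaled local system follows: it assembles \( A_h \), \( P_h \), and the padded data vector, and solves for \( c^h \) and \( d^h \). The stencil \( X \), the test function \( \sin \), and \( \beta = 3/2 \) with a linear polynomial block are assumptions for the example.

```python
import numpy as np

eps, beta = 1.0, 1.5

def phi(r):
    return (1.0 + eps * r**2) ** beta

u = np.sin                                    # smooth test function (assumption)
X = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])     # small unisolvent stencil around x0 = 0
n = len(X)

def local_coefficients(h):
    """Solve the scaled saddle-point system for c^h, d^h."""
    Ah = phi(np.abs(h * X[:, None] - h * X[None, :]))   # [phi_beta(h x_i - h x_j)]
    Ph = np.column_stack([np.ones(n), h * X])           # [p_j(h x_i)], q = 2
    q = Ph.shape[1]
    M = np.block([[Ah, Ph], [Ph.T, np.zeros((q, q))]])
    rhs = np.concatenate([u(h * X), np.zeros(q)])       # [u(hX); 0]
    sol = np.linalg.solve(M, rhs)
    return sol[:n], sol[n:]

for h in (0.5, 0.1, 0.02):
    ch, dh = local_coefficients(h)
    print(f"h = {h}:  c^h = {np.round(ch, 4)}  d^h = {np.round(dh, 4)}")
```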
We assume that the interpolant \( s_h \) has a Lagrange-type representation
\[
s_h(x) = \sum_{i=1}^n L_i^h(x) u(h x_i)
\]
where
\[
\sum_{i=1}^n L_i^h(x)\, p(x_i) = p(x) \quad \text{for all } p \in \Pi_{m-1}(\mathbb{R}^d)
\]
For \( x \in \mathbb{R}^d \), the vector \( L^h(hx) = [L_1^h(hx), \dots, L_n^h(hx)]^T \in \mathbb{R}^n \) together with \( \mu^h(hx) = [\mu_1^h(hx), \dots, \mu_q^h(hx)]^T \in \mathbb{R}^q \) is the unique solution of the linear system
\[
\begin{pmatrix} A_h & P_h \\ P_h^T & 0 \end{pmatrix} \begin{pmatrix} L^h(hx) \\ \mu^h(hx) \end{pmatrix} = \begin{pmatrix} R_h(hx) \\ S_h(hx) \end{pmatrix}
\]
where
\[
R_h(hx) = [\phi_\beta(hx - h x_j)]_{1 \leq j \leq n} \in \mathbb{R}^n, \quad S_h(hx) = [p_1(hx), \dots, p_q(hx)]^T \in \mathbb{R}^q
\]
and can be written compactly as \( \mathbf{A}_h \, \nu^h(hx) = b_h(hx) \), where \( \nu^h(hx) = [L^h(hx)^T, \mu^h(hx)^T]^T \) and \( b_h(hx) = [R_h(hx)^T, S_h(hx)^T]^T \).
Now
\[
D^\alpha s_h(hx) = \sum_{i=1}^n D^\alpha L_i^h(hx) u(h x_i)
\]
where the vectors \( D^\alpha L^h(hx) \) and \( D^\alpha \mu^h(hx) \) are the unique solution of the linear system
\[
\begin{pmatrix} A_h & P_h \\ P_h^T & 0 \end{pmatrix} \begin{pmatrix} D^\alpha L^h(hx) \\ D^\alpha \mu^h(hx) \end{pmatrix} = \begin{pmatrix} D^\alpha R_h(hx) \\ D^\alpha S_h(hx) \end{pmatrix}
\]
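Derivatives of the interpolant can thus be obtained by solving the same system with a differentiated right-hand side. The Python sketch below does this in one dimension for the first derivative and checks it against a finite difference of the interpolant; differentiating with respect to the evaluation point, the use of polynomials at the scaled nodes, and the test function are assumptions made for the example.

```python
import numpy as np

eps, beta = 1.0, 1.5

def phi(t):
    return (1.0 + eps * t**2) ** beta

def dphi(t):                                   # phi'(t) = 2*eps*beta*t*(1 + eps*t^2)^(beta-1)
    return 2.0 * eps * beta * t * (1.0 + eps * t**2) ** (beta - 1.0)

u = np.sin                                     # test function (assumption)
X = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
n, h = len(X), 0.1

Ah = phi(h * X[:, None] - h * X[None, :])
Ph = np.column_stack([np.ones(n), h * X])      # linear polynomials at the scaled nodes
q = Ph.shape[1]
M = np.block([[Ah, Ph], [Ph.T, np.zeros((q, q))]])

def cardinal(y, deriv=False):
    """L^h(y), or its first derivative, from the (differentiated) right-hand side."""
    if deriv:
        b = np.concatenate([dphi(y - h * X), [0.0, 1.0]])   # [D R_h(y); D S_h(y)]
    else:
        b = np.concatenate([phi(y - h * X), [1.0, y]])      # [R_h(y); S_h(y)]
    return np.linalg.solve(M, b)[:n]

y = 0.03
ds = cardinal(y, deriv=True) @ u(h * X)        # D s_h(y) = sum_i D L_i^h(y) u(h x_i)
dd = 1e-6                                      # central finite-difference check
fd = (cardinal(y + dd) @ u(h * X) - cardinal(y - dd) @ u(h * X)) / (2.0 * dd)
print(ds, fd)                                  # the two values should agree closely
```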
Let \( h = 1 \); then
\[
\sum_{j=1}^n L_j^1(x) \phi(x_i - x_j) + \sum_{k=1}^q \mu_k^1(x) p_k(x_i) = \phi(x - x_i)
\]
and
\[
\sum_{j=1}^n D^\alpha L_j^1(x) \phi(x_i - x_j) + \sum_{k=1}^q D^\alpha \mu_k^1(x) p_k(x_i) = D^\alpha \phi(x - x_i)
\]
Note that if
\[
\phi(x-y) = (1 + \|x-y\|^2)^\beta
\]
\[
\phi(hx - hy) = (1 + h^2 \|x-y\|^2)^\beta = h^{2\beta}(1 + \|x-y\|^2)^\beta = h^{2\beta} \phi(x-y)
\]
Then
\[
\sum_{j=1}^n L_j^1(x) \phi(x_i - x_j) + \sum_{k=1}^q \mu_k^1(x) p_k(x_i) = \phi(x - x_i)
\]
Let \( t(x-y) = \phi(x-y) - \phi(x-y) \) and
\[
\sum_{j=1}^n L_j^1(x) t(x_i - x_j) = t(x_i - x)
\]
\[
h^{2\beta} \sum_{j=1}^n L_j^1(x) t(x_i - x_j) = h^{2\beta} t(x_i - x)
\]
\]
Adding the equations gives us
\[
\sum_{j=1}^n L_j^1(x)\, h^{2\beta} \phi(x_i - x_j) + \sum_{k=1}^q h^{2\beta} \mu_k^1(x)\, p_k(x_i) = h^{2\beta} \phi(x - x_i)
\]
Now if we define \( \mu_k^h \) as
\[
\mu_k^h = h^{2\beta} \mu_k^1(x), \quad k = 1, 2, \dots, q,
\]
then for each \( x \in \mathbb{R}^d \) the vector \( L_j^1(x) \), \( j = 1, 2, \dots, n \), together with \( \mu_k^h \), \( k = 1, 2, \dots, q \), solves the linear system in Eq. (3.3). Since the solution is unique, we have \( L_j^h(x) = L_j^1(x) \) for \( j = 1, 2, \dots, n \), that is, \( L^h(x) = L^1(x) \).
We now draw conclusions on the approximation order of the local multiquadric interpolation with respect to \( C^p \). To this end, let \( u \in C^p \); the \( p \)th-order Taylor polynomial of \( u \) about \( hx \) is given by
\[
T_p(y) = \sum_{|\alpha| < p} \frac{1}{\alpha!} D^\alpha u(hx) (y-hx)^\alpha
\]
where \( p = \beta \). By Taylor's theorem, the remainder satisfies
\[
u(y) - T_p(y) = u(y) - \sum_{|\alpha| < p} \frac{1}{\alpha!} D^\alpha u(hx) (y-hx)^\alpha = O\!\left( \|y - hx\|^p \right),
\]
so that, evaluating at \( y = hx_i \),
\[
u(hx_i) - T_p(hx_i) = O(h^p), \quad h \to 0.
\]
Using the Lagrange representation of \( s_h \), the reproduction of polynomials by the Lagrange basis functions, and \( T_p(hx) = u(hx) \), this leads to
\[
u(hx) - s_h(hx) = \sum_{i=1}^n L_i^h(hx) \left( T_p(hx_i) - u(hx_i) \right)
\]
Now, by the preceding result, the Lebesgue constant
\[
\Lambda = \sup_{h > 0} \sum_{i=1}^n \left| L_i^h(hx) \right| = \sup_{h > 0} \sum_{i=1}^n \left| L_i^1(hx) \right|
\]
is bounded, so that
\[
u(hx) - s_h(hx) = O(h^p), \quad h \to 0
\]
Therefore, the approximation order of the generalized multiquadric interpolant using \( \phi_\beta \) with respect to \( C^p \) is \( p \), where \( p = \beta \).
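As an empirical check of this statement, the following Python sketch estimates the observed local order by halving \( h \) and monitoring the error at a fixed point \( x \); the stencil, the evaluation point, the test function \( \sin \), and \( \beta = 3/2 \) are assumptions, and for very small \( h \) the system becomes ill-conditioned (the flat-limit regime), which can contaminate the estimate.

```python
import numpy as np

eps, beta = 1.0, 1.5

def phi(r):
    return (1.0 + eps * r**2) ** beta

u = np.sin                                         # smooth test function (assumption)
X = np.array([-1.0, -0.6, -0.2, 0.2, 0.6, 1.0])    # fixed local stencil, x0 = 0
n = len(X)
x_eval = 0.37                                      # fixed evaluation point x

def local_error(h):
    """|u(hx) - s_h(hx)| for the scaled local interpolant."""
    Ah = phi(np.abs(h * X[:, None] - h * X[None, :]))
    Ph = np.column_stack([np.ones(n), h * X])
    q = Ph.shape[1]
    M = np.block([[Ah, Ph], [Ph.T, np.zeros((q, q))]])
    sol = np.linalg.solve(M, np.concatenate([u(h * X), np.zeros(q)]))
    c, d = sol[:n], sol[n:]
    sh = phi(np.abs(h * x_eval - h * X)) @ c + np.array([1.0, h * x_eval]) @ d
    return abs(u(h * x_eval) - sh)

hs = [2.0 ** (-k) for k in range(1, 7)]
errs = [local_error(h) for h in hs]
for h1, e0, e1 in zip(hs[1:], errs, errs[1:]):
    print(f"h = {h1:.4f}   error = {e1:.3e}   observed order ~ {np.log2(e0 / e1):.2f}")
```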
CONCLUSION
REFERENCES
- Alipanah, A. (2016). Multiquadric Radial Basis Function Method for the Solution of Brachistochrone Problem. Romanian Journal of Mathematics and Computer Science, 6(2): 126 – 133.
- Amidror, I. (2002). Scattered Data Interpolation Methods for Electronic Imaging Systems: A Survey. Journal of Electronic Imaging, 11(2): 157 – 176.
- Aràndiga, F., Donat, R., Romani, L. and Rossini, M. (2020). On the Reconstruction of Discontinuous Functions Using Multiquadric RBF-WENO Local Interpolation Technique. Mathematics and Computers in Simulation, 176: 4 – 24.
- Bustamante, C. A., Flórez, W. F., Power, H. and Giraldo, M. (2010). Control Volume-Radial Basis Function Method for Two-Dimensional Non-Linear Heat Conduction and Convection Problems. WIT Transactions on Modelling and Simulation, 50: 1 – 12.
- Chen, Y. and Cao, Y. (2021). A Coupled RBF Method for the Solution of Elastostatic Problems. Mathematical Problems in Engineering, 2021: 1 – 15.
- Chenoweth, M. E. and Sarra, S. (2009). A Numerical Study of Generalized Multiquadric Radial Basis Function Interpolation. SIAM Undergraduate Research Online, 2: 58 – 70.
- Fasshauer, G. E. (2007). Meshfree Approximation Methods with MATLAB. World Scientific Publishing, USA. 1st Edition. 489pp.
- Hardy, R. L. (1971). Multiquadric Equations of Topography and Other Irregular Surfaces. Journal of Geophysical Research, 76(8): 1905 – 1915.
- Iske, A. (2003). Radial Basis Functions: Basics, Advanced Topics and Meshfree Methods for Transport Problems. Rendiconti del Seminario Matematico, Università e Politecnico di Torino, 61(3): 247 – 286.
- Issa, K., Hambali, S. M. and Biazar, J. (2020). An Algorithm for Choosing Best Shape Parameter for Numerical Solution of Partial Differential Equation via Inverse Multiquadric Radial Basis Function. Open Journal of Mathematical Sciences, 4: 147 – 157.
- Kansa, E. J. (1990). Multiquadrics – A Scattered Data Approximation Scheme with Applications to Computational Fluid- Dynamics Solutions to Parabolic, Hyperbolic and Elliptic Partial Differential Equations. Computers and Mathematics with Applications, 19(8): 147 – 161.
- Kazem, S. and Hatam, A. (2017). A Modification on Strictly Positive Definite RBF-DQ Method Based on Matrix Decomposition. Engineering Analysis with Boundary Elements, 76: 90 – 98.
- Luga, T. and Alechenu, B. (2019). Numerical Solutions of Some Second Order Partial Differential Equations Using Non-Standard Exponent Values of the Generalized Multiquadric Radial Basis Function. IOSR Journal of Mathematics, 15(3): 38 – 46.
- Luga, T., Aboiyar, T. and Adee, S. O. (2019). Radial Basis Function Methods for Approximating the Two-Dimensional Heat Equation. International Journal of Engineering Applied Sciences and Technology, 4(2): 7 – 15.
- Madych, W. R. and Nelson, S. A. (1988). Multivariate Interpolation and Conditionally Positive Definite Functions, Approx. Theory Appl. 4: 77–89.
- Madych, W. R. and Nelson, S. A. (1990). Multivariate Interpolation and Conditionally Positive Definite Functions II, Mathematics of Computation 54: 211–230.
- Meinguet, J. (1979). Multivariate Interpolation at Arbitrary Points Made Simple, Z. Angew. Math. Phys. 30: 292–304.
- Micchelli, C. A. (1984). Interpolation of Scattered Data: Distance Matrices and Conditionally Positive Definite Functions. In Approximation Theory and Spline Functions, Springer, Dordrecht, 8: 143 – 145.
- Misra, R. K. and Kumar, S. (2013). Multiquadric Radial Basis Function Method for Boundary Value and Free Vibration Problems. Indian Journal of Industrial and Applied Mathematics, 4(1): 1 – 4.
- Rippa, S. (1999). An Algorithm for Selecting a Good Value for the Parameter c in Radial Basis Function Interpolation. Advances in Computational Mathematics, 11(2): 193 – 210.
- Steffensen, J. F. (2006). Interpolation. 2nd Edition. Dover Publications, Mineola, NY. ISBN: 978-0-486-15483.