International Journal of Research and Innovation in Applied Science (IJRIAS)

Local Approximation Order of Generalized Multiquadric Radial Basis Functions Interpolation

B. V. Iyorter
Department of Mathematics and Computer Science, University of Mkar, Mkar, Nigeria.
Received: 10 July 2024; Revised: 24 July 2024; Accepted: 27 July 2024; Published: 11 August 2024

ABSTRACT

This paper discusses local interpolation by generalized multiquadric radial basis functions. The convergence rate for local scattered data interpolation by the generalized multiquadric radial basis function is established. The multiquadric interpolant is presented in Lagrange form and used to prove the convergence of the multiquadric interpolation.

INTRODUCTION

Data interpolation is a technique used in data analysis to estimate values between known data points. It involves filling in missing data points or estimating values at points where data is not directly measured. The aim of interpolation is to create a continuous representation of a dataset, which can be useful for various purposes like smoothing out irregularities in data, creating visualizations, and making predictions (Amidror, 2002). Data interpolation can be global or local. Local interpolation uses a sample of known data points to estimate the unknown value. It fits the specified order polynomial using points only within the defined neighbourhood. Local interpolation techniques apply a single mathematical function repeatedly to subsets of the total set of observed data points, then link these regional surfaces to create a composite surface covering the whole study area (Steffensen, 2006).
Scattered data interpolation, especially on irregular domains or higher dimensional geometry, is an important problem in science and engineering. Methods such as trigonometric, algebraic, and spline interpolations have been employed for a wide range of problems but have not been so efficient in higher dimensions or scattered nodes problems on irregular domains. Radial basis function interpolation is an alternative method for such problems (Chen and Cao, 2021; Kazem and Hatam, 2017).
The origin of radial basis functions can be traced to Hardy (1971), who introduced the multiquadric RBF to solve surface-fitting problems on topography and other irregular surfaces. Hardy's multiquadric interpolation scheme went largely unnoticed until 1979, when the mathematician Richard Franke compared various methods for solving the scattered data interpolation problem and found Hardy's multiquadric method to be the most impressive: it was consistently best or near best in terms of accuracy and always produced visually pleasing surfaces. He also found that the system matrix of the method was invertible and that the method was well posed (Madych and Nelson, 1988; Micchelli, 1984; Meinguet, 1979).
Micchelli (1984) developed the theory behind the multiquadric method and proved that its system matrix is invertible. Kansa (1990) was the first to apply the multiquadric method to solve differential equations. Madych and Nelson (1990) showed the spectral convergence rate of multiquadric interpolation. Since Kansa's work, research in RBF methods, and the multiquadric in particular, has grown rapidly. Recently, RBF methods have gained attention in scientific computing and engineering applications such as function interpolation, numerical solution of partial differential equations, and multivariate scattered data processing. The main advantages of these methods are the spectral convergence rates that can be achieved with infinitely smooth functions, geometric flexibility, and ease of implementation (Aràndiga et al., 2020; Rippa, 1999).
The generalized multiquadric RBFs take the form
\[
\phi(r) = \left(1 + \epsilon r^2\right)^\beta, \quad \beta = \ldots, -\tfrac{3}{2}, -\tfrac{1}{2}, \tfrac{1}{2}, \tfrac{3}{2}, \ldots
\]
Given data points \( x_j \), \( j = 1, 2, \dots, n \), the interpolant \( s(x) \) satisfies \( s(x_j) = f(x_j) \).

The generalized multiquadric RBF covers a wide range of infinitely differentiable RBFs, including Hardy's multiquadric function with \(\beta = \frac{1}{2}\), the inverse multiquadric function with \(\beta = -\frac{1}{2}\), and the inverse quadric with \(\beta = -1\). For convenience, we will refer to multiquadric functions with \(\beta > 0\) as the generalized multiquadric functions, and to those with \(\beta < 0\) as the generalized inverse multiquadric (GIMQ) functions.

For \(\beta < 0\), the GIMQ function is strictly positive definite, and for \(0 < \beta < 1\), the generalized multiquadric function is conditionally positive definite of order one. In either case, the system matrix for the interpolation problem is invertible. For \(\beta > 1\), the generalized multiquadric function is conditionally positive definite of order \(\lceil \beta \rceil\) (where \(\lceil \cdot \rceil\) denotes the ceiling function), and to show that the system matrix is invertible, it is necessary to attach low-order polynomials to the RBF interpolant.
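For example, with \(\beta = \frac{3}{2}\) we have \(\lceil \beta \rceil = 2\), so the interpolant must be augmented with polynomials of degree at most one (a constant plus the linear terms) before the system matrix is guaranteed to be invertible.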

In this paper, we will concentrate on generalized multiquadric RBFs of the form \(\phi(r) = \left(1 + \epsilon r^2\right)^\beta\), where \(\beta = \frac{1}{2}, \frac{3}{2}, \frac{5}{2}, \ldots\)
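As a quick illustration, here is a minimal Python sketch of this family of basis functions (the helper name gen_mq is ours, not from the paper), evaluating the special cases listed above alongside a value of \(\beta\) used in this paper:

```python
import numpy as np

def gen_mq(r, eps=1.0, beta=0.5):
    """Generalized multiquadric phi(r) = (1 + eps * r^2)^beta."""
    return (1.0 + eps * r**2) ** beta

r = np.linspace(0.0, 2.0, 5)
print(gen_mq(r, beta=0.5))    # Hardy's multiquadric
print(gen_mq(r, beta=-0.5))   # inverse multiquadric
print(gen_mq(r, beta=-1.0))   # inverse quadric
print(gen_mq(r, beta=1.5))    # a generalized multiquadric considered in this paper
```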

RADIAL BASIS FUNCTION INTERPOLATION

Suppose we are given data in the form \( (x_i, f_i) \) where \( f_i = f(x_i) \), \( i = 1, 2, \dots, n \). Our goal is to find an interpolant \( s(x) \), \( x \in \mathbb{R}^d \) satisfying

\[
s(x_i) = f_i \quad \text{for } i = 1, 2, \dots, n.
\]

For a strictly positive definite radial basis function interpolant, it is required that \( s(x) \) be a linear combination of translates of \( \phi(x) \), that is

\[
s(x) = \sum_{i=1}^n c_i \phi(x - x_i), \quad x \in \mathbb{R}^d.
\]

For a conditionally positive definite RBF interpolant, a low-order polynomial \( p \in \Pi_{m-1}(\mathbb{R}^d) \) is added to the equation above. This means that we let \( s(x) \) have the form

\[
s(x) = \sum_{i=1}^n c_i \phi(x - x_i) + \sum_{j=1}^q d_j p_j(x), \quad x \in \mathbb{R}^d,
\]

where \( \Pi_{m-1}(\mathbb{R}^d) \) is the space of polynomials from \( \mathbb{R}^d \) to \( \mathbb{R} \) of degree at most \( m-1 \), \( q \in \mathbb{N} \) is the dimension of \( \Pi_{m-1}(\mathbb{R}^d) \), \( \| \cdot \| \) is the Euclidean norm, and \( \phi \) is the radial basis function, with \( \phi(x - x_i) \) shorthand for \( \phi(\| x - x_i \|) \), subject to the additional constraints

\[
\sum_{i=1}^n c_i p_j(x_i) = 0, \quad j = 1, 2, \dots, q.
\]

Combining the interpolation conditions with these polynomial constraints, we obtain the system of linear equations

\[
\begin{pmatrix} \Phi & P \\ P^T & 0 \end{pmatrix} \begin{pmatrix} c \\ d \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix},
\]

where \( \Phi = (\phi(x_i - x_j))_{1 \leq i, j \leq n} \), \( P \) is the matrix associated with the polynomial terms, \( c \) and \( d \) are coefficient vectors, and \( f \) is the vector of function values.
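To make the assembly of this augmented system concrete, the following Python sketch (using NumPy; the names gen_mq and mq_interpolate are ours, and the choice of a linear polynomial tail is an assumption consistent with \( \beta = \frac{3}{2} \)) builds and solves the block system for scattered data in the plane:

```python
import numpy as np

def gen_mq(r, eps=1.0, beta=1.5):
    """Generalized multiquadric phi(r) = (1 + eps * r^2)^beta."""
    return (1.0 + eps * r**2) ** beta

def mq_interpolate(X, f, eps=1.0, beta=1.5):
    """Solve [[Phi, P], [P^T, 0]] [c; d] = [f; 0] with a linear polynomial tail."""
    n, d = X.shape
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
    Phi = gen_mq(r, eps, beta)                                    # n x n interpolation block
    P = np.hstack([np.ones((n, 1)), X])                           # n x (d + 1) polynomial block
    q = P.shape[1]
    A = np.block([[Phi, P], [P.T, np.zeros((q, q))]])
    coef = np.linalg.solve(A, np.concatenate([f, np.zeros(q)]))
    c, dcoef = coef[:n], coef[n:]
    def s(x):                                                     # the interpolant s(x)
        x = np.atleast_2d(x)
        rx = np.linalg.norm(x[:, None, :] - X[None, :, :], axis=-1)
        return gen_mq(rx, eps, beta) @ c + np.hstack([np.ones((len(x), 1)), x]) @ dcoef
    return s

# Usage: the interpolant reproduces the data at the nodes (up to rounding error).
rng = np.random.default_rng(0)
X = rng.random((30, 2))
f = np.sin(X[:, 0]) * np.cos(X[:, 1])
s = mq_interpolate(X, f)
print(np.max(np.abs(s(X) - f)))
```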

GENERALIZED MULTIQUADRIC RADIAL BASIS FUNCTION INTERPOLATION

To extend the concept of radial basis function interpolation for finite volume methods, we generalize the notion of interpolation. Let \( \Omega = \{ \lambda_1, \lambda_2, \dots, \lambda_n \} \) denote a set of linear functionals, and let \( f_1, f_2, \dots, f_n \in \mathbb{R} \) be given data values. Then the RBF approximation augmented by a polynomial is given by

\[
s(x) = \sum_{i=1}^n c_i \lambda_i(y) \phi(x - y) + \sum_{j=1}^q d_j p_j(x),
\]

where \( \lambda_i(y) \) denotes the functional applied to \( \phi(x - y) \) as a function of \( y \) with \( x \) fixed. The interpolation problem is

\[
\lambda_i(s) = \lambda_i(f), \quad i = 1, 2, \dots, n,
\]

where \( \lambda_i(f) = f_i \) for \( i = 1, 2, \dots, n \). Since the problem is underdetermined, we add the constraints

\[
\sum_{i=1}^n c_i \lambda_i(p_j) = 0, \quad j = 1, 2, \dots, q.
\]

In matrix terms, the problem is to find a solution to the system of equations

\[
\begin{pmatrix} \Phi & P \\ P^T & 0 \end{pmatrix} \begin{pmatrix} c \\ d \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix}.
\]

LAGRANGE REPRESENTATION OF INTERPOLANT

Sometimes it is useful to work with the Lagrange representation of the interpolant

\[
s(x) = \sum_{i=1}^n L_i(x) f(x_i),
\]

where the Lagrange basis functions \( L_1(x), \dots, L_n(x) \) satisfy

\[
L_i(x_j) = \delta_{ij}, \quad \text{where } \delta_{ij} \text{ is the Kronecker delta}.
\]

 

For a fixed point \( x \in \mathbb{R}^d \), the vectors \( L(x) = [L_1(x), \dots, L_n(x)]^T \in \mathbb{R}^n \) and \( \mu(x) = [\mu_1(x), \dots, \mu_q(x)]^T \in \mathbb{R}^q \) are the unique solution of the linear system

\[
\begin{pmatrix} \Phi & P \\ P^T & 0 \end{pmatrix} \begin{pmatrix} L(x) \\ \mu(x) \end{pmatrix} = \begin{pmatrix} R(x) \\ S(x) \end{pmatrix}
\]

where \( R(x) = [\phi(x - x_j)]_{j=1, \dots, n} \in \mathbb{R}^n \) and \( S(x) = [p_1(x), \dots, p_q(x)]^T \in \mathbb{R}^q \).

We can write the above linear system as \( A \nu(x) = b(x) \), letting

\[
A = \begin{pmatrix} \Phi & P \\ P^T & 0 \end{pmatrix}, \quad \nu(x) = \begin{pmatrix} L(x) \\ \mu(x) \end{pmatrix}, \quad b(x) = \begin{pmatrix} R(x) \\ S(x) \end{pmatrix}.
\]

Therefore, using the symmetry of \( A \),

\[
s(x) = L(x)^T f = \nu(x)^T f(X) = \left( A^{-1} b(x) \right)^T f(X) = b(x)^T A^{-1} f(X) = b(x)^T b,
\]

where \( f(X) = \begin{pmatrix} f \\ 0 \end{pmatrix} \in \mathbb{R}^{n+q} \) and \( b = A^{-1} f(X) = \begin{pmatrix} c \\ d \end{pmatrix} \in \mathbb{R}^{n+q} \) is the coefficient vector.
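A small Python sketch can be used to check this Lagrange representation numerically: solving \( A \nu(x) = b(x) \) for a single evaluation point returns the cardinal functions, which should satisfy \( L_i(x_j) = \delta_{ij} \). The helper names below are ours, and the linear polynomial tail is again an assumption:

```python
import numpy as np

def gen_mq(r, eps=1.0, beta=1.5):
    return (1.0 + eps * r**2) ** beta

def lagrange_basis(X, x, eps=1.0, beta=1.5):
    """Solve A [L(x); mu(x)] = [R(x); S(x)] and return L(x)."""
    n, d = X.shape
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = gen_mq(r, eps, beta)
    P = np.hstack([np.ones((n, 1)), X])                      # linear polynomial tail
    q = P.shape[1]
    A = np.block([[Phi, P], [P.T, np.zeros((q, q))]])
    R = gen_mq(np.linalg.norm(x - X, axis=-1), eps, beta)    # R(x) in R^n
    S = np.concatenate([[1.0], x])                           # S(x) in R^q
    nu = np.linalg.solve(A, np.concatenate([R, S]))          # nu(x) = [L(x); mu(x)]
    return nu[:n]

# Cardinality check: L_i(x_j) should equal the Kronecker delta.
rng = np.random.default_rng(1)
X = rng.random((10, 2))
L = np.array([lagrange_basis(X, xj) for xj in X])
print(np.max(np.abs(L - np.eye(10))))   # close to machine precision
```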

LOCAL APPROXIMATION ORDER OF THE GENERALIZED MULTIQUADRIC INTERPOLANT

Following the approach of Iske (2003), the convergence rate of the generalized multiquadric RBF interpolation and the approximation order of the local multiquadric RBF interpolant are given.

Regarding the approximation order of the generalized multiquadric interpolant, we consider solving, for some fixed point \( x_0 \in \mathbb{R}^d \) and any \( h > 0 \),

\[
u(x_0 + h x_i) = s_h(x_0 + h x_i), \quad 1 \leq i \leq n
\]

where \( X = \{x_1, \dots, x_n\} \subset \mathbb{R}^d \) is a \( \Pi_{m-1}(\mathbb{R}^d) \)-unisolvent point set of moderate size, that is, \( n \) is small. Moreover, \( s_h \) denotes the unique generalized multiquadric interpolant of the form

\[
s_h(x) = \sum_{i=1}^n c_i^h \phi_\beta(hx - h x_i) + \sum_{j=1}^q d_j^h p_j(hx).
\]

Definition 2: Let \( s_h \) denote the generalized multiquadric interpolant using \( \phi_\beta \) satisfying the interpolation conditions above. We say that the approximation order of local generalized multiquadric interpolation at \( x_0 \in \mathbb{R}^d \) with respect to the function space \( F \) is \( p \) if and only if, for any \( u \in F \),

\[
u(x_0 + h x) - s_h(x_0 + h x) = O(h^p), \quad h \to 0
\]

holds for any \( x \in \mathbb{R}^d \) and any finite \( \Pi_{m-1}(\mathbb{R}^d) \)-unisolvent point set \( X \subset \mathbb{R}^d \).

We let \( x_0 = 0 \). Now \( c^h = [c_1^h, \dots, c_n^h]^T \in \mathbb{R}^n \) and \( d^h = [d_1^h, \dots, d_q^h]^T \in \mathbb{R}^q \) can be found by solving the linear system

\[
\begin{pmatrix} A_h & P_h \\ P_h^T & 0 \end{pmatrix} \begin{pmatrix} c^h \\ d^h \end{pmatrix} = \begin{pmatrix} u(hX) \\ 0 \end{pmatrix},
\]

where we let

\[
A_h = [\phi_\beta(hx_i - h x_j)]_{1 \leq i, j \leq n} \in \mathbb{R}^{n \times n}, \quad P_h = [p_j(hx_i)]_{1 \leq i \leq n,\, 1 \leq j \leq q} \in \mathbb{R}^{n \times q}, \quad u(hX) = [u(hx_i)]_{1 \leq i \leq n} \in \mathbb{R}^n.
\]

We can write the above system as \( \mathcal{A}_h b^h = u^h \), where for notational brevity we let

\[
\mathcal{A}_h = \begin{pmatrix} A_h & P_h \\ P_h^T & 0 \end{pmatrix}, \quad b^h = \begin{pmatrix} c^h \\ d^h \end{pmatrix}, \quad u^h = \begin{pmatrix} u(hX) \\ 0 \end{pmatrix}.
\]

We assume that the interpolant \( s_h \) has a Lagrange-type representation

\[
s_h(hx) = \sum_{i=1}^n L_i^h(hx) u(h x_i)
\]

where

\[
\sum_{i=1}^n L_i^h(hx)\, p(hx_i) = p(hx) \quad \text{for all } p \in \Pi_{m-1}(\mathbb{R}^d).
\]

For \( x \in \mathbb{R}^d \), the vector \( L^h(hx) = [L_1^h(hx), \dots, L_n^h(hx)]^T \in \mathbb{R}^n \) together with \( \mu^h(hx) = [\mu_1^h(hx), \dots, \mu_q^h(hx)]^T \in \mathbb{R}^q \) is the unique solution of the linear system

\[
\begin{pmatrix} A_h & P_h \\ P_h^T & 0 \end{pmatrix} \begin{pmatrix} L^h(hx) \\ \mu^h(hx) \end{pmatrix} = \begin{pmatrix} R_h(hx) \\ S_h(hx) \end{pmatrix}
\]

where

\[
R_h(hx) = [\phi_\beta(hx - h x_j)]_{1 \leq j \leq n} \in \mathbb{R}^n, \quad S_h(hx) = [p_1(hx), \dots, p_q(hx)]^T \in \mathbb{R}^q,
\]

and can be written as \( \mathcal{A}_h \nu^h(hx) = b^h(hx) \).

Now

\[
D^\alpha s_h(hx) = \sum_{i=1}^n D^\alpha L_i^h(hx) u(h x_i)
\]

where \( D^\alpha L^h(hx) \) and \( D^\alpha \mu^h(hx) \) are the unique solution of the linear system

\[
\begin{pmatrix} A_h & P_h \\ P_h^T & 0 \end{pmatrix} \begin{pmatrix} D^\alpha L^h(hx) \\ D^\alpha \mu^h(hx) \end{pmatrix} = \begin{pmatrix} D^\alpha R_h(hx) \\ D^\alpha S_h(hx) \end{pmatrix}
\]

Let \( h = 1 \); then

\[
\sum_{j=1}^n L_j^1(x) \phi(x_i - x_j) + \sum_{k=1}^q \mu_k^1(x) p_k(x_i) = \phi(x - x_i), \quad i = 1, \dots, n,
\]

and

\[
\sum_{j=1}^n D^\alpha L_j^1(x) \phi(x_i - x_j) + \sum_{k=1}^q D^\alpha \mu_k^1(x) p_k(x_i) = D^\alpha \phi(x - x_i), \quad i = 1, \dots, n.
\]

Note that if

\[
\phi(x-y) = (1 + \|x-y\|^2)^\beta
\]

\[
\phi(hx - hy) = (1 + h^2 \|x-y\|^2)^\beta = h^{2\beta}(1 + \|x-y\|^2)^\beta = h^{2\beta} \phi(x-y)
\]

Then

\[
\sum_{j=1}^n L_j^1(x) \phi(x_i - x_j) + \sum_{k=1}^q \mu_k^1(x) p_k(x_i) = \phi(x - x_i)
\]

Let \( t(x-y) = \phi(x-y) - \phi(x-y) \) and

\[
\sum_{j=1}^n L_j^1(x) t(x_i - x_j) = t(x_i - x)
\]

\[
h^{2\beta} \sum_{j=1}^n L_j^1(x) t(x_i - x_j) = h^{2\beta} t(x_i - x)
\]

Adding the equations gives us

\[
\sum_{j=1}^n L_j^1(x)\, h^{2\beta} \phi(x_i - x_j) + \sum_{k=1}^q h^{2\beta} \mu_k^1(x)\, p_k(x_i) = h^{2\beta} \phi(x - x_i).
\]

Now, if we define \( \mu_k^h \) as

\[
\mu_k^h(x) = h^{2\beta} \mu_k^1(x), \quad k = 1, 2, \dots, q,
\]

then for each \( x \in \mathbb{R}^d \) the vector \( L_j^1(x) \), \( j = 1, 2, \dots, n \), together with \( \mu_k^h(x) \), \( k = 1, 2, \dots, q \), solves the linear system defining the Lagrange basis functions. Since the solution is unique, we have \( L_j^h(x) = L_j^1(x) \) for \( j = 1, 2, \dots, n \), that is, \( L^h(x) = L^1(x) \).

We now draw conclusions on the approximation order of the local multiquadric interpolation with respect to \( C^p \). To this end, let \( u \in C^p \); the \( p \)th order Taylor polynomial of \( u \) about \( hx \) is given by

\[
T_p(y) = \sum_{|\alpha| < p} \frac{1}{\alpha!} D^\alpha u(hx) (y-hx)^\alpha,
\]

where \( p = \beta \). Since \( u \in C^p \), Taylor's theorem gives

\[
u(y) - T_p(y) = O(\|y - hx\|^p), \quad y \to hx,
\]

so that, at the scaled nodes \( y = hx_i \),

\[
u(hx_i) - T_p(hx_i) = O(h^p), \quad h \to 0.
\]

This leads, using \( T_p(hx) = u(hx) \) together with the polynomial reproduction property of the Lagrange basis functions, to

\[
u(hx) - s_h(hx) = \sum_{i=1}^n L_i^h(hx) \bigl( T_p(hx_i) - u(hx_i) \bigr).
\]

Now, by our result \( L^h = L^1 \), the Lebesgue constant

\[
\Lambda = \sup_{h > 0} \sum_{i=1}^n \left| L_i^h(hx) \right| = \sup_{h > 0} \sum_{i=1}^n \left| L_i^1(hx) \right|
\]

is bounded, so that

\[
u(hx) - s_h(hx) = O(h^p), \quad h \to 0.
\]

Therefore, the approximation order of the generalized multiquadric interpolant using \( \phi_\beta \) with respect to \( C^p \) is \( p \), where \( p = \beta \).
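Definition 2 also suggests a simple numerical experiment. The sketch below (names such as local_interp_error are ours; the smooth test function, the node set, and the linear polynomial tail are illustrative choices) interpolates \( u \) on the scaled nodes \( hX \) and records the error at a fixed scaled point \( hx \) as \( h \) decreases, so the observed rate can be compared with the order claimed above:

```python
import numpy as np

def gen_mq(r, eps=1.0, beta=1.5):
    return (1.0 + eps * r**2) ** beta

def local_interp_error(u, X, x_eval, h, eps=1.0, beta=1.5):
    """Interpolate u on the scaled nodes h*X (linear tail) and return |u(h*x_eval) - s_h(h*x_eval)|."""
    n, d = X.shape
    Xh = h * X
    r = np.linalg.norm(Xh[:, None, :] - Xh[None, :, :], axis=-1)
    Phi = gen_mq(r, eps, beta)
    P = np.hstack([np.ones((n, 1)), Xh])
    q = P.shape[1]
    A = np.block([[Phi, P], [P.T, np.zeros((q, q))]])
    coef = np.linalg.solve(A, np.concatenate([u(Xh), np.zeros(q)]))
    c, dcoef = coef[:n], coef[n:]
    xe = h * x_eval
    s = gen_mq(np.linalg.norm(xe - Xh, axis=-1), eps, beta) @ c + np.concatenate([[1.0], xe]) @ dcoef
    return abs(u(xe[None, :])[0] - s)

u = lambda Y: np.sin(Y[:, 0]) * np.cos(Y[:, 1])       # a smooth test function
rng = np.random.default_rng(2)
X = rng.random((12, 2)) - 0.5                         # small node set around x_0 = 0
x_eval = np.array([0.1, 0.2])                         # fixed direction x
hs = [2.0 ** (-k) for k in range(1, 7)]
errs = [local_interp_error(u, X, x_eval, h) for h in hs]
for h, e0, e1 in zip(hs[1:], errs, errs[1:]):
    print(f"h = {h:.4f}  error = {e1:.3e}  observed rate = {np.log2(e0 / e1):.2f}")
```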

CONCLUSION

The local approximation order of the generalized multiquadric radial basis function interpolation has been presented. First, the generalized multiquadric radial basis function was defined; then radial basis function interpolation was described, with particular attention to generalized multiquadric radial basis function interpolation. The Lagrange representation of the interpolant was also presented and used to prove the convergence of the multiquadric interpolation.

REFERENCES

  1. Alipanah, A. (2016). Multiquadric Radial Basis Function Method for the Solution of Brachistochrone Problem. Romanian Journal of Mathematics and Computer Science, 6(2): 126 – 133.
  2. Amidror, I. (2002). Scattered Data Interpolation Methods for Electronic Imaging Systems: A Survey. Journal of Electronic Imaging, 11(2): 157 – 176.
  3. Aràndiga, F., Donat, R., Romani, L. and Rossini, M. (2020). On the Reconstruction of Discontinuous Functions Using Multiquadric RBF-WENO Local Interpolation Technique. Mathematics and Computers in Simulation, 176: 4 – 24.
  4. Bustamante, C. A., Flórez, W. F., Power, H. and Giraldo, M. (2010). Control Volume-Radial Basis Function Method for Two-Dimensional Non-Linear Heat Conduction and Convection Problems. WIT Transactions on Modeling and Simulation, 50: 1 – 12.
  5. Chen, Y. and Cao, Y. (2021). A Coupled RBF Method for the Solution of Elastostatic Problems. Mathematics Problems in Engineering, 2021: 1 – 15.
  6. Chenoweth, M. E. and Sarra, S. (2009). A Numerical Study of Generalized Multiquadric Radial Basis Function Interpolation. SIAM Undergraduate Online, 2: 58 – 70.
  7. Fasshauer, G. E. (2007). Meshfree Approximation Methods with MATLAB. World Scientific Publishing, USA. 1st Edition. 489pp.
  8. Hardy, R. L. (1971). Multiquadric Equations of Topography and Other Irregular Surfaces. Journal of Geophysical Research, 76(8): 1905 – 1915.
  9. Iske, A. (2003). Radial Basis Functions: Basics, Advanced Topics and Meshfree Methods for Transport Problems. Rendiconti del Seminario Matematico, Università e Politecnico di Torino, 61(3): 247 – 286.
  10. Issa, K., Hambali, S. M. and Biazar, J. (2020). An Algorithm for Choosing Best Shape Parameter for Numerical Solution of Partial Differential Equation via Inverse Multiquadric Radial Basis Function. Open Journal of Mathematical Sciences, 4: 147 – 157.
  11. Kansa, E. J. (1990). Multiquadrics – A Scattered Data Approximation Scheme with Applications to Computational Fluid- Dynamics Solutions to Parabolic, Hyperbolic and Elliptic Partial Differential Equations. Computers and Mathematics with Applications, 19(8): 147 – 161.
  12. Kazem, S. and Hatam, A. (2017). A Modification on Strictly Positive Definite RBF-DQ Method Based on Matrix Decomposition. Engineering Analysis with Boundary Elements, 76: 90 – 98.
  13. Luga, T. and Alechenu, B. (2019). Numerical Solutions of Some Second Order Partial Differential Equations Using Non-Standard Exponent Values of the Generalized Multiquadric Radial Basis Function. IOSR Journal of Mathematics, 15(3): 38 – 46.
  14. Luga, T., Aboiyar, T. and Adee, S. O. (2019). Radial Basis Function Methods for Approximating the Two-Dimensional Heat Equation. International Journal of Engineering applied Sciences and Technology, 4(2): 7 – 15.
  15. Madych, W. R. and Nelson, S. A. (1988). Multivariate Interpolation and Conditionally Positive Definite Functions, Approx. Theory Appl. 4: 77–89.
  16. Madych, W. R. and Nelson, S. A. (1990). Multivariate Interpolation and Conditionally Positive Definite Functions II, Mathematics of Computation 54: 211–230.
  17. Meinguet, J. (1979). Multivariate Interpolation at Arbitrary Points Made Simple, Z. Angew. Math. Phys. 30: 292–304.
  18. Micchelli, C. A. (1984). Interpolation of scattered data: distance matrices and conditionally positive definite functions. In Approximation theory and spline functions, Springer, Dordrecht, 8: 143-145.
  19. Misra, R. K. and Kumar, S. (2013). Multiquadric Radial Basis Function Method for Boundary Value and Free Vibration Problems. Indian Journal of Industrial and Applied Mathematics, 4(1): 1 – 4.
  20. Rippa, S. (1999). An Algorithm for Selecting a Good Value for the Parameter c in Radial Basis Function Interpolation. Advances in Computational Mathematics, 11(2): 193 – 210.
  21. Steffensen, J. F. (2006). Interpolation. 2nd Edition. Dover Publications, Mineola, NY. ISBN: 978-0-486-15483.
