
Multivariate approximation by translates of the Korobov function on Smolyak grids


Abstract (preview)

- We also provide lower bounds for the optimal approximation with respect to the best choice of $\varphi$.
- Translates of the Korobov function.
- The d-dimensional torus, denoted by $\mathbb{T}^d$, is the cross product of $d$ copies of the interval $[0, 2\pi]$ with identification of the endpoints.
- We are interested in approximations of functions from the Korobov space $K^r_p(\mathbb{T}^d)$ by arbitrary linear combinations of $n$ arbitrary shifts of the Korobov function $\kappa_{r,d}$ defined below.
- … because, when $j = (j_l : l \in \mathbb{N}[d])$, we have that $\kappa_{r,d}(x)$ …
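For intuition about $\kappa_{r,d}$, the following is a minimal numerical sketch, under the assumption (not taken from the excerpt) that the univariate Korobov function has Fourier coefficients $\max(1,|k|)^{-r}$ and that the multivariate function is the tensor product of univariate factors; the paper's exact normalization may differ, and the truncation level `K` is an arbitrary illustrative choice.

```python
import numpy as np

def korobov_1d(x, r, K=200):
    """Truncated univariate Korobov function on [0, 2*pi].

    Sketch only: assumes Fourier coefficients max(1, |k|)^(-r), i.e.
    kappa_r(x) ~ 1 + 2 * sum_{k=1..K} k**(-r) * cos(k*x).
    """
    x = np.atleast_1d(np.asarray(x, dtype=float))
    k = np.arange(1, K + 1)
    return 1.0 + 2.0 * (k ** (-float(r))) @ np.cos(np.outer(k, x))

def korobov_nd(point, r, K=200):
    """Tensor-product sketch of the multivariate Korobov function:
    kappa_{r,d}(x) = prod_l kappa_r(x_l)."""
    return float(np.prod([korobov_1d(xl, r, K)[0] for xl in point]))

# Example: evaluate a translate kappa_{r,d}(x - y), as used in the approximation scheme.
x, y, r = np.array([0.3, 1.7]), np.array([0.1, 0.5]), 2.0
print(korobov_nd(x - y, r))
```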
- Remark 1.3 For $1 \le p < \infty$ and $r > 1/p$, we have the embedding $K^r_p(\mathbb{T}^d) \hookrightarrow$ …
- See the proof of the embedding $K^r_p(\mathbb{T}$ …
- Specifically, if we denote the one-periodic extension of the Bernoulli polynomial by $\bar B_n$, then for $t \in \mathbb{T}$ we have that …
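The truncated statement above presumably invokes the classical Fourier expansion of the one-periodic Bernoulli function; for reference, the standard identity (whose normalization and periodization convention may differ from the paper's) is

$$
\bar B_n(t) \;=\; -\,\frac{n!}{(2\pi i)^n}\sum_{k\in\mathbb{Z}\setminus\{0\}}\frac{e^{2\pi i k t}}{k^{n}},\qquad n\ge 2,
$$

which explains why kernels with polynomially decaying Fourier coefficients, such as $\kappa_r$, can be expressed through Bernoulli-type functions.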
- This means that, for every function $f \in K^r_2(\mathbb{T}^d)$ and $x \in \mathbb{T}^d$, we have that …
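Since $\kappa_{r,d}$ is referred to below as the reproducing kernel of $K^r_2(\mathbb{T}^d)$ (for $r > 1/2$), the truncated identity is presumably the reproducing property; in the usual normalization it reads

$$
f(x) \;=\; \big\langle f,\ \kappa_{r,d}(x-\cdot)\big\rangle_{K^r_2(\mathbb{T}^d)},\qquad f\in K^r_2(\mathbb{T}^d),\ x\in\mathbb{T}^d .
$$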
- The linear span of the set of functions $\{\kappa_{r,d}$ …
- This unresolved question is the main motivation of this paper, and we begin to address it in the context of the Korobov space $K^r_2(\mathbb{T}^d)$ for $r > 1/2$ by linear combinations of $n$ translates of the reproducing kernel, namely, $\kappa_{r,d}$ …
- … $1$, because the linear span of the set of functions $\{\kappa_{r,d}$ …
- We are interested in the approximation in the $L_p(\mathbb{T}^d)$-norm of all functions $f \in W$ by arbitrary linear combinations of $n$ translates of the function $\varphi$, that is, the functions $\varphi(\cdot - y_l)$, $y_l \in \mathbb{T}^d$, and we measure the error in terms of the quantity sketched below.
- The aim of the present paper is to investigate the convergence rate, as $n \to \infty$, of this quantity.
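From the surrounding fragments (arbitrary linear combinations of $n$ translates of $\varphi$, error measured in $L_p(\mathbb{T}^d)$), the quantity in question is presumably of the following form; the symbol $M_n$ used here is a placeholder, not necessarily the paper's notation:

$$
M_n(W,\varphi)_p \;=\; \sup_{f\in W}\ \inf\Big\{\big\|f-\sum_{l\in\mathbb{N}[n]}c_l\,\varphi(\cdot-y_l)\big\|_p\ :\ c_l\in\mathbb{R},\ y_l\in\mathbb{T}^d\Big\}.
$$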
- We write $a_l \ll b_l$ provided there is a positive constant $c$ such that for all $l \in \mathbb{N}$ we have $a_l \le c\, b_l$.
- Theorem 1.5 If $1 < p < \infty$ and $r > 1/2$, we have that …
- In Section 2, we give the necessary background from Fourier analysis, construct methods for approximating functions from the univariate Korobov space $K^r_p(\mathbb{T})$ by linear combinations of translates of the Korobov function $\kappa_r$, and prove an upper bound for the approximation error.
- Finally, in Section 4, we provide the proof of Theorem 1.5.
- Lemma 2.1 If $1 < p < \infty$ and $r > 0$, then there exists a positive constant $c$ such that for any $m \in \mathbb{N}$, $f \in K^r_p(\mathbb{T})$ and $g \in L_p(\mathbb{T}^d)$ we have
- $\|f - S_m(f)\|_p \le c\, m^{-r} \|f\|_{K^r_p(\mathbb{T})}$ (2.1) and …
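A hedged illustration of the operator $S_m$ appearing in (2.1), assuming it is the standard $m$-th Fourier partial sum $S_m(f)=\sum_{|k|\le m}\hat f(k)e^{ikx}$ (the paper's definition is not visible in this excerpt); the helper below works on grid samples and is purely illustrative.

```python
import numpy as np

def fourier_projection(values, m):
    """m-th Fourier partial sum S_m(f), evaluated on the same uniform grid.

    `values` are samples of f on an equispaced grid over [0, 2*pi).
    Sketch only: discards all discrete Fourier coefficients with |k| > m.
    """
    n = len(values)
    coeffs = np.fft.fft(values) / n           # discrete Fourier coefficients
    freqs = np.fft.fftfreq(n, d=1.0 / n)      # integer frequencies
    coeffs[np.abs(freqs) > m] = 0.0           # keep only |k| <= m
    return np.fft.ifft(coeffs) * n            # back to grid values

# Example: project samples of a smooth periodic function onto degree <= 5.
t = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
f = np.exp(np.cos(t))
print(np.max(np.abs(f - fourier_projection(f, 5).real)))
```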
- The main purpose of this section is to introduce a linear operator, denoted by $Q_m$, which is constructed from the $m$-th Fourier projection and a prescribed translate of the Korobov function $\kappa_r$, and which is needed for the proof of Theorem 1.5.
- Theorem 2.3 If $1 < p < \infty$ and $r > 1$, we have that …
- The idea in the proof of Theorem 2.3 is to use Lemma 2.1 and study the function defined as $F_m$ …
- Therefore, the proof of Theorem 2.3 hinges on obtaining an estimate for the $L_p(\mathbb{T})$-norm of the function $F_m$.
- That is, we have that $\mathcal{T}_m$ …
- Lemma 2.4 If $m, n, s \in \mathbb{N}$ are such that $m + n < s$ …
- Lemma 2.4 is especially useful to us as it gives a convenient representation for the function $F_m$.
- For the confirmation of (2.6) we use the fact that $S_m$ is a projection onto $\mathcal{T}_m$, so that $S_m(\kappa_r * g)$ …
- Now, we use Lemma 2.4 with $f_1 = S_m(g)$, $f_2 = S_m(\kappa_r)$ and $s = 2m + 1$ to confirm both (2.5) and (2.6).
- Lemma 2.5 If $1 < p < \infty$, then there exist positive constants $c$ and $c'$ such that for any $m \in \mathbb{N}$ and $f \in \mathcal{T}_m$ the following inequalities hold …
- Remark 2.6 Lemma 2.5 appears as Theorem 7.5 on page 28 of Volume II of [28].
- We also remark that in the case $p = 2$ the constants appearing in Lemma 2.5 are both one.
- Indeed, for any $f \in \mathcal{T}_m$ we have the equation sketched below.
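The truncated equation is presumably the $p = 2$ case of Lemma 2.5, where it becomes an exact identity because equispaced quadrature with $2m+1$ nodes integrates trigonometric polynomials of degree at most $2m$ exactly; with the normalization $\|f\|_2^2 = (2\pi)^{-1}\int_0^{2\pi}|f(t)|^2\,dt$ it reads

$$
\|f\|_2^2 \;=\; \frac{1}{2m+1}\sum_{l=0}^{2m}\big|f(t_l)\big|^2,\qquad t_l=\frac{2\pi l}{2m+1},\quad f\in\mathcal{T}_m .
$$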
- Lemma 2.7 If $1 < p < \infty$ …, then the function $\sum_{j\in\mathbb{Z}} a_j \hat f(j)\chi_j$ belongs to $L_p(\mathbb{T})$ and, moreover, we have that …
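For reference, the Marcinkiewicz multiplier theorem on $\mathbb{T}$, which Remark 2.9 below identifies with Lemma 2.7, is usually stated as follows (the paper's exact hypotheses on the multiplier sequence are not visible in this excerpt): if $1 < p < \infty$ and the sequence $(a_j)_{j\in\mathbb{Z}}$ satisfies

$$
\sup_{j}|a_j|\le M \qquad\text{and}\qquad \sum_{2^{k}\le |j|<2^{k+1}}|a_{j+1}-a_j|\le M\ \ \text{for all } k\ge 0,
$$

then $\big\|\sum_{j}a_j\hat f(j)\chi_j\big\|_p\le C_p\,M\,\|f\|_p$ for every $f\in L_p(\mathbb{T})$.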
- Moreover, according to equation (2.8), we have for every $j \in \mathbb{Z}$ that …
- On the other hand, by definition we have $S_m(g)$ …
- Therefore, using equations (2.9) and (2.10) we can compute …
- By using (2.10) and (2.11) we split the function $G_{m,j}$ into two functions as follows …
- Now, we shall use Lemma 2.7 to estimate $\|G^+_{m,j}\|_p$ and $\|G^-_{m,j}\|_p$.
- For fixed values of $j$ and $m$, we observe that the components of the vector $a$ are decreasing with respect to $|l|$ and, moreover, it is readily seen that $a_0 \le |jm|^{-r}$.
- Indeed, we successively use the Hölder inequality, the upper bound in Lemma 2.5 applied to the function $S_m(g)$, and the inequality (2.2) to obtain the desired result.
- Remark 2.9 The restrictions $1 < p < \infty$ and $r > 1$ in Theorem 2.3 are necessary for applying the Marcinkiewicz multiplier theorem (Lemma 2.7) and for deriving the upper bound of $\|F_m\|_p$.
- From the definition of the function $F_m$ we conclude that …
- … and (2.7) we finally get that $\|F_m\|_2^2 \ll m^{-2r}\|g\|_2^2 = m^{-2r}\|f\|^2_{K^r_2}$.
- This definition is adequate since the operators $Q_{m_l}$ and $Q_{m_{l'}}$ commute for different $l$ and $l'$.
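The multivariate operator is presumably built coordinate-wise from the univariate operators; under that reading (an assumption, since the defining display is not visible here),

$$
Q_m \;=\; Q_{m_1}\circ Q_{m_2}\circ\cdots\circ Q_{m_d},\qquad m=(m_l : l\in\mathbb{N}[d]),
$$

where $Q_{m_l}$ acts in the $l$-th variable only; the commutativity noted above makes the order of composition immaterial.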
- Lemma 3.1 If $1 < p < \infty$ and $r > 1$, or $r > 1/2$ when $p = 2$, $d$ is a positive integer and $V$ is a nonempty subset of $\mathbb{N}[d]$, then there exists a positive constant $c$ such that for any $m = (m_j : j \in \mathbb{N}[d])$ and $f \in K^r_p(\mathbb{T}^d)$ we have that …
- Therefore, by Theorems 2.3 and 2.10 there is a positive constant $c$ such that the integral operator $W_{r,m} : L_p(\mathbb{T}$ …
- Thus, we have $x = (x_1, y)$, $y \in \mathbb{T}^{d-1}$, and also $t = (t_1, v)$, $v \in \mathbb{T}^{d-1}$.
- By Theorems 2.3 and 2.10 we are assured that there is a positive constant $c_1$ such that for all $y \in \mathbb{T}^{d-1}$ we have that …
- We must bound the integral appearing on the right-hand side of the above inequality (3.3).
- Hence, in either case, the Hölder inequality ensures there is a positive constant $c_2$ such that for all $x, t \in \mathbb{T}^d$ we have that …
- The remainder of the proof proceeds in a similar way.
- We require a convenient representation for the function appearing on the left-hand side of inequality (3.1).
- This is the essential formula that yields the proof of the lemma.
- We merely use the operator inequality (3.2) and the method followed above for the case $s = 1$ to complete the proof.
- The proof of the next lemma follows directly from Lemma 3.2 and equation (3.1).
- Lemma 3.2 If $1 < p$ … $m = (m_j : j \in \mathbb{N}[d])$, we have that …
- Lemma 3.3 If $1 < p$ …
- In the next result, we provide a decomposition of the space $K^r_p(\mathbb{T}^d)$ …
- We say that $l \prec k$ provided that for each $j \in \mathbb{N}[d]$ we have $l_j \le k_j$.
- Theorem 3.4 If $1 < p$ …, we have that $\|R_k(f)\|_p \le c\, 2^{-r|k|}\, \|f\|_{K^r_p}$ …
- According to equation (3.5), we have for any $f \in K^r_p(\mathbb{T}^d)$ that $Q_{2^k}(f)$ …
- … then by Lemma 3.2 we see that the left-hand side of equation (3.7) converges to $f$ in the $L_p(\mathbb{T}^d)$-norm.
- … $\prod_{l\in\mathbb{N}[d]} R_{k_l}$, and since for $l \in \mathbb{N}[d]$ we have that $R_{k_l} = T_{k_l-1} - T_{k_l}$, we obtain that $R_k = \sum$ …
- … and so by Lemma 3.3 there exists a positive constant $c$ such that for every $f \in K^r_p(\mathbb{T}^d)$ and $k \in \mathbb{Z}^d$ …
- Theorem 3.5 If $1 < p$ …
- From Theorem 3.4 we deduce that there exists a positive constant $c$ (perhaps different from the constant appearing in Theorem 3.4) such that for every $f \in K^r_p(\mathbb{T}^d)$ and $k \in \mathbb{Z}^d$ …
- … we have that $\|f - P_m(f)\|_p = \big\|\sum$ …
- Moreover, by the definition of the linear operator $P_m$ given in equation (3.8), we see that the range of $P_m$ is contained in the subspace …
- Theorem 4.1 If $1 < p$ …
- … $n$, then $n \asymp 2^m m^{d-1}$ and by Theorem 3.5 we have that …
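The relation $n \asymp 2^m m^{d-1}$ is the familiar cardinality of a dyadic Smolyak-type construction. As a hedged sketch (assuming each block $k$ with $|k|_1 \le m$ contributes on the order of $2^{|k|_1}$ points; the paper's exact grid may differ), the growth order can be checked numerically:

```python
from itertools import product

def smolyak_size(m, d):
    """Rough cardinality of a dyadic Smolyak grid: sum of 2^{|k|_1} over |k|_1 <= m.

    Sketch only: assumes the block indexed by k contributes about
    2^(k_1 + ... + k_d) points; the exact construction may differ,
    but the growth order 2^m * m^(d-1) is the same.
    """
    return sum(2 ** sum(k) for k in product(range(m + 1), repeat=d) if sum(k) <= m)

# The ratio n / (2^m * m^(d-1)) stabilizes as m grows, for each fixed d.
for d in (2, 3):
    for m in (6, 10, 14):
        n = smolyak_size(m, d)
        print(d, m, n, round(n / (2 ** m * m ** (d - 1)), 3))
```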
- Next, we prepare for the proof of the lower bound of (1.3) in Theorem 1.5.
- We choose a polynomial vector field $p : \mathbb{R}^l \to \mathbb{R}^m$ such that each component of the vector field $p$ is in $P_q(\mathbb{R}^l)$ …
- That is, we have that …
- For a proof of the following lemma see [17].
- Theorem 4.4 If $r > 1/2$, then we have that …
- … we have that $|H(a$ …
- To apply Lemma 4.2, we choose, for any $n \in \mathbb{N}$, $q = \lfloor n(\log n)^{-d+2}\rfloor$ …
- We choose any $\varphi \in L_2(\mathbb{T}^d)$ and let $v$ be any function formed as a linear combination of $n$ translates of the function $\varphi$.
- Thus, for some real constants $c_j \in \mathbb{R}$ and vectors $y_j \in \mathbb{T}^d$, $j \in \mathbb{N}[n]$, we have the representation sketched below.
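The truncated display is presumably the generic form of such a combination, matching the expression used later in the proof:

$$
v \;=\; \sum_{j\in\mathbb{N}[n]} c_j\,\varphi(\cdot-y_j),\qquad c_j\in\mathbb{R},\ \ y_j\in\mathbb{T}^d .
$$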
- We now introduce a polynomial manifold so that we can use Lemma 4.2 to get a lower bound for the expressions on the left-hand side of inequality (4.4).
- Therefore, by Lemma 4.2 and (4.3) we conclude there is a vector $t^0 = (t^0_k : k \in H(q$ …
- … for which there is a positive constant $c$ such that for every $v$ of the form $v = \sum_{j\in\mathbb{N}[n]} c_j\,\varphi(\cdot - y_j)$ we have that …