
Tensor N-tubal rank and its convex relaxation for low-rank tensor recovery


Abstract

- Keywords: low-rank tensor recovery (LRTR); mode-$k_1k_2$ tensor unfolding; tensor N-tubal rank.
- To tackle these two issues, we define a new tensor unfolding operator, named mode-$k_1k_2$ tensor unfolding, as the process of lexicographically stacking all mode-$k_1k_2$ slices of an N-way tensor into a three-way tensor; this is a three-way extension of the well-known mode-$k$ tensor matricization.
- On this basis, we define a novel tensor rank, named the tensor N-tubal rank, as a vector consisting of the tubal ranks of all mode-$k_1k_2$ unfolding tensors, to depict the correlations along different modes.
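As a concrete illustration, here is a minimal NumPy sketch of the mode-$k_1k_2$ unfolding, assuming zero-based mode indices and that the lexicographic ordering of the remaining modes corresponds to NumPy's default row-major reshape (an assumption, since the paper's exact ordering convention is not reproduced here):

```python
import numpy as np

def unfold_k1k2(X, k1, k2):
    """Mode-k1k2 unfolding: stack all mode-(k1, k2) slices of an N-way
    tensor into a 3-way tensor of shape (n_k1, n_k2, prod of the rest).
    Zero-based mode indices with k1 < k2."""
    rest = [i for i in range(X.ndim) if i not in (k1, k2)]
    Y = np.transpose(X, [k1, k2] + rest)   # bring modes k1 and k2 to the front
    return Y.reshape(X.shape[k1], X.shape[k2], -1)
```

For example, `unfold_k1k2(np.zeros((4, 5, 6, 7)), 0, 2)` returns a three-way tensor of shape `(4, 6, 35)`.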
- To efficiently minimize the proposed N-tubal rank, we establish its convex relaxation: the weighted sum of the tensor nuclear norm (WSTNN).
- The key to tensor recovery is to explore the redundancy prior of the underlying tensor, which is usually formulated as low-rankness.
- A key issue in LRTR is the definition of the tensor rank.
- However, unlike the matrix rank, the definition of the tensor rank is not unique.
- The CP rank and the Tucker rank are the two most typical definitions of the tensor rank.
- Although the measure of the CP rank is consistent with that of the matrix rank, it is difficult to establish a tractable relaxation for it.
- The Tucker rank is defined as a vector whose $k$-th element is the rank of the mode-$k$ unfolding matrix [20], i.e., $\operatorname{rank}_{\mathrm{T}}(\mathcal{X}) = \big(\operatorname{rank}(\mathbf{X}_{(1)}), \operatorname{rank}(\mathbf{X}_{(2)}), \ldots, \operatorname{rank}(\mathbf{X}_{(N)})\big)$.
- Thus, the sum of nuclear norms (SNN), the convex surrogate of the Tucker rank, faces difficulty in preserving the intrinsic structure of the tensor.
- As a generalization of the matrix singular value decomposition (SVD), the t-SVD regards a three-way tensor $\mathcal{X}$ as a matrix whose elements are tubes (mode-3 fibers) and decomposes it as $\mathcal{X} = \mathcal{U} * \mathcal{S} * \mathcal{V}^{T}$, where $*$ denotes the t-product, $\mathcal{U}$ and $\mathcal{V}$ are orthogonal tensors, and $\mathcal{S}$ is f-diagonal.
- Specifically, the tensor tubal rank is equal to the maximum value of the tensor multi-rank.
- However, most real-world data have different correlations along different modes; e.g., the correlation of an HSI along its spectral mode is usually much stronger than those along its spatial modes.
- To apply the t-SVD to N-way tensors ($N \ge 3$), in this paper we define a three-way extension of the tensor matricization operator, named mode-$k_1k_2$ tensor unfolding ($k_1 < k_2$).
- To characterize the correlations along different modes in a more flexible manner, we propose a new tensor rank, named the tensor N-tubal rank, a vector consisting of the tubal ranks of all mode-$k_1k_2$ unfolding tensors, i.e., $\operatorname{rank}_{Nt}(\mathcal{X}) = \big(\operatorname{rank}_t(\mathcal{X}_{(12)}), \operatorname{rank}_t(\mathcal{X}_{(13)}), \ldots, \operatorname{rank}_t(\mathcal{X}_{((N-1)N)})\big)$.
- Table 1 compares the Tucker rank and the N-tubal rank of two HSIs.
- To efficiently minimize the proposed tensor N-tubal rank, we establish its convex relaxation: the weighted sum of the tensor nuclear norm (WSTNN), expressed as the weighted sum of the TNN of each mode-$k_1k_2$ unfolding tensor, i.e., $\|\mathcal{X}\|_{\mathrm{WSTNN}} = \sum_{1 \le k_1 < k_2 \le N} \alpha_{k_1k_2} \|\mathcal{X}_{(k_1k_2)}\|_{\mathrm{TNN}}$.
- Numerous numerical experiments on synthetic and real-world data are conducted to illustrate the effectiveness and efficiency of the proposed methods.
- Section 3 gives the definitions of the tensor N-tubal rank and its convex surrogate, the WSTNN.
- Section 5 evaluates the performance of the proposed models and compares the results with those of state-of-the-art competing methods.
- Footnote 1: The rank is approximated by the number of singular values larger than 1% of the largest one.
- The conjugate transpose of a three-way tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, denoted $\mathcal{X}^{T}$, is the tensor obtained by conjugate transposing each of the frontal slices and then reversing the order of the transposed frontal slices $2$ through $n_3$.
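A small NumPy sketch of this conjugate-transpose operation, following the definition just quoted:

```python
import numpy as np

def tensor_conj_transpose(X):
    """Conjugate-transpose each frontal slice, then keep slice 1 in place
    and reverse the order of the transposed slices 2 through n3."""
    Y = np.conj(np.transpose(X, (1, 0, 2)))
    return np.concatenate([Y[:, :, :1], Y[:, :, -1:0:-1]], axis=2)
```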
- (Table 1 columns: Data, Size, Tucker rank, N-tubal rank.)
- Now, we give the definitions of the tensor multi-rank and tubal rank.
- The tensor multi-rank of $\mathcal{X}$ is a vector $\operatorname{rank}_m(\mathcal{X}) \in \mathbb{R}^{n_3}$ whose $i$-th element is the rank of the $i$-th frontal slice of $\bar{\mathcal{X}}$, where $\bar{\mathcal{X}} = \operatorname{fft}(\mathcal{X}, [\,], 3)$.
- The tensor nuclear norm of a tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, denoted $\|\mathcal{X}\|_{\mathrm{TNN}}$, is defined as the sum of the singular values of all the frontal slices of $\bar{\mathcal{X}}$, i.e., $\|\mathcal{X}\|_{\mathrm{TNN}} = \sum_{i=1}^{n_3} \|\bar{\mathbf{X}}^{(i)}\|_{*}$.
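Putting the last two definitions together, a minimal sketch that computes the multi-rank, tubal rank, and TNN of a three-way tensor in the Fourier domain (note that some TNN variants scale the sum by $1/n_3$; the plain sum is used here, following the definition above, and the rank tolerance `tol` is an assumption):

```python
import numpy as np

def multi_rank_and_tnn(X, tol=1e-8):
    """Multi-rank (ranks of the frontal slices of fft(X, axis=2)),
    tubal rank (their maximum), and TNN (sum of all singular values)."""
    Xf = np.fft.fft(X, axis=2)
    multi_rank, tnn = [], 0.0
    for i in range(X.shape[2]):
        s = np.linalg.svd(Xf[:, :, i], compute_uv=False)  # descending order
        multi_rank.append(int(np.sum(s > tol * s[0])))
        tnn += s.sum()
    return multi_rank, max(multi_rank), tnn
```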
- Tensor N-tubal rank and convex relaxation.
- In this section, we first propose the mode-$k_1k_2$ tensor unfolding operation and then give the definitions of the tensor N-tubal rank and its convex relaxation, the WSTNN.
- (Figure: illustration of the t-SVD of an $n_1 \times n_2 \times n_3$ tensor.)
- The frontal slices of $\mathcal{X}_{(k_1k_2)}$ are the lexicographically ordered mode-$k_1k_2$ slices of $\mathcal{X}$.
- The N-tubal rank is a vector in $\mathbb{R}^{N(N-1)/2}$. Clearly, for a three-way tensor, the tensor tubal rank is the first element of the tensor N-tubal rank.
- Specifically, the proposed N-tubal rank combines the advantages of the Tucker rank and tubal rank.
- On the one hand, compared with the mode-$k_1$ unfolding matrix, the mode-$k_1k_2$ unfolding tensor avoids destroying the structural information along the $k_2$-th mode.
- On the other hand, as shown in Fig. 2, the tubal rank of each mode-$k_1k_2$ unfolding (permutation) tensor $\mathcal{X}_{(k_1k_2)}$ more directly depicts the correlation between the $k_1$-th and $k_2$-th modes, although it does not directly characterize the correlations along the other modes.
- The following theorem reveals the relationship between the tensor N-tubal rank and the Tucker rank.
- (Figure: illustration of the low N-tubal rank prior of an HSI; panel (e) shows the singular value curves of the first frontal slices of $\mathcal{X}_{(k_1k_2)}$.)
- Then, each element of the N-tubal rank is bounded by the Tucker rank along the corresponding modes, i.e., $\operatorname{rank}_t\big(\mathcal{X}_{(k_1k_2)}\big) \le \min\big(\operatorname{rank}(\mathbf{X}_{(k_1)}), \operatorname{rank}(\mathbf{X}_{(k_2)})\big)$ for all $1 \le k_1 < k_2 \le N$.
- Then, the N-tubal rank of $\mathcal{X}$ is at most $r \cdot \mathbf{1}_{N(N-1)/2}$, i.e., every element is at most $r$.
- The WSTNN of an N-way tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_N}$, denoted $\|\mathcal{X}\|_{\mathrm{WSTNN}}$, is defined as the weighted sum of the TNN of each mode-$k_1k_2$ unfolding tensor, i.e., $\|\mathcal{X}\|_{\mathrm{WSTNN}} = \sum_{1 \le k_1 < k_2 \le N} \alpha_{k_1k_2} \|\mathcal{X}_{(k_1k_2)}\|_{\mathrm{TNN}}$.
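Combining the earlier sketches (`unfold_k1k2` and `multi_rank_and_tnn`), the WSTNN can be evaluated directly from its definition; here `alpha` maps zero-based index pairs to weights:

```python
def wstnn(X, alpha):
    """WSTNN: weighted sum of the TNN of every mode-k1k2 unfolding.
    alpha maps pairs (k1, k2) with k1 < k2 to nonnegative weights."""
    total = 0.0
    for (k1, k2), a in alpha.items():
        _, _, tnn = multi_rank_and_tnn(unfold_k1k2(X, k1, k2))
        total += a * tnn
    return total
```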
- For the choice of the weights $\alpha_{k_1k_2}$, we consider the following three cases.
- Case 1: The tensor N-tubal rank of the underlying tensor is unknown and cannot be estimated empirically, such as the case of MRI data.
- Case 2: The tensor N-tubal rank of the underlying tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_N}$ is known, i.e., $\operatorname{rank}_{Nt}(\mathcal{X}) = (r_{12}, r_{13}, \ldots, r_{(N-1)N})$.
- Since $\alpha_{k_1k_2}$ stands for the contribution of the TNN of the mode-$k_1k_2$ unfolding tensor $\mathcal{X}_{(k_1k_2)}$, its value should depend on the tubal rank $r_{k_1k_2}$ of $\mathcal{X}_{(k_1k_2)}$ and on the sizes $n_{k_1}$ and $n_{k_2}$ of its first two modes.
- This implies that the value of the first element of the N-tubal rank should be much larger than the values of its second and third elements.
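As a hedged sketch of these two cases (the paper's exact weighting formulas are not reproduced here; the uniform default and the rank-aware heuristic below are illustrative assumptions only):

```python
def uniform_weights(N):
    # Case 1 (rank unknown): spread the weight evenly over all
    # N(N-1)/2 mode-k1k2 unfoldings.
    m = N * (N - 1) // 2
    return {(k1, k2): 1.0 / m
            for k1 in range(N) for k2 in range(k1 + 1, N)}

def rank_aware_weights(ranks, shape):
    # Case 2 (rank known): a hypothetical heuristic that gives a larger
    # weight to unfoldings whose tubal rank r_{k1k2} is small relative
    # to min(n_k1, n_k2), i.e., to the more strongly low-rank unfoldings,
    # normalized so the weights sum to one. NOT the paper's exact formula.
    raw = {p: min(shape[p[0]], shape[p[1]]) / max(ranks[p], 1) for p in ranks}
    total = sum(raw.values())
    return {p: v / total for p, v in raw.items()}
```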
- Considering an N-way tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_N}$, the proposed WSTNN-based LRTC model is formulated as $\min_{\mathcal{X}} \sum_{1 \le k_1 < k_2 \le N} \alpha_{k_1k_2} \|\mathcal{X}_{(k_1k_2)}\|_{\mathrm{TNN}}$ subject to $\mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{O})$, where $\mathcal{O}$ is the observed tensor and $\Omega$ is the index set of observed entries.
- Algorithm 2: ADMM-based optimization algorithm for the proposed WSTNN-based LRTC model (12).
- Within the framework of the ADMM, auxiliary variables $\mathcal{Y}_{k_1k_2}$ are introduced to decouple the TNN terms of the individual unfoldings.
- The pseudocode of the developed algorithm is described in Algorithm 2.
- We analyse the computational complexity of the developed algorithm, which involves three subproblems: the $\mathcal{Y}_{k_1k_2}$ subproblems, the $\mathcal{X}$ subproblem, and the multiplier updates.
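For intuition, here is a compact Python sketch of such an ADMM loop; it is a simplified reading of Algorithm 2 with a single fixed penalty `beta` and a fixed iteration count (both assumptions), and it reuses `unfold_k1k2` from the earlier sketch:

```python
import numpy as np

def t_svt(Y, tau):
    """Tensor singular value thresholding: soft-threshold the singular
    values of every frontal slice in the Fourier domain."""
    Yf = np.fft.fft(Y, axis=2)
    for i in range(Y.shape[2]):
        U, s, Vh = np.linalg.svd(Yf[:, :, i], full_matrices=False)
        Yf[:, :, i] = (U * np.maximum(s - tau, 0.0)) @ Vh
    return np.real(np.fft.ifft(Yf, axis=2))

def fold_k1k2(Y3, k1, k2, shape):
    """Inverse of unfold_k1k2 (assumes the same slice ordering)."""
    rest = [i for i in range(len(shape)) if i not in (k1, k2)]
    perm = [k1, k2] + rest
    return np.transpose(Y3.reshape([shape[i] for i in perm]), np.argsort(perm))

def wstnn_lrtc(O, mask, alpha, beta=1e-2, iters=100):
    """ADMM sketch for the WSTNN-based LRTC model: O is the observed
    tensor, mask a boolean array marking the observed entries."""
    X = O * mask
    Y = {p: X.copy() for p in alpha}            # auxiliary tensors Y_{k1k2}
    M = {p: np.zeros_like(X) for p in alpha}    # Lagrange multipliers
    for _ in range(iters):
        for (k1, k2) in alpha:                  # Y subproblems: t-SVT
            T = unfold_k1k2(X - M[(k1, k2)] / beta, k1, k2)
            Y[(k1, k2)] = fold_k1k2(t_svt(T, alpha[(k1, k2)] / beta),
                                    k1, k2, X.shape)
        # X subproblem: average the auxiliary variables, then
        # re-impose the observed entries
        X = sum(Y[p] + M[p] / beta for p in alpha) / len(alpha)
        X[mask] = O[mask]
        for p in alpha:                         # multiplier updates
            M[p] += beta * (Y[p] - X)
    return X
```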
- Considering an N-way tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_N}$, the proposed WSTNN-based TRPCA model can be formulated as $\min_{\mathcal{L},\,\mathcal{E}} \sum_{1 \le k_1 < k_2 \le N} \alpha_{k_1k_2} \|\mathcal{L}_{(k_1k_2)}\|_{\mathrm{TNN}} + \lambda \|\mathcal{E}\|_{1}$ subject to $\mathcal{X} = \mathcal{L} + \mathcal{E}$, where $\mathcal{L}$ and $\mathcal{E}$ are the low-rank and sparse components, respectively.
- Algorithm 3: ADMM-based optimization algorithm for the proposed WSTNN-based TRPCA model (22).
- The pseudocode for solving the WSTNN-based TRPCA model (22) is described in Algorithm 3.
- We analyse the detailed computational complexity of the developed algorithm, which involves five subproblems: the $\mathcal{Z}_{k_1k_2}$ subproblems, the $\mathcal{L}$ subproblem, the $\mathcal{E}$ subproblem, the $\mathcal{P}_{k_1k_2}$ subproblems, and the $\mathcal{M}$ subproblem.
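Of these, the $\mathcal{E}$ subproblem has the standard closed-form solution of an $\ell_1$-regularized proximal step, namely elementwise soft-thresholding; a minimal sketch:

```python
import numpy as np

def soft_threshold(T, tau):
    """Elementwise soft-thresholding, the proximal operator of tau*||.||_1."""
    return np.sign(T) * np.maximum(np.abs(T) - tau, 0.0)
```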
- We evaluate the performance of the proposed WSTNN-based LRTC and TRPCA methods.
- We employ the peak signal-to-noise ratio (PSNR), the structural similarity (SSIM) [33], and the feature similarity (FSIM) [41] to measure the quality of the recovered results.
- The compared LRTC methods are as follows: HaLRTC [24] and LRTC-TVI [23], representing the state of the art for Tucker-decomposition-based methods;
- BCPF [44], representing the state of the art for CP-decomposition-based methods;
- and logDet [14], TNN [43], PSTNN [16], and t-TNN [12], representing the state of the art for t-SVD-based methods.
- Table 2 shows the parameter settings for the proposed WSTNN-based LRTC method on different data.
- The tested synthetic tensors are sums of $r$ rank-one tensors, each generated by taking the vector outer product of $N$ ($N = 3$ or $4$) random vectors.
- In practice, the data in each test are regenerated and confirmed to meet the conditions of Theorem 3, i.e., the N-tubal rank is $r \cdot \mathbf{1}_{N(N-1)/2}$.
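A sketch of this synthetic construction (Gaussian factor vectors are an assumption; the text only states that the vectors are random):

```python
import numpy as np

def sum_of_rank_one(shape, r, seed=0):
    """Sum of r rank-one tensors, each the outer product of N random vectors."""
    rng = np.random.default_rng(seed)
    X = np.zeros(shape)
    for _ in range(r):
        term = np.array(1.0)
        for n in shape:                       # build the N-way outer product
            term = np.multiply.outer(term, rng.standard_normal(n))
        X += term
    return X
```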
- We define the success rate as the ratio of the number of successful trials to the total number of trials, where a trial is successful if the relative square error of the recovered tensor $\hat{\mathcal{X}}$ with respect to the ground-truth tensor $\mathcal{X}$, i.e., $\|\hat{\mathcal{X}} - \mathcal{X}\|_F^2 / \|\mathcal{X}\|_F^2$, is less than $10^{-3}$.
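The success criterion is then a one-liner:

```python
import numpy as np

def rse(X_hat, X):
    """Relative square error ||X_hat - X||_F^2 / ||X||_F^2; a trial
    counts as a success when this value is below 1e-3."""
    return np.linalg.norm(X_hat - X) ** 2 / np.linalg.norm(X) ** 2
```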
- We test data with different N-tubal ranks and sampling rates (SRs), where the SR is defined as the proportion of known elements.
- Footnote 3: The code for the WSTNN-based LRTC and TRPCA methods is available at https://yubangzheng.github.io/.
- Table 3 lists the mean values of the PSNR, SSIM, and FSIM for all 32 MSIs recovered by different LRTC methods.
- Table 4 lists the values of the PSNR, SSIM, and FSIM of the tested MRI recovered by the different LRTC methods.
- Table 5 lists the values of the PSNR, SSIM, and FSIM of the tested color video (CV) recovered by different LRTC methods.
- (Figure: the left two panels show the results of the TNN-based LRTC method [43] and the proposed WSTNN-based LRTC method on three-way tensors; the right two panels show the corresponding results on four-way tensors.)
- Parameter settings of the proposed WSTNN-based LRTC method on different data.
- As observed, the proposed method performs better overall than the compared methods with respect to all evaluation indices.
- Table 6 lists the values of the PSNR, SSIM, and FSIM of the tested hyperspectral video (HSV) recovered by different LRTC methods.
- The completion results of the MRI data with SR = 20%.
- In this section, we evaluate the performance of the proposed WSTNN-based TRPCA method using synthetic data and HSI denoising.
- Table 8 lists the PSNR, SSIM, and FSIM values of the tested HSIs recovered by different methods.
- The completion results at the 49th frame of the CV news with SR = 10%.
- As observed, our WSTNN-based TRPCA method achieves the best visual results among the three compared methods in terms of both noise removal and detail preservation.
- In this section, we discuss the effects of the threshold parameter s and the convergence of the proposed ADMM in the proposed LRTC and TRPCA problems.
- Effects of the threshold parameter.
- (Figure: the left two panels show the results of the TNN-based TRPCA method [25] and the proposed WSTNN-based TRPCA method on three-way tensors; the right two panels show the corresponding results on four-way tensors.)
- Parameter settings of the proposed WSTNN-based TRPCA method on different data.
- Otherwise, the singular values obtained after performing the t-SVT (Theorem 4) contain corrupted information, which is inconsistent with the low-rank prior of the underlying tensor.
- Owing to the use of the ADMM framework and the convexity of the objective functions, the convergence of the two developed algorithms is theoretically guaranteed.
- To illustrate the effectiveness of the proposed N-tubal rank and WSTNN, we applied the WSTNN to two typical LRTR problems, i.e., the LRTC and TRPCA problems, and proposed the WSTNN-based LRTC and TRPCA models.
- The numerical results demonstrated that the proposed method effectively exploits the correlations along all modes while preserving the intrinsic structure of the underlying tensor.
- One challenging example is the missing slice problem, which usually results in observed data with a lower N-tubal rank than that of the original data.
- Second, we plan to establish some nonconvex relaxations to further improve the performance of the proposed method.
- The denoising results of the HSIs Washington DC Mall and Pavia University with NL = 0.4.
- The completion results of the HSIs Washington DC Mall and Pavia University with SR = 5%.
- Thus, the tubal rank of $\mathcal{X}_{(k_1k_2)}$ (the $(k_1, k_2)$-th element of the N-tubal rank of $\mathcal{X}$) is at most $r$, and it equals $r$ if the aforementioned conditions are satisfied.
- Table 9 lists the values of the PSNR, SSIM, and FSIM of these two tested HSIs recovered by different LRTC methods.
- Fig. 12 shows the PSNR, SSIM, and FSIM values of each band of the recovered HSI Washington DC Mall obtained by the eight compared LRTC methods with SR = 5%.
- The PSNR, SSIM, and FSIM values of each band of the recovered HSI Washington DC Mall output by the eight LRTC methods with SR = 5%.
