Abstract:
The rank of the sparse signal matrix brought by multiple measurement vectors (MMV) augments the performance of joint sparse recovery. In general, when the sparsity level k is less than or equal to [rank(X)+spark(A)-1]/2, the sparsest solution of the MMV problem is unique and recoverable via various methods. It is shown in this letter that, from a measure-theoretic point of view, a unique solution actually exists for sparsity levels k up to spark(A)-1. More specifically, even when [rank(X)+spark(A)-1]/2 ≤ k < spark(A), the sparsest solution to AX=Y is still unique with full Lebesgue measure in every k-sparse coordinate space. This phenomenon is fully confirmed by the MMV tail-l2,1 minimization technique. Furthermore, the failure of traditional l2,1 minimization to recover X when k ≥ [spark(A)-1]/2 is investigated from the same measure-theoretic perspective. Extensive numerical tests using MMV tail-l2,1 minimization and l2,1 minimization are presented to confirm the findings. Among all known techniques, the tail-minimization procedure exhibits the most prominent effectiveness at larger sparsity levels.
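For illustration only, the sketch below (Python with cvxpy, not the authors' code) shows how the MMV problem AX=Y can be attacked by standard l2,1 minimization, together with a hypothetical tail-l2,1 variant that penalizes only the rows of X outside an estimated support set T; the exact tail formulation used in the letter may differ.

```python
import numpy as np
import cvxpy as cp

def l21_recover(A, Y):
    """Standard l2,1 minimization: minimize the sum of row-wise l2 norms of X
    subject to A X = Y (a minimal convex-programming sketch)."""
    n, L = A.shape[1], Y.shape[1]
    X = cp.Variable((n, L))
    objective = cp.Minimize(cp.sum(cp.norm(X, 2, axis=1)))  # l2,1 norm of X
    cp.Problem(objective, [A @ X == Y]).solve()
    return X.value

def tail_l21_recover(A, Y, T):
    """Hypothetical tail-l2,1 variant: penalize only rows outside the
    support estimate T, keeping the data-consistency constraint A X = Y."""
    n, L = A.shape[1], Y.shape[1]
    X = cp.Variable((n, L))
    tail_rows = [i for i in range(n) if i not in set(T)]
    objective = cp.Minimize(cp.sum(cp.norm(X[tail_rows, :], 2, axis=1)))
    cp.Problem(objective, [A @ X == Y]).solve()
    return X.value

if __name__ == "__main__":
    # Small synthetic test: a k-row-sparse X observed through a random A.
    rng = np.random.default_rng(0)
    m, n, L, k = 20, 40, 4, 8
    A = rng.standard_normal((m, n))
    support = rng.choice(n, size=k, replace=False)
    X_true = np.zeros((n, L))
    X_true[support, :] = rng.standard_normal((k, L))
    Y = A @ X_true
    X_hat = l21_recover(A, Y)
    print("l2,1 recovery error:", np.linalg.norm(X_hat - X_true))
```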
Published in: IEEE Signal Processing Letters (Volume: 28)