Support vector machines (SVMs) with the ℓ1-penalty have become a standard tool in the analysis of high-dimensional classification problems with sparsity constraints, with applications in bioinformatics and signal processing, among others. We give non-asymptotic results on the performance of the ℓ1-SVM in identifying sparse classifiers.
We show that an N-dimensional s-sparse classification vector can, with high probability, be well approximated from only O(s log N) Gaussian trials. We derive similar estimates in the presence of misclassifications, as well as for the so-called doubly regularized SVM, which combines the ℓ1- and the ℓ2-penalty.
Similar bounds were obtained earlier in the analysis of the LASSO and of 1-bit compressed sensing.
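The recovery claim above can be illustrated numerically. The following is a minimal sketch, not the paper's exact estimator or constants: it draws m = O(s log N) Gaussian trials, labels them with an s-sparse classifier, and fits an ℓ1-penalized hinge-loss minimizer by proximal subgradient descent. The sparsity level, regularization weight, step size, and iteration count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

N, s = 200, 5                       # ambient dimension, sparsity
m = int(4 * s * np.log(N))          # O(s log N) Gaussian trials (constant 4 is arbitrary)

# s-sparse ground-truth classifier, normalized
w_true = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
w_true[support] = rng.standard_normal(s)
w_true /= np.linalg.norm(w_true)

X = rng.standard_normal((m, N))     # Gaussian measurements
y = np.sign(X @ w_true)             # noiseless labels

# Proximal subgradient descent on (1/m) sum hinge(y_i <w, x_i>) + lam * ||w||_1
lam, step, iters = 0.05, 0.5, 2000
w = np.zeros(N)
for t in range(1, iters + 1):
    margins = y * (X @ w)
    active = margins < 1                                  # samples violating the margin
    grad = -(y[active, None] * X[active]).sum(axis=0) / m if active.any() else 0.0
    eta = step / np.sqrt(t)                               # decaying step size
    w = w - eta * grad
    w = np.sign(w) * np.maximum(np.abs(w) - lam * eta, 0.0)  # soft-threshold (ℓ1 prox)

w_hat = w / max(np.linalg.norm(w), 1e-12)
print("cosine similarity with true classifier:", float(w_hat @ w_true))
```

Since only the direction of a classifier is identifiable from sign labels, the quality of the approximation is measured here by the cosine similarity between the normalized estimate and the true sparse vector; with ~106 trials in dimension 200 it is typically close to 1.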