Is glmnet outclassed by SVM across the board, or are there exceptions?

The paper 'A Reduction of the Elastic Net to Support Vector Machines with an Application to GPU Computing' claims to 'demonstrate on twelve real world data sets, that [SVM] yields identical results as
the popular (and highly optimized) glmnet implementation but is one or several orders of magnitude faster', given a GPU.
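
For context, the objective both implementations are supposed to minimize is the elastic net, written here in glmnet's parametrization (intercept omitted for brevity; $\lambda$ is the overall penalty strength and $\alpha$ the L1/L2 mixing weight):

$$
\min_{\beta \in \mathbb{R}^p} \; \frac{1}{2n}\lVert y - X\beta \rVert_2^2 + \lambda\left(\alpha\lVert\beta\rVert_1 + \frac{1-\alpha}{2}\lVert\beta\rVert_2^2\right)
$$

As I understand the paper, the reduction constructs an equivalent SVM instance and maps its solution back to $\beta$, so 'identical results' means identical up to solver tolerance.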

I'm not sure whether any other paper has looked at exceptions to this claim. Are there any, e.g. problem regimes where glmnet is still faster?
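
In case it helps anyone probing for exceptions on their own data, here is a minimal timing sketch of the glmnet side of the comparison. It uses scikit-learn's ElasticNet, which, like glmnet, is based on coordinate descent, as a stand-in; the shapes, penalties, and seed are arbitrary choices, and the SVM-reduction side would need the authors' GPU implementation, which I'm not reproducing here:

```python
import time

import numpy as np
from sklearn.linear_model import ElasticNet

# Synthetic sparse regression problem; dimensions are arbitrary.
rng = np.random.default_rng(0)
n, p = 5000, 2000
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:20] = rng.standard_normal(20)   # 20 nonzero coefficients
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Coordinate-descent elastic net (the same algorithm family as glmnet).
model = ElasticNet(alpha=0.1, l1_ratio=0.5, max_iter=10_000)
t0 = time.perf_counter()
model.fit(X, y)
elapsed = time.perf_counter() - t0

print(f"fit time: {elapsed:.2f}s, "
      f"nonzero coefficients: {np.count_nonzero(model.coef_)}")
```

One thing worth keeping in mind when benchmarking: much of glmnet's speed comes from warm starts along a whole regularization path, so a fair comparison should time the path, not a single $\lambda$.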
