How NOT to perform feature selection!

Christos - Iraklis Tsatsoulis | Data Science

Cross-validation (CV) is nowadays widely used for model assessment in predictive analytics tasks; nevertheless, cases where it is incorrectly applied are not uncommon, especially when the model-building process includes a feature selection stage.

I was reminded of such a situation while reading this recent Revolution Analytics blog post, where CV is used both to assess the feature selection process (performed with genetic algorithms) and to assess the final model built on the previously selected features. In summary, the procedure followed in the above post is:

    1. Select a number of “good” predictors, using the genetic algorithm (GA) method provided by the caret R package
    2. Using just this subset of predictors, build an SVM classifier
    3. Use cross-validation to estimate the unknown tuning parameters of the classifier, and to estimate the prediction error of the final model (a schematic sketch of this pipeline is given right after the list)
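
To make the structure concrete, here is a minimal sketch in R of the above pipeline. This is our own toy illustration, not the code from the post: a simple correlation filter stands in for the GA-based selection, plain logistic regression stands in for the SVM, and the data are pure noise; the only thing that matters is where the feature selection happens, namely once, on the full dataset, before any cross-validation starts.

    ## Toy sketch of the flawed pipeline (stand-ins: correlation filter for the GA,
    ## logistic regression for the SVM); the data contain no signal at all.
    set.seed(1)
    n <- 50; p <- 1000
    X <- matrix(rnorm(n * p), n, p, dimnames = list(NULL, paste0("x", 1:p)))
    y <- factor(rep(c(0, 1), length.out = n))   # labels unrelated to the predictors

    ## Step 1: pick the 20 predictors most correlated with y, using ALL the data
    scores <- apply(X, 2, function(x) abs(cor(x, as.numeric(y))))
    keep   <- names(sort(scores, decreasing = TRUE))[1:20]

    ## Steps 2-3: cross-validate a classifier that already "knows" the selected features
    folds <- sample(rep(1:5, length.out = n))
    cv_err <- sapply(1:5, function(k) {
      tr   <- folds != k
      fit  <- glm(y ~ ., data = data.frame(y = y[tr], X[tr, keep]), family = binomial)
      prob <- predict(fit, newdata = data.frame(X[!tr, keep]), type = "response")
      mean((prob > 0.5) != (y[!tr] == "1"))
    })
    mean(cv_err)   # typically well below the true 50% error rate of pure-noise data

(glm may warn about numerically perfect fitted probabilities here; that is itself part of the overfitting story.) On data with no signal whatsoever, this scheme typically reports a CV error well below the honest 50%, which is precisely the selection bias discussed in the rest of this post.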

And we should be fine. Right?

Well, no…!

The above 3-step procedure vividly illustrates a recurring mistake when applying CV for model assessment with a feature selection stage in-between; consider the following excerpt from The Elements of Statistical Learning (Hastie et al., 2009):

[Excerpt from The Elements of Statistical Learning, p. 245]

The very title of the section above indeed suggests that there are some common “traps” when applying CV; Hastie et al. proceed to ask, Is this a correct application of cross-validation? And the answer turns out to be: no.

Why is that? Before delving into the statistical arguments, Hastie et al. provide an intuitive explanation:

[Excerpt from The Elements of Statistical Learning, ibid.]

As Hastie et al. explain in detail, the correct way in this case is to apply feature selection inside each one of the CV folds; you can watch a short video on the topic (“Cross-validation: right and wrong”) from their Statistical Learning MOOC (highly recommended), as well as a couple of relevant slides they have put together here.
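
Here is the same toy setup as in the sketch above, with the only change that matters: the (stand-in) supervised selection is re-run inside every fold, using only that fold's training portion, so the held-out samples never influence which features are chosen.

    ## Same toy data as before; feature selection now lives INSIDE the CV loop.
    set.seed(1)
    n <- 50; p <- 1000
    X <- matrix(rnorm(n * p), n, p, dimnames = list(NULL, paste0("x", 1:p)))
    y <- factor(rep(c(0, 1), length.out = n))

    folds <- sample(rep(1:5, length.out = n))
    cv_err <- sapply(1:5, function(k) {
      tr <- folds != k
      ## selection is repeated here, on the training part of the fold only
      scores <- apply(X[tr, ], 2, function(x) abs(cor(x, as.numeric(y[tr]))))
      keep   <- names(sort(scores, decreasing = TRUE))[1:20]
      fit  <- glm(y ~ ., data = data.frame(y = y[tr], X[tr, keep]), family = binomial)
      prob <- predict(fit, newdata = data.frame(X[!tr, keep]), type = "response")
      mean((prob > 0.5) != (y[!tr] == "1"))
    })
    mean(cv_err)   # now hovers around the honest 50% error rate

The estimate now hovers around 50%, i.e. the honest error rate for data carrying no signal at all.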

Possibly the most highly cited reference on the issue, which leads to what we call selection bias, is a 2002 paper by Ambroise & McLachlan in the Proceedings of the National Academy of Sciences of the USA (open access – emphasis ours):

As explained above, the CV error of the prediction rule R obtained during the selection of the genes provides a too-optimistic estimate of the prediction error rate of R. To correct for this selection bias, it is essential that cross-validation or the bootstrap be used external to the gene-selection process. […]

In the present context where feature selection is used in training the prediction rule R  from the full training set, the same feature-selection method must be implemented in training the rule on the M − 1 subsets combined at each stage of an (external) cross-validation of R  for the selected subset of genes.

 The issue has been discussed several times since in the academic literature, with identical conclusions; see for example a 2006 paper by Varma & Simon in BMC Bioinformatics (open access):

However, CV methods are proven to be unbiased only if all the various aspects of classifier training takes place inside the CV loop. This means that all aspects of training a classifier e.g. feature selection, classifier type selection and classifier parameter tuning takes place on the data not left out during each CV loop. It has been shown that violating this principle in some ways can result in very biased estimates of the true error. One way is to use all of the training data to choose the genes that discriminate between the two classes and only change the classifier parameters inside the CV loop. This violates the principle that feature selection must be done for each loop separately, on the data that is not left out. As pointed out by Simon et al. [2], Ambroise and McLachlan [3] and Reunanen [4], this gives a very biased estimate of the true error; not much better than the resubstitution estimate. Over-optimistic estimates of error close to zero are obtained, even for data where there is no real difference between the two classes.

Notice that it is irrelevant to the argument that, in Step 1 of the above-mentioned Revolution Analytics blog post, the features themselves are selected via a separate (internal) CV procedure; essentially, “use all of the training data to choose the genes that discriminate between the two classes and only change the classifier parameters inside the CV loop” is exactly what is performed in that post, which clearly “violates the principle that feature selection must be done for each loop separately”.
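
To illustrate the nesting that this principle requires, here is one more sketch on the same toy data (again not the GA/SVM pipeline itself): the selection step is free to run its own internal CV, here used to choose how many features to keep, but the reported error comes from an outer loop that repeats the entire selection inside each of its folds.

    ## Outer CV wraps the whole procedure; the selection may use its own internal CV.
    set.seed(1)
    n <- 50; p <- 1000
    X <- matrix(rnorm(n * p), n, p, dimnames = list(NULL, paste0("x", 1:p)))
    y <- factor(rep(c(0, 1), length.out = n))

    ## Choose the number of top-correlated features to keep via an internal 5-fold CV,
    ## using only the data this function is given
    select_features <- function(X, y, k_grid = c(5, 10, 20)) {
      inner <- sample(rep(1:5, length.out = nrow(X)))
      inner_err <- sapply(k_grid, function(k) {
        mean(sapply(1:5, function(f) {
          tr  <- inner != f
          sc  <- apply(X[tr, ], 2, function(x) abs(cor(x, as.numeric(y[tr]))))
          kp  <- names(sort(sc, decreasing = TRUE))[1:k]
          fit <- glm(y ~ ., data = data.frame(y = y[tr], X[tr, kp]), family = binomial)
          ph  <- predict(fit, newdata = data.frame(X[!tr, kp]), type = "response")
          mean((ph > 0.5) != (y[!tr] == "1"))
        }))
      })
      k_best <- k_grid[which.min(inner_err)]
      sc <- apply(X, 2, function(x) abs(cor(x, as.numeric(y))))
      names(sort(sc, decreasing = TRUE))[1:k_best]
    }

    ## External CV: the selection (internal CV included) is redone in every outer fold
    outer <- sample(rep(1:5, length.out = n))
    outer_err <- sapply(1:5, function(f) {
      tr   <- outer != f
      keep <- select_features(X[tr, ], y[tr])
      fit  <- glm(y ~ ., data = data.frame(y = y[tr], X[tr, keep]), family = binomial)
      prob <- predict(fit, newdata = data.frame(X[!tr, keep]), type = "response")
      mean((prob > 0.5) != (y[!tr] == "1"))
    })
    mean(outer_err)   # honest estimate, despite the CV used inside the selection step

The internal CV here plays the same role as the GA's own resampling in the original post: it is a legitimate tool for guiding the selection, but it cannot double as the final, external assessment of the model.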

Even for people who do not frequent academic publication sites, the issue receives a whole section in the (highly recommended) Applied Predictive Modeling book (Section 19.5, Selection Bias), as well as in the extensive online documentation of the R package caret (“Resampling and External Validation”). For a practical account of the consequences that such a mistaken application of CV can have, see this post, which was reposted in the Kaggle blog with the characteristic title “The Dangers of Overfitting or How to Drop 50 spots in 1 minute”. Quoting:

as the competition went on, I began to use much more feature selection and preprocessing. However, I made the classic mistake in my cross-validation method by not including this in the cross-validation folds (for more on this mistake, see this short description or section 7.10.2 in The Elements of Statistical Learning). This lead to increasingly optimistic cross-validation estimates.

[…]

Lessons learned

[…]

  • On a related note, perform cross-validation the right way: include all training (feature selection, preprocessing, etc.) in each fold.

Unfortunately, as we mentioned in the beginning, the issue is far from uncommon among both academics and practitioners, especially when a feature selection procedure is involved; quoting from Applied Predictive Modeling (you can find the referenced paper by Castaldi et al. here):

[Excerpt from Applied Predictive Modeling, p. 501]

Or from The Elements of Statistical Learning:

[Excerpt from The Elements of Statistical Learning, p. 247]

So, the bottom line is: if you are using feature selection in your data processing pipeline, you have to ensure that it is included in the CV (or whatever resampling technique you use) for your model assessment; otherwise, your results will be optimistically biased, and your model's actual performance will be worse than the assessed one.

There is one exception to the above rule, and that is when your feature selection process is unsupervised, i.e. it does not take into account the response variable; quoting again from The Elements of Statistical Learning:

[Excerpt from The Elements of Statistical Learning, pp. 246-247]

Nevertheless, in practice this procedure is normally performed at the data preprocessing stage, and it is not considered part of feature selection proper.
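
As a minimal sketch of such an exception (a simple variance filter on the toy data used above; PCA computed on the predictors alone would be another example), an unsupervised screen may legitimately be applied once, before the CV, because the response plays no role in it:

    ## Unsupervised screening: the response y is never consulted
    set.seed(1)
    n <- 50; p <- 1000
    X <- matrix(rnorm(n * p), n, p, dimnames = list(NULL, paste0("x", 1:p)))
    y <- factor(rep(c(0, 1), length.out = n))

    ## keep the 100 highest-variance predictors (a choice based on X alone)
    v <- apply(X, 2, var)
    X_screened <- X[, names(sort(v, decreasing = TRUE))[1:100]]

    ## any *supervised* selection or tuning on X_screened must still go inside the CV folds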

Comments
Ebrahimi
September 21, 2016 00:01

Dear Sir,

First, many thanks for your useful comment about this article (http://blog.revolutionanalytics.com/2015/12/caret-genetic.html). I would appreciate it if you could kindly let me know how to modify that code.

Best regards,

Ashray Dimri
March 31, 2017 00:26

Thank you, sir. It would be of great help to me if you could tell me how to modify that code. Thanks.

Dr. Danishuddin
July 20, 2017 12:36

Dear Christos, thanks for posting a good and informative article on CV and feature selection. This article is based on the “Feature Selection with caret’s Genetic Algorithm Option” article, in which, as per you, no CV has been used for feature selection; but I wonder about it, since

    ga_ctrl <- gafsControl(functions = rfGA,    # assess fitness with RF
                           method = "cv",       # 10-fold cross-validation
                           genParallel = TRUE,  # use parallel programming
                           allowParallel = TRUE)

shows that the features have been selected over 10 CV folds. However, I am new to this field; I just need your help to modify this code…

Rawia
March 1, 2018 23:15

I find this a helpful and very interesting article, and I appreciate your clear explanation. Here we have to take into consideration correcting for selection bias, so we have to use external cross-validation rather than internal cross-validation, which means we have to apply parameter tuning and feature selection separately in each fold, and not on the full training set.

Flavio Tosi
June 12, 2019 15:21

Bravo, thanks a lot. I still have a few doubts on how to use PCA in this context, but at least it is clear that I made a mistake. Thanks a lot!
