How to perform feature selection and hyperparameter optimization in cross-validation?
Note: I have read a lot of the questions already posted on this topic, but I still have some confusion.
I want to perform feature selection and model selection for multiple models, e.g. random forest (RF), support vector machine (SVM), and lasso regression. There seem to be a few ways to do feature selection (FS) or hyperparameter optimization (HPO) through cross-validation (CV). My data set has n ≈ 700 samples and p = 272 features; however, adding another set of features could increase p to ~20272.
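One pattern I have seen for doing both inside a single CV is to wrap the selector and the model in a scikit-learn Pipeline and tune them jointly with GridSearchCV, so that selection and tuning are refit inside each fold and no test-fold information leaks into either step. A rough sketch (SelectKBest and SVC here are just illustrative choices, not a commitment to those methods):

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Feature selection and the classifier live in one pipeline, so both
# are refit on each training fold during the grid search.
pipe = Pipeline([
    ("select", SelectKBest(f_classif)),
    ("clf", SVC()),
])

# Tune the number of kept features and the SVM penalty jointly.
param_grid = {
    "select__k": [10, 50, 100],
    "clf__C": [0.1, 1, 10],
}

search = GridSearchCV(pipe, param_grid, cv=5)
# search.fit(X, y)  # X, y are placeholders for my actual data
```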
My current plan is the following:

1. Run a resampling method (k-fold or Monte Carlo) to get different pseudo-test/pseudo-training splits of the data.
2. In each resampling iteration:
   - Run feature selection on the pseudo-training data.
   - Increment counts for which top variables are selected.
   - Train the model using those features on the pseudo-training data.
   - Estimate how well it does by testing on the pseudo-test data.
Now we can select our feature set by taking the top k selected variables.
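To make the plan concrete, here is a minimal sketch of that loop in Python with scikit-learn, assuming a univariate filter (SelectKBest) as the feature selector and a random forest as the model; the synthetic data is only a placeholder with roughly my dimensions (n ≈ 700, p = 272):

```python
import numpy as np
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold

# Placeholder data standing in for the real n ~ 700, p = 272 set.
X, y = make_classification(n_samples=700, n_features=272,
                           n_informative=20, random_state=0)

n_top = 20                    # features kept per fold
selection_counts = Counter()  # how often each feature index gets selected
fold_scores = []

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X, y):
    X_tr, X_te = X[train_idx], X[test_idx]
    y_tr, y_te = y[train_idx], y[test_idx]

    # 1. Feature selection on the pseudo-training data only.
    selector = SelectKBest(f_classif, k=n_top).fit(X_tr, y_tr)
    chosen = selector.get_support(indices=True)

    # 2. Increment counts for the selected variables.
    selection_counts.update(chosen)

    # 3. Train on the pseudo-training data using those features.
    model = RandomForestClassifier(random_state=0)
    model.fit(X_tr[:, chosen], y_tr)

    # 4. Estimate performance on the pseudo-test data.
    fold_scores.append(accuracy_score(y_te, model.predict(X_te[:, chosen])))

print("mean CV accuracy: %.3f" % np.mean(fold_scores))
print("most frequently selected features:", selection_counts.most_common(10))
```

The final feature set would then be the k indices with the highest counts in `selection_counts`, as described above.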