Emphasise some observation weights more than others
I want to emphasise (increase the weight of) only a subset of the data. Say I have old and fresh data; I would like the old data to carry more weight, and therefore have more influence on the decision, than the fresh data.
In scikit-learn I found only the class_weight parameter, but it does not change the weight of individual samples, only of all samples within a class.
Is there a way to incorporate this emphasis into gradient boosted trees in Spark or xgboost in Python?
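For reference, a minimal sketch of per-sample weighting with scikit-learn's `GradientBoostingClassifier`, whose `fit` accepts a `sample_weight` array (xgboost's sklearn wrapper, `XGBClassifier`, accepts the same keyword, and the native API takes `weight=` in `xgb.DMatrix`). The old/fresh split and the weight of 2.0 here are arbitrary illustration values:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Toy data: pretend the first 700 rows are "old" and the rest "fresh".
X, y = make_classification(n_samples=1000, random_state=0)
n_old = 700

# Per-sample weights: give the old rows twice the influence of the fresh ones.
w = np.ones(len(y))
w[:n_old] = 2.0

clf = GradientBoostingClassifier(random_state=0)
# sample_weight scales each row's contribution to the loss during boosting.
clf.fit(X, y, sample_weight=w)
```

Unlike `class_weight`, this weights individual observations, so any subset (e.g. all rows older than some date) can be up- or down-weighted independently of its class.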