Tuning an AdaBoost regressor
The important parameters to vary in an AdaBoost regressor are learning_rate and loss. As with the previous algorithms, we will perform a randomized parameter search to find the best scores the algorithm can achieve.
How to do it...
- Import the algorithm and randomized grid search. Try a randomized parameter distribution:

from sklearn.ensemble import AdaBoostRegressor
from sklearn.model_selection import RandomizedSearchCV

param_dist = {
    'n_estimators': [50, 100],
    'learning_rate': [0.01, 0.05, 0.1, 0.3, 1],
    'loss': ['linear', 'square', 'exponential']
}

pre_gs_inst = RandomizedSearchCV(AdaBoostRegressor(),
                                 param_distributions=param_dist,
                                 cv=3,
                                 n_iter=10,
                                 n_jobs=-1)
pre_gs_inst.fit(X_train, y_train)
- View the best parameters:

pre_gs_inst.best_params_

{'learning_rate': 0.05, 'loss': 'linear', 'n_estimators': 100}
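Before narrowing the search, it can be worth sanity-checking the tuned model on held-out data. The sketch below refits an AdaBoostRegressor with the best parameters found above; it uses a synthetic dataset from make_regression in place of the chapter's X_train/y_train, which is an assumption made here so the example is self-contained:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import AdaBoostRegressor
from sklearn.model_selection import train_test_split

# Synthetic regression data stands in for the chapter's dataset (assumption).
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Refit with the best parameters reported by the randomized search.
best_model = AdaBoostRegressor(n_estimators=100,
                               learning_rate=0.05,
                               loss='linear')
best_model.fit(X_train, y_train)

# score() returns R^2 on the held-out set; compare it with
# pre_gs_inst.best_score_, which is the mean cross-validation score.
print(best_model.score(X_test, y_test))
```

If the held-out score is far below the cross-validation score, the narrowed search that follows may simply be overfitting the validation folds.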
- These results suggest another randomized search, with a parameter distribution centered around the best values found:

param_dist = {
    'n_estimators': [100],
    'learning_rate': [0.04, 0.045, 0.05, 0.055...