Implementing AdaBoost
When the trees in the forest have a depth of 1 (such shallow trees are also known as decision stumps) and we perform boosting instead of bagging, the resulting algorithm is called AdaBoost.
AdaBoost adjusts the dataset at each iteration by performing the following actions:
- Selecting a decision stump
- Increasing the weight of the cases that the decision stump labeled incorrectly, while reducing the weight of the correctly labeled cases
This iterative weight adjustment causes each new classifier in the ensemble to focus its training on the previously mislabeled cases: because every stump is fit to the reweighted dataset, it targets the highly weighted data points.
Eventually, the stumps are combined by a weighted vote to form the final classifier.
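To make these mechanics concrete, here is a minimal NumPy sketch of the loop just described. This is an illustrative implementation written for this explanation (not OpenCV's built-in one); it assumes X is a NumPy array of shape (n_samples, n_features) and y a NumPy array of -1/+1 labels:

import numpy as np

def fit_stump(X, y, w):
    # Pick the (feature, threshold, polarity) with the lowest weighted error
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) > 0, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, thr, pol)
    return best

def adaboost_fit(X, y, n_rounds=10):
    n = len(y)
    w = np.full(n, 1.0 / n)  # start with uniform case weights
    ensemble = []
    for _ in range(n_rounds):
        err, j, thr, pol = fit_stump(X, y, w)              # 1. select a stump
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))  # stump's vote weight
        pred = np.where(pol * (X[:, j] - thr) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)                     # 2. up-weight mistakes,
        w /= w.sum()                                       #    down-weight hits
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def adaboost_predict(ensemble, X):
    # Final classifier: the sign of the stumps' weighted vote
    score = sum(alpha * np.where(pol * (X[:, j] - thr) > 0, 1, -1)
                for alpha, j, thr, pol in ensemble)
    return np.sign(score)

The two numbered comments correspond to the two bullet points above, and the prediction function implements the weighted vote that combines the stumps.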
Implementing AdaBoost in OpenCV
Although OpenCV provides a very efficient implementation of AdaBoost, it is hidden within the Haar cascade classifier. Haar cascade classifiers are a very popular tool for face detection, which we can illustrate using the classic Lena test image:
In [1]: import cv2
   ...: img_bgr = cv2.imread('data/lena.jpg')  # illustrative path; adjust to your copy of the image
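From here, a typical Haar cascade workflow converts the image to grayscale and runs a pretrained cascade over it. The following is a minimal sketch, assuming the haarcascade_frontalface_default.xml file that ships with the opencv-python package (exposed via cv2.data.haarcascades); the detectMultiScale parameters are common illustrative choices, not values from the original text:

# Haar cascades operate on single-channel images
img_gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)

# Load a pretrained frontal-face cascade bundled with opencv-python
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

# Scan the image at multiple scales; returns (x, y, w, h) bounding boxes
faces = cascade.detectMultiScale(img_gray, scaleFactor=1.1, minNeighbors=5)

# Draw a green rectangle around each detected face
for (x, y, w, h) in faces:
    cv2.rectangle(img_bgr, (x, y), (x + w, y + h), (0, 255, 0), 2)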