As per the Ensemble Learning PPT, Slide 79 (AdaBoost Ensemble ML Algorithm) conveys that:
If the training instances/observations/rows are underfitted, then the new Predictors focus more and more on the hard cases (which I read as Hard Voting Classifier cases). This is the rationale behind the AdaBoost Ensemble ML Algorithm.
However, Slide 88 conveys that:
Based on the Weights assigned (whether by a Strong Learner or a Weak Learner), the Aggregation of these Weights defines the Probability of an Instance/Row/Observation being selected…
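To make my reading of Slide 88 concrete, here is a minimal sketch (with toy numbers of my own, not the slide's notation) of how AdaBoost-style instance weights can be normalised into selection probabilities:

```python
import numpy as np

# Toy setup: 5 training instances, all starting with equal weight.
w = np.ones(5) / 5

# Suppose a weak learner misclassifies instances 1 and 3 (the "hard" rows).
misclassified = np.array([False, True, False, True, False])

# Weighted error rate of this learner.
err = w[misclassified].sum()

# Learner's say (alpha): lower error -> larger say in the ensemble.
alpha = 0.5 * np.log((1 - err) / err)

# Boost the weights of the misclassified instances, shrink the rest.
w = w * np.exp(alpha * np.where(misclassified, 1.0, -1.0))

# Normalise so the weights form a probability distribution:
# this is what I understand as the "probability of instance selection".
p = w / w.sum()
print(p)  # the misclassified rows end up with higher selection probability
```

So, as I understand it, the hard rows are more likely to be drawn when the next predictor is trained.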
Based on the aforesaid two statements, don't they contradict each other?
NOTE: Here the contradiction is in the sense of Voting Classifiers (Hard vs. Soft).
Now my doubt is:
Once a selection Probability is attached to (or calculated for) an Instance/Row/Observation by the Software Tool concerned (R/Python/SAS), doesn't this make it a Soft Voting Classifier?
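For contrast, here is a minimal sketch (with made-up predictions) of what Hard vs. Soft Voting actually aggregate at prediction time; my doubt is whether AdaBoost's instance-selection probabilities put it in the latter camp:

```python
import numpy as np

# Made-up outputs of 3 classifiers for one test instance, 2 classes.

# Hard voting: each classifier casts one vote (its predicted label).
labels = np.array([0, 1, 1])
hard_vote = np.bincount(labels).argmax()   # majority label -> 1

# Soft voting: average the predicted class *probabilities* instead.
probas = np.array([[0.90, 0.10],   # classifier 1 is very sure of class 0
                   [0.40, 0.60],
                   [0.45, 0.55]])
soft_vote = probas.mean(axis=0).argmax()   # mean = [0.583, 0.417] -> 0

print(hard_vote, soft_vote)  # the two schemes can disagree
```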
Kindly correct me if my understanding is wrong, and please elaborate wherever my grasp of the above concepts is faulty.
Looking forward to your inputs.