Hi CloudX Lab Team,

Another clarification is required regarding the Overfitting of the AdaBoost Ensemble ML Algorithm.

In case there is **overfitting**, we need to adopt the **Regularization** method.

Now, based on this **Regularization**, we need to adopt either one of the 2 strategies, viz.:

a) To reduce the number of Estimators (i.e. the number of Trees in the Forest). In other words, we need to reduce the number of Variables from the Sampling Dataset.

OR

b) To regularize the Base Estimator.

- Now this **Base Estimator** contains the **Base Decision Node**.
- Now this **Base Decision Node** lies **close** to the **Root of the Tree**.
- This **Base Decision Node** comprises the **Important Features** that lie close to the **root of the tree** (as stated in Slide 71).
- While growing the tree(s) in the forest, in order to **split the nodes**, either of the **two methodologies/strategies** can be adopted, viz.:

a) Random subset of Features/Predictors/Variables OR

b) Random threshold values for each feature
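Again to check my own understanding, here is how I believe the two node-splitting strategies map onto scikit-learn's decision-tree parameters (my own sketch; the parameter choices are assumptions on my part, not from the slides):

```python
# Sketch (my assumption): the two node-splitting strategies expressed
# via scikit-learn's DecisionTreeClassifier parameters.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# (a) Random subset of features considered at each split.
tree_feat = DecisionTreeClassifier(max_features="sqrt",
                                   random_state=0).fit(X, y)

# (b) Random threshold values drawn for each candidate feature
#     (the Extra-Trees idea, via splitter="random").
tree_thresh = DecisionTreeClassifier(splitter="random",
                                     random_state=0).fit(X, y)

print(tree_feat.get_depth(), tree_thresh.get_depth())
```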

So, technically, is this the appropriate way of describing the Overfitting of the AdaBoost Ensemble ML Algorithm and how to resolve it?

Kindly correct me (each and every step) if my thought process is headed in the wrong direction.

Sincerely looking forward to your valuable inputs.