Weekly Quiz 2 Boosting Ensemble Techniques and Model Tuning Great Learning PDF
Course Content
Attempts : 1/1
Questions : 10
Time : 45m
Due Date : CDT
Instructions
Attempt History
Attempt #1 Marks: 10
Marks: 1/1
Which of the following statement(s) is/are true for bagging and boosting?
C) Bagging: Samples are drawn from the original dataset with replacement to train each
individual weak learner
Boosting: In subsequent samples, more weight is given to observations that had relatively higher errors in previous weak learners
A, B, and C
B, C, and D
A and D
In bagging, weak learners are built in parallel, and samples are drawn from the original dataset
with replacement to train each individual weak learner. Example - Random Forest
In boosting, weak learners are built in sequence, one after the other, and in subsequent samples more weight is given to observations that had relatively higher errors in previous weak learners. Example - AdaBoost
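For illustration, here is a minimal scikit-learn sketch contrasting the two approaches (assuming scikit-learn is installed; the dataset is synthetic and purely illustrative):

    # Sketch: bagging (Random Forest) vs. boosting (AdaBoost) in scikit-learn.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=500, n_features=10, random_state=42)

    # Bagging: trees are trained in parallel on bootstrap samples (drawn with replacement).
    bagging_model = RandomForestClassifier(n_estimators=100, random_state=42)

    # Boosting: trees are trained sequentially; misclassified observations get more weight.
    boosting_model = AdaBoostClassifier(n_estimators=100, random_state=42)

    print("Random Forest CV accuracy:", cross_val_score(bagging_model, X, y, cv=5).mean())
    print("AdaBoost CV accuracy:", cross_val_score(boosting_model, X, y, cv=5).mean())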
Marks: 1/1
Which of the following statement(s) is/are true about the Gradient Boosting trees?
1. Gradient Boosting trees work on residuals instead of changing the weights of observations.
2. By fitting new models to the residuals, the overall learner gradually improves in areas where
residuals are initially high.
1 and 2 (You Selected)
Only 1
Only 2
None
Gradient boosting trees work by fitting new models to the residuals and thereby gradually
improving the overall learner in areas where residuals are initially high.
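To make the residual-fitting idea concrete, here is a hedged sketch of two boosting stages done by hand with scikit-learn decision trees (a simplified illustration, not a full gradient boosting implementation):

    # Sketch: gradient boosting for regression, two stages done manually.
    # Each new tree is fit to the residuals of the current ensemble prediction.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X).ravel() + rng.normal(0, 0.1, size=200)

    learning_rate = 0.1
    prediction = np.full_like(y, y.mean())        # initial prediction for all observations

    for stage in range(2):
        residuals = y - prediction                # where the current learner is still wrong
        tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
        prediction += learning_rate * tree.predict(X)   # improve where residuals are high
        print(f"stage {stage}: mean squared error = {np.mean((y - prediction) ** 2):.4f}")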
Marks: 1/1
Alpha is a hyperparameter that is used while changing the weights of the samples in an AdaBoost
model.
A) The weight of a sample is decreased if it is incorrectly classified by the previous weak learner.
B) The weight of a sample is increased if it is incorrectly classified by the previous weak learner.
C) The alpha can be either positive or negative
D) The alpha cannot be negative
Only A
A and B
Only D
The weight of a sample is increased if it is incorrectly classified by the previous weak learner. Depending on whether the weight should be increased or decreased, the alpha can be either positive or negative.
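As a hedged illustration of why alpha can take either sign, here is one common textbook formulation of the weak-learner weight, computed from the learner's weighted error (exact formulas vary between implementations):

    # Sketch: AdaBoost weak-learner weight (alpha) as a function of weighted error.
    # alpha is positive when the weak learner beats random guessing (error < 0.5)
    # and negative when it does worse than random guessing (error > 0.5).
    import numpy as np

    def adaboost_alpha(weighted_error):
        return 0.5 * np.log((1 - weighted_error) / weighted_error)

    print(adaboost_alpha(0.2))   # error < 0.5 -> positive alpha (about +0.69)
    print(adaboost_alpha(0.7))   # error > 0.5 -> negative alpha (about -0.42)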
Marks: 1/1
What is the correct sequence of steps for the gradient boosting algorithm?
A. Calculate the residual for each observation.
B. Update all predictions using previous probabilities and new output values
C. Initialize the model with an initial prediction for all observations
D. Repeat steps by creating a new tree until the maximum number of estimators is reached
E. Build a tree and calculate the output value for each leaf node
Initialize the model with an initial prediction for all observations.
Calculate the residual for each observation.
Build a tree and calculate the output value for each leaf node.
Update all predictions using previous probabilities and new output values.
Repeat steps by creating a new tree until the maximum number of estimators is reached.
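A hedged scikit-learn sketch of this loop in practice: the number of repetitions in step D corresponds to the n_estimators hyperparameter, and staged_predict exposes the ensemble's predictions after each new tree is added (data and values are illustrative):

    # Sketch: the gradient boosting loop via scikit-learn.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = GradientBoostingClassifier(n_estimators=50, learning_rate=0.1, random_state=0)
    model.fit(X_train, y_train)

    # Accuracy generally improves as more trees are added to the ensemble.
    for i, y_pred in enumerate(model.staged_predict(X_test), start=1):
        if i in (1, 10, 50):
            print(f"trees = {i:2d}, accuracy = {(y_pred == y_test).mean():.3f}")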
Marks: 1/1
Which of the following algorithms do NOT have a ‘learning rate’ parameter/hyperparameter?
Gradient Boosting
XGBoost
All of these
Gradient Boosting and XGBoost are closely related algorithms, and both have a 'learning rate' parameter/hyperparameter that shrinks the contribution of each tree and helps reduce overfitting.
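For reference, a hedged sketch showing where the learning rate appears in both libraries (assuming scikit-learn and the xgboost package are installed; the values are illustrative):

    # Sketch: 'learning_rate' shrinks each tree's contribution in both libraries.
    from sklearn.ensemble import GradientBoostingClassifier
    from xgboost import XGBClassifier   # assumes the xgboost package is installed

    gb = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)
    xgb = XGBClassifier(n_estimators=200, learning_rate=0.05)

    # A smaller learning rate usually needs more estimators but tends to generalize better.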
Marks: 1/1
Which hyperparameter of XGBoost can be used to deal with the imbalance in data?
gamma
learning_rate
colsample_bynode
In XGBoost, scale_pos_weight controls the balance of positive and negative weights and is
used to deal with imbalanced classes.
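A hedged sketch of the common heuristic for setting it, the ratio of negative to positive training examples (the labels below are illustrative only):

    # Sketch: setting scale_pos_weight for an imbalanced binary problem.
    import numpy as np
    from xgboost import XGBClassifier   # assumes the xgboost package is installed

    # y_train is assumed to be a 0/1 label array, with 1 as the minority (positive) class.
    y_train = np.array([0] * 900 + [1] * 100)            # illustrative imbalanced labels

    ratio = (y_train == 0).sum() / (y_train == 1).sum()  # negatives / positives = 9.0
    model = XGBClassifier(scale_pos_weight=ratio)        # up-weights the positive class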
Marks: 1/1
Which of the following statements is false about XGBoost?
XGBoost is a boosting technique
XGBoost builds upon the idea of gradient boosting algorithm with some modifications
It provides features that support efficient computation, such as parallelization, cache optimization, out-of-core computing, and distributed computing
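A hedged sketch of how a couple of these are exposed through XGBoost's scikit-learn style API (parameter values are illustrative; distributed training is typically done through XGBoost's Dask or Spark integrations rather than these flags):

    # Sketch: a few computation-related knobs in the XGBoost scikit-learn API.
    from xgboost import XGBClassifier   # assumes the xgboost package is installed

    model = XGBClassifier(
        n_jobs=-1,            # parallelize tree construction across all CPU cores
        tree_method="hist",   # histogram-based split finding, faster and cache-friendly
        n_estimators=300,
    )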
Marks: 1/1
A) The weight of an observation, to be selected while building the next weak learner, increases if the observation is correctly classified
B) The weight of an observation, to be selected while building the next weak learner, increases if the observation is incorrectly classified
C) The weight of an observation, to be selected while building the next weak learner, decreases if the observation is correctly classified
D) The weight of an observation, to be selected while building the next weak learner, decreases if the observation is incorrectly classified
A and B
C and D
A and D
The weight of an observation, to be selected while building the next weak learner, increases if the observation is incorrectly classified, and decreases if the observation is correctly classified.
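A hedged numerical sketch of this re-weighting step (a simplified textbook-style update; real implementations differ in details such as normalization):

    # Sketch: AdaBoost-style sample re-weighting after one weak learner.
    import numpy as np

    weights = np.full(5, 0.2)                              # start with equal weights
    correct = np.array([True, True, False, True, False])   # weak learner's results
    alpha = 0.5                                            # illustrative learner weight

    # Misclassified observations are up-weighted, correctly classified ones down-weighted.
    weights = weights * np.exp(np.where(correct, -alpha, alpha))
    weights /= weights.sum()                               # renormalize to sum to 1
    print(weights.round(3))   # misclassified samples now carry more weight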
Marks: 1/1
AdaBoost uses a weighted vote/weighted average of the weak learners for the final prediction.
True
False
AdaBoost uses a weighted vote/weighted average of the weak learners for the final prediction.
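A hedged sketch of that weighted vote for a single observation (the votes and alphas below are made up purely for illustration):

    # Sketch: the weighted-vote idea behind AdaBoost's final prediction.
    import numpy as np

    learner_outputs = np.array([+1, -1, +1])   # three weak learners' -1/+1 votes
    alphas = np.array([0.9, 0.3, 0.6])         # higher alpha = more reliable learner

    final_prediction = np.sign(np.dot(alphas, learner_outputs))
    print(final_prediction)   # +1: the two stronger learners outvote the weaker one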
Marks: 1/1
1. Bagging and boosting use heterogeneous learners (different algorithms as different weak
learners).