This set of Machine Learning (ML) Multiple Choice Questions & Answers (MCQs) focuses on ensemble learning (Machine Learning Set 11).

Q1 | True or False: Ensemble learning can only be applied to supervised learning methods.
  • true
  • false
Q2 | True or False: Ensembles will yield bad results when there is significant diversity among the models. (Note: all individual models make meaningful, accurate predictions.)
  • true
  • false
Q3 | Which of the following is/are true about the weak learners used in an ensemble model?
  1. They have low variance and they don’t usually overfit
  2. They have high bias, so they cannot solve hard learning problems
  3. They have high variance and they don’t usually overfit
  • 1 and 2
  • 1 and 3
  • 2 and 3
  • none of these
Q4 | True or False: An ensemble of classifiers may or may not be more accurate than any of its individual models.
  • true
  • false
Q5 | If you use an ensemble of different base models, is it necessary to tune the hyperparameters of all the base models to improve the ensemble performance?
  • yes
  • no
  • can’t say
Q6 | Generally, an ensemble method works better if the individual base models have ____________. (Note: assume each individual base model has an accuracy greater than 50%.)
  • less correlation among predictions
  • high correlation among predictions
  • correlation does not have any impact on ensemble output
  • none of the above
Q7 | In an election, N candidates are competing against each other and people vote for one of the candidates. Voters don’t communicate with each other while casting their votes. Which of the following ensemble methods works like the election procedure described above? (Hint: the voters are like the base models of an ensemble method.)
  • bagging
  • boosting
  • a or b
  • none of these
Q8 | Suppose there are 25 base classifiers, each with an error rate of e = 0.35. Suppose you are using averaging as the ensemble technique. What is the probability that the ensemble of the above 25 classifiers will make a wrong prediction? (Note: all classifiers are independent of each other; a worked calculation follows the options.)
  • 0.05
  • 0.06
  • 0.07
  • 0.09
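For binary 0/1 predictions, averaging and thresholding at 0.5 is equivalent to a majority vote, so the ensemble errs only when at least 13 of the 25 independent classifiers err. A minimal sketch of that binomial calculation (variable names are illustrative):

```python
from math import comb

n, e = 25, 0.35  # 25 independent classifiers, each wrong with probability 0.35

# Averaging 0/1 predictions and thresholding at 0.5 is a majority vote,
# so the ensemble is wrong only when 13 or more classifiers are wrong.
p_wrong = sum(comb(n, k) * e**k * (1 - e)**(n - k) for k in range(13, n + 1))
print(round(p_wrong, 4))  # ~0.06
```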
Q9 | In machine learning, an algorithm (or learning algorithm) is said to be unstable if a small change in the training data causes a large change in the learned classifier. True or False: bagging unstable classifiers is a good idea.
  • true
  • false
Q10 | Which of the following parameters can be tuned to find a good ensemble model in bagging-based algorithms? (A sketch follows the options.)
  1. Max number of samples
  2. Max features
  3. Bootstrapping of samples
  4. Bootstrapping of features
  • 1 and 3
  • 2 and 3
  • 1 and 2
  • all of above
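All four of these knobs correspond to constructor parameters of scikit-learn’s BaggingClassifier; a minimal sketch (the particular values are illustrative):

```python
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

bag = BaggingClassifier(
    estimator=DecisionTreeClassifier(),  # `base_estimator` in older scikit-learn
    n_estimators=50,
    max_samples=0.8,           # 1. max number (here, fraction) of samples
    max_features=0.8,          # 2. max number (here, fraction) of features
    bootstrap=True,            # 3. bootstrapping of samples (with replacement)
    bootstrap_features=False,  # 4. bootstrapping of features
    random_state=0,
)
```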
Q11 | How is model capacity affected by the dropout rate (where model capacity means the ability of a neural network to approximate complex functions)?
  • model capacity increases with an increase in dropout rate
  • model capacity decreases with an increase in dropout rate
  • model capacity is not affected by an increase in dropout rate
  • none of these
Q12 | True or False: Dropout is a computationally expensive technique compared with bagging.
  • true
  • false
Q13 | Suppose you want to apply a stepwise forward selection method for choosing the best models for an ensemble. Which of the following is the correct order of steps? (Note: you have more than 1000 models’ predictions; a sketch of the procedure follows the options.)
  1. Add the models’ predictions to the ensemble one by one (or, in other terms, take the average), keeping each addition that improves the metric on the validation set
  2. Start with an empty ensemble
  3. Return the ensemble from the nested set of ensembles that has maximum performance on the validation set
  • 1-2-3
  • 1-3-4
  • 2-1-3
  • none of above
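A minimal sketch of the 2-1-3 procedure, assuming `preds` is a list of per-model prediction arrays on the validation set and `score` is a metric to maximize (all names here are hypothetical):

```python
import numpy as np

def forward_ensemble_selection(preds, y_val, score):
    """Greedy forward selection: start empty (step 2), grow greedily (step 1),
    return the best ensemble found on the validation set (step 3)."""
    ensemble, best_score = [], -np.inf
    improved = True
    while improved:
        improved = False
        for p in preds:  # try adding each model's predictions in turn
            s = score(y_val, np.mean(ensemble + [p], axis=0))
            if s > best_score:
                best_score, best_add, improved = s, p, True
        if improved:
            ensemble.append(best_add)
    return ensemble, best_score
```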
Q14 | Below are two ensemble models:
  1. E1(M1, M2, M3)
  2. E2(M4, M5, M6)
where each Mx is an individual base model. Which are you more likely to choose, given the following conditions for E1 and E2?
E1: the individual models’ accuracies are high, but the models are of the same type (in other terms, less diverse)
E2: the individual models’ accuracies are high, but the models are of different types (in other terms, highly diverse)
  • e1
  • e2
  • any of e1 and e2
  • none of these
Q15 | True or False: In boosting, the individual base learners can be trained in parallel.
  • true
  • false
Q16 | Which of the following is true about bagging?
  1. Bagging can be parallelized
  2. The aim of bagging is to reduce bias, not variance
  3. Bagging helps in reducing overfitting
  • 1 and 2
  • 2 and 3
  • 1 and 3
  • all of these
Q17 | Suppose you are using stacking with n different machine learning algorithms and k folds on the data. Which of the following is true about one-level (m base models + 1 stacker) stacking? (Note: this is a binary classification problem; all base models are trained on all features; you are using k folds for the base models. A sketch follows the options.)
  • you will have only k features after the first stage
  • you will have only m features after the first stage
  • you will have k+m features after the first stage
  • you will have k*n features after the first stage
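A minimal sketch with scikit-learn: each base model contributes one out-of-fold prediction column, so the stacker sees exactly m new features after the first stage (the toy data set is illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)  # toy binary problem

base_models = [LogisticRegression(max_iter=1000),
               DecisionTreeClassifier(random_state=0),
               GaussianNB()]                    # m = 3 base models

# One out-of-fold probability column per base model (k = 5 folds).
first_stage = np.column_stack([
    cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1]
    for m in base_models
])
print(first_stage.shape)                        # (500, 3): m features
stacker = LogisticRegression().fit(first_stage, y)
```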
Q18 | Which of the following is the difference between stacking and blending?
  • stacking has a less stable cv compared with blending
  • in blending, you create out-of-fold predictions
  • stacking is simpler than blending
  • none of these
Q19 | Which of the following can be among the steps in stacking?
  1. Divide the training data into k folds
  2. Train k models, each on k-1 folds, and get the out-of-fold predictions for the remaining fold
  3. Divide the test data set into “k” folds and get individual fold predictions from the different algorithms
  • 1 and 2
  • 2 and 3
  • 1 and 3
  • all of above
Q20 | Which of the following are advantages of stacking?
  1. More robust model
  2. Better predictions
  3. Lower execution time
  • 1 and 2
  • 2 and 3
  • 1 and 3
  • all of the above
Q21 | Which of the following are correct statements about stacking?
  1. A machine learning model is trained on the predictions of multiple machine learning models
  2. A logistic regression will definitely work better in the second stage compared with other classification methods
  3. First-stage models are trained on the full / partial feature space of the training data
  • 1 and 2
  • 2 and 3
  • 1 and 3
  • all of above
Q22 | Which of the following is true about weighted majority votes? (A sketch follows the options.)
  1. We want to give higher weights to better-performing models
  2. Inferior models can overrule the best model if the collective weighted vote for the inferior models is higher than that for the best model
  3. Voting is a special case of weighted voting
  • 1 and 3
  • 2 and 3
  • 1 and 2
  • 1, 2 and 3
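A minimal sketch of a weighted majority vote over binary predictions (the numbers are illustrative): uniform weights recover plain voting (statement 3), and here the two lighter models jointly overrule the heaviest one on two samples (statement 2):

```python
import numpy as np

def weighted_majority_vote(preds, weights):
    """preds: (n_models, n_samples) array of 0/1 predictions;
    weights: per-model weights (uniform weights recover plain voting)."""
    preds, weights = np.asarray(preds), np.asarray(weights)
    score = weights @ preds / weights.sum()  # weighted share of votes for class 1
    return (score >= 0.5).astype(int)

preds = [[1, 0, 1, 1],   # best model, weight 1.5
         [0, 0, 1, 0],   # weaker model, weight 1.0
         [0, 1, 1, 0]]   # weaker model, weight 1.0
print(weighted_majority_vote(preds, [1.5, 1.0, 1.0]))  # -> [0 0 1 0]
```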
Q23 | Which of the following is true about an averaging ensemble? (A sketch follows the options.)
  • it can only be used in classification problems
  • it can only be used in regression problems
  • it can be used in both classification and regression
  • none of these
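A minimal sketch showing the same averaging step in both settings (the numbers are illustrative): average real-valued predictions for regression; average predicted probabilities, then threshold, for classification:

```python
import numpy as np

# Regression: average the real-valued predictions of three models.
reg_preds = np.array([[2.1, 3.0], [1.9, 3.4], [2.3, 3.2]])
print(reg_preds.mean(axis=0))                        # -> [2.1 3.2]

# Classification: average the class-1 probabilities, then threshold at 0.5.
clf_probs = np.array([[0.9, 0.2], [0.6, 0.4], [0.7, 0.3]])
print((clf_probs.mean(axis=0) >= 0.5).astype(int))   # -> [1 0]
```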
Q24 | How can we assign weights to the outputs of different models in an ensemble? (A sketch follows the options.)
  1. Use an algorithm to find the optimal weights
  2. Choose the weights using cross-validation
  3. Give higher weights to more accurate models
  • 1 and 2
  • 1 and 3
  • 2 and 3
  • all of above
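A minimal sketch that combines all three ideas with scikit-learn: a grid search (the algorithm) picks the weight vector by cross-validation, which naturally favors the more accurate models (the candidate weight vectors and toy data are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)  # toy data

vote = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("dt", DecisionTreeClassifier(random_state=0)),
                ("nb", GaussianNB())],
    voting="soft",  # average predicted probabilities, weighted per model
)
# Cross-validated search over candidate weight vectors.
grid = GridSearchCV(vote,
                    {"weights": [[1, 1, 1], [2, 1, 1], [1, 2, 1], [2, 1, 2]]},
                    cv=5)
grid.fit(X, y)
print(grid.best_params_["weights"])
```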