This set of Machine Learning (ML) Multiple Choice Questions & Answers (MCQs) focuses on Machine Learning Set 23.

Q1 | What do you mean by generalization error in terms of the SVM?
  • how far the hyperplane is from the support vectors
  • how accurately the SVM can predict outcomes for unseen data
  • the threshold amount of error in an SVM
Q2 | The effectiveness of an SVM depends upon:
  • selection of kernel
  • kernel parameters
  • soft margin parameter
  • all of the above
Q3 | Support vectors are the data points that lie closest to the decision surface.
  • true
  • false
Q4 | SVMs are less effective when:
  • the data is linearly separable
  • the data is clean and ready to use
  • the data is noisy and contains overlapping points
Q5 | Suppose you are using an RBF kernel in SVM with a high Gamma value. What does this signify?
  • the model would consider even far-away points from the hyperplane for modeling
  • the model would consider only the points close to the hyperplane for modeling
  • the model would not be affected by the distance of points from the hyperplane
  • none of the above
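To see why a high Gamma makes the model depend only on points close to the hyperplane, here is a minimal NumPy sketch of the RBF kernel k(x, y) = exp(-gamma · ||x − y||²); the points and gamma values below are illustrative only:

```python
import numpy as np

def rbf_kernel(x, y, gamma):
    """RBF (Gaussian) kernel: similarity decays with squared distance."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

a = np.array([0.0, 0.0])
far = np.array([3.0, 0.0])  # a point at distance 3 from a

# With a small gamma, even a far-away point keeps noticeable similarity.
low = rbf_kernel(a, far, gamma=0.1)    # exp(-0.9), roughly 0.41

# With a high gamma, similarity to the far point collapses toward zero,
# so only nearby points shape the decision boundary.
high = rbf_kernel(a, far, gamma=10.0)  # exp(-90), effectively 0
```

With high gamma the kernel matrix is nearly the identity, which is why such models tend to overfit by wrapping tightly around individual training points.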
Q6 | The cost parameter in the SVM means:
  • the number of cross- validations to be made
  • the kernel to be used
  • the tradeoff between misclassification and simplicity of the model
  • none of the above
Q7 | If I am using all features of my dataset and I achieve 100% accuracy on my training set, but ~70% on the validation set, what should I look out for?
  • underfitting
  • nothing, the model is perfect
  • overfitting
Q8 | Which of the following are real world applications of the SVM?
  • text and hypertext categorization
  • image classification
  • clustering of news articles
  • all of the above
Q9 | Suppose you have trained an SVM with a linear decision boundary. After training, you correctly infer that your SVM model is underfitting. Which of the following options would you be more likely to consider when iterating the SVM next time?
  • you want to increase your data points
  • you want to decrease your data points
  • you will try to calculate more variables
  • you will try to reduce the features
Q10 | We usually use feature normalization before using the Gaussian kernel in SVM. What is true about feature normalization?
1. We do feature normalization so that the new feature will dominate the others
2. Sometimes, feature normalization is not feasible in the case of categorical variables
3. Feature normalization always helps when we use the Gaussian kernel in SVM
  • 1
  • 1 and 2
  • 1 and 3
  • 2 and 3
Q11 | Linear SVMs have no hyperparameters that need to be set by cross-validation.
  • true
  • false
Q12 | In a real problem, you should check to see if the SVM is separable and then include slack variables if it is not.
  • true
  • false
Q13 | In reinforcement learning, this feedback is usually called the ________.
  • overfitting
  • overlearning
  • reward
  • none of above
Q14 | In the last decade, many researchers started training bigger and bigger models, built with several different layers; that's why this approach is called ________.
  • deep learning
  • machine learning
  • reinforcement learning
  • unsupervised learning
Q15 | It is necessary to allow the model to develop a generalization ability and avoid a common problem called ________.
  • overfitting
  • overlearning
  • classification
  • regression
Q16 | Techniques that involve the usage of both labeled and unlabeled data are called ________.
  • supervised
  • semi-supervised
  • unsupervised
  • none of the above
Q17 | Reinforcement learning is particularly efficient when ________.
  • the environment is not completely deterministic
  • it's often very dynamic
  • it's impossible to have a precise error measure
  • all above
Q18 | During the last few years, many ________ algorithms have been applied to deep neural networks to learn the best policy for playing Atari video games and to teach an agent how to associate the right action with an input representing the state.
  • logical
  • classical
  • classification
  • none of above
Q19 | If there is only a discrete number of possible outcomes (called categories), the process becomes a ________.
  • regression
  • classification
  • model-free
  • categories
Q20 | Let’s say you are working with categorical feature(s) and you have not looked at the distribution of the categorical variable in the test data. You want to apply one-hot encoding (OHE) on the categorical feature(s). What challenges may you face if you apply OHE on a categorical variable of the train dataset?
  • all categories of the categorical variable are not present in the test dataset.
  • the frequency distribution of categories is different in train as compared to the test dataset.
  • train and test always have the same distribution.
  • both a and b
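The first challenge above can be sketched with scikit-learn's `OneHotEncoder`; the color categories here are made up for illustration, and `handle_unknown="ignore"` is the usual way to survive test-time categories the encoder never saw during fit:

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

train = np.array([["red"], ["green"], ["blue"]])
test = np.array([["red"], ["yellow"]])  # "yellow" never appears in train

# handle_unknown="ignore" encodes an unseen test category as an all-zero
# row instead of raising an error at transform time.
enc = OneHotEncoder(handle_unknown="ignore")
enc.fit(train)
encoded = enc.transform(test).toarray()
```

Without `handle_unknown="ignore"`, transforming `"yellow"` would raise an error, which is exactly the train/test category-mismatch problem the question describes.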
Q21 | scikit-learn also provides functions for creating dummy datasets from scratch:
  • make_classification()
  • make_regression()
  • make_blobs()
  • all above
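All three generators exist in `sklearn.datasets`; a minimal sketch (the sample and feature counts are arbitrary):

```python
from sklearn.datasets import make_blobs, make_classification, make_regression

# A 5-feature binary classification problem.
X_c, y_c = make_classification(n_samples=100, n_features=5, random_state=0)

# A 3-feature regression problem with a continuous target.
X_r, y_r = make_regression(n_samples=100, n_features=3, random_state=0)

# 100 points grouped around 3 Gaussian centers (useful for clustering demos).
X_b, y_b = make_blobs(n_samples=100, centers=3, random_state=0)
```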
Q22 |           which can accept a NumPy RandomState generator or an integer seed.
  • make_blobs
  • random_state
  • test_size
  • training_size
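The `random_state` parameter is what the question describes; a minimal sketch using `make_blobs` (the sizes are arbitrary) showing that the same seed reproduces the same data:

```python
from sklearn.datasets import make_blobs

# The same integer seed reproduces exactly the same dataset across calls;
# a numpy.random.RandomState instance can be passed instead of an int.
X1, y1 = make_blobs(n_samples=50, centers=2, random_state=42)
X2, y2 = make_blobs(n_samples=50, centers=2, random_state=42)
```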
Q23 | In many classification problems, the target dataset is made up of categorical labels which cannot immediately be processed by any algorithm. An encoding is needed, and scikit-learn offers at least ________ valid options.
  • 1
  • 2
  • 3
  • 4
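Two of scikit-learn's label-encoding options can be sketched as follows (the animal labels are made up for illustration):

```python
from sklearn.preprocessing import LabelBinarizer, LabelEncoder

labels = ["cat", "dog", "cat", "bird"]

# Option 1: LabelEncoder maps each class to an integer code
# (classes are sorted, so bird=0, cat=1, dog=2).
le = LabelEncoder()
int_codes = le.fit_transform(labels)

# Option 2: LabelBinarizer maps each class to a one-hot row.
lb = LabelBinarizer()
onehot = lb.fit_transform(labels)
```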
Q24 | It's possible to specify if the scaling process must include both mean and standard deviation using the parameters ________.
  • with_mean=true/false
  • with_std=true/false
  • both a & b
  • none of the mentioned
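Both parameters belong to scikit-learn's `StandardScaler`; a minimal sketch with a toy one-column matrix:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0], [2.0], [3.0], [4.0]])

# with_mean=True, with_std=True: remove the mean AND divide by the std.
standardized = StandardScaler(with_mean=True, with_std=True).fit_transform(X)

# with_std=False: the mean is removed, but the spread is left untouched.
centered = StandardScaler(with_mean=True, with_std=False).fit_transform(X)
```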
Q25 | Which of the following selects the best K high-score features?
  • SelectPercentile
  • FeatureHasher
  • SelectKBest
  • all above
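`SelectKBest` keeps only the k features with the highest scores under a chosen scoring function; a minimal sketch on a synthetic dataset (the shapes and `f_classif` score function are illustrative choices):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=200, n_features=10,
                           n_informative=3, random_state=0)

# Keep only the k=3 features with the highest ANOVA F-scores.
selector = SelectKBest(score_func=f_classif, k=3)
X_new = selector.fit_transform(X, y)
```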