This set of Machine Learning (ML) Multiple Choice Questions & Answers (MCQs) focuses on Machine Learning Set 3.

Q1 | In multiclass classification, the number of classes must be
Q2 | Which of the following can only be used when training data are linearly separable?
Q3 | What is the impact of high variance on the training set?
Q4 | What do you mean by a hard margin?
Q5 | The effectiveness of an SVM depends upon:
Q6 | What are support vectors?
Q7 | A perceptron adds up all the weighted inputs it receives, and if the sum exceeds a certain threshold, it outputs a 1; otherwise it outputs a 0. (See the perceptron sketch after this list.)
Q8 | What is the purpose of the Kernel Trick?
Q9 | Which of the following can only be used when training data are linearly separable?
Q10 | The firing rate of a neuron
Q11 | Which of the following evaluation metrics cannot be applied to the output of logistic regression to compare it with the target?
Q12 | The cost parameter in the SVM means:
Q13 | The kernel trick
Q14 | How does the bias-variance decomposition of a ridge regression estimator compare with that of ordinary least squares regression?
Q15 | Which of the following are real world applications of the SVM?
Q16 | How can SVM be classified?
Q17 | Which of the following can help to reduce overfitting in an SVM classifier?
Q18 | Suppose you have trained an SVM with a linear decision boundary. After training, you correctly infer that your SVM model is underfitting. Which of the following options would you be most likely to consider for the next SVM iteration?
Q19 | What is/are true about the kernel in SVM? 1. The kernel function maps low-dimensional data to a high-dimensional space. 2. It is a similarity function. (See the kernel sketch after this list.)
Q20 | You trained a binary classifier model that gives very high accuracy on the training data but much lower accuracy on validation data. Which of the following is false?
Q21 | Suppose your model is demonstrating high variance across different training sets. Which of the following is NOT a valid way to try to reduce the variance?
Q22 | Suppose you are using an RBF kernel in an SVM with a high gamma value. What does this signify? (See the gamma sketch after this list.)
Q23 | We usually use feature normalization before using the Gaussian kernel in SVM. What is true about feature normalization? 1. We do feature normalization so that a new feature will dominate the others. 2. Sometimes, feature normalization is not feasible in the case of categorical variables. 3. Feature normalization always helps when we use a Gaussian kernel in SVM. (See the gamma sketch after this list.)
Q24 | Wrapper methods are hyper-parameter selection methods that
Q25 | Which of the following methods cannot achieve zero training error on any linearly separable dataset?
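
As a minimal illustration of the summation-and-threshold behaviour described in Q7, here is a hedged Python sketch; the `perceptron` helper, its weights, and the threshold value are all hypothetical choices for illustration, not part of the original question set.

```python
# Minimal perceptron sketch (hypothetical weights and threshold),
# illustrating the summation-and-threshold behaviour from Q7.

def perceptron(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs exceeds the threshold, else 0."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# Example with two inputs, hypothetical weights, and a threshold of 0.5.
print(perceptron([1.0, 0.0], [0.6, 0.4], 0.5))  # weighted sum 0.6 > 0.5 -> 1
print(perceptron([0.0, 1.0], [0.6, 0.4], 0.5))  # weighted sum 0.4 <= 0.5 -> 0
```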
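Q19's two statements can be made concrete with a small sketch: a kernel evaluated in the low-dimensional input space returns the same similarity score (inner product) that an explicit feature map would produce in a higher-dimensional space. The degree-2 polynomial kernel and the `phi` map below are illustrative choices, assuming NumPy is available.

```python
import numpy as np

# For 2-D inputs, the degree-2 polynomial kernel k(x, z) = (x . z)**2
# corresponds to the explicit feature map
# phi(x) = (x1**2, x2**2, sqrt(2) * x1 * x2) into 3-D space.

def poly2_kernel(x, z):
    """Kernel computed directly in the original 2-D input space."""
    return np.dot(x, z) ** 2

def phi(x):
    """Explicit map into the higher-dimensional feature space."""
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2) * x[0] * x[1]])

x = np.array([1.0, 2.0])
z = np.array([3.0, 0.5])

print(poly2_kernel(x, z))       # 16.0
print(np.dot(phi(x), phi(z)))   # 16.0 -- same similarity, no explicit mapping
```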
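For Q22 and Q23, the following sketch (assuming scikit-learn is available) standardizes features before fitting an RBF-kernel SVM and sweeps the gamma value; the dataset and the specific gamma values are arbitrary illustrations. A high gamma makes the RBF kernel very local, so only nearby points look similar, which typically pushes training accuracy up while validation accuracy drops.

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# A small synthetic dataset, split into train and test sets.
X, y = make_moons(n_samples=300, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for gamma in (0.1, 1.0, 100.0):  # high gamma -> very local similarity
    # Standardize features first, then fit the RBF-kernel SVM.
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma=gamma))
    model.fit(X_train, y_train)
    print(f"gamma={gamma}: train={model.score(X_train, y_train):.2f}, "
          f"test={model.score(X_test, y_test):.2f}")
```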