
This set of Machine Learning (ML) Multiple Choice Questions & Answers (MCQs) focuses on Machine Learning Set 20.

Q1 | Suppose that we have N independent variables (X1, X2, …, Xn) and the dependent variable is Y. Now imagine that you are applying linear regression by fitting the best-fit line using least-squares error on this data. You find that the correlation coefficient for one of the variables (say X1) with Y is -0.95. Which of the following is true for X1?
  • relation between x1 and y is weak
  • relation between x1 and y is strong
  • relation between x1 and y is neutral
  • correlation can't judge the relationship
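A quick way to see why Q1 hinges on the magnitude of the coefficient rather than its sign: a Pearson coefficient of -0.95 is very close to -1, which indicates a strong (negative) linear relationship. The sketch below uses made-up numbers purely for illustration.

```python
# Illustrative sketch (made-up data): a Pearson correlation near -1 or +1
# means a strong linear relationship; the sign only gives its direction.
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([10.1, 8.2, 5.9, 4.1, 2.0])  # y falls as x1 rises

r = np.corrcoef(x1, y)[0, 1]
print(f"Pearson r = {r:.2f}")  # close to -1, i.e. a strong negative relation
```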
Q2 | We have been given a dataset with n records in which we have an input attribute x and an output attribute y. Suppose we use a linear regression method to model this data. To test our linear regressor, we split the data into a training set and a test set randomly. What do you expect will happen to bias and variance as you increase the size of the training data?
  • bias increases and variance increases
  • bias decreases and variance increases
  • bias decreases and variance decreases
  • bias increases and variance decreases
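One hedged way to observe the behaviour Q2 asks about: for a fixed linear model, bias stays roughly constant while the spread of test scores across random splits, a rough proxy for variance, shrinks as the training set grows. The sketch below uses synthetic data and scikit-learn's learning_curve; the numbers are illustrative only.

```python
# Illustrative sketch (synthetic data): for a fixed linear model, variance
# tends to shrink as the training set grows, while bias stays about the same.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import learning_curve

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 1))
y = 3 * X.ravel() + rng.normal(scale=2.0, size=500)

sizes, train_scores, test_scores = learning_curve(
    LinearRegression(), X, y, train_sizes=[0.1, 0.3, 0.6, 1.0], cv=5)

# Spread of test scores across folds is a rough proxy for variance.
for n, spread in zip(sizes, test_scores.std(axis=1)):
    print(f"train size {int(n):3d}: score spread {float(spread):.3f}")
```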
Q3 | Suppose you find yourself in a situation where your linear regression model is underfitting the data. In such a situation, which of the following options would you consider? 1. I will add more variables. 2. I will start introducing polynomial-degree variables. 3. I will remove some variables.
  • 1 and 2
  • 2 and 3
  • 1 and 3
  • 1, 2 and 3
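For the underfitting scenario in Q3, options 1 and 2 both add model capacity. A minimal scikit-learn sketch of option 2 (polynomial-degree variables) on synthetic non-linear data:

```python
# Illustrative sketch: adding polynomial-degree features gives an underfitting
# linear model more capacity (the data here is synthetic).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = X.ravel() ** 2 + rng.normal(scale=0.3, size=200)  # clearly non-linear target

plain = LinearRegression().fit(X, y)
poly = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)

print("plain R^2:", round(plain.score(X, y), 3))  # low: the line underfits
print("poly  R^2:", round(poly.score(X, y), 3))   # much higher
```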
Q4 | Problem: Players will play if the weather is sunny. Is this statement correct?
  • true
  • false
Q5 | For the given weather data, calculate the probability of not playing.
  • 0.4
  • 0.64
  • 0.36
  • 0.5
Q6 | Suppose you have trained an SVM with a linear decision boundary. After training the SVM, you correctly infer that your SVM model is underfitting. Which of the following options would you be more likely to consider for the next SVM iteration?
  • you want to increase your data points
  • you want to decrease your data points
  • you will try to calculate more variables
  • you will try to reduce the features
Q7 | The minimum time complexity for training an SVM is O(n²). According to this fact, what sizes of datasets are not best suited for SVMs?
  • large datasets
  • small datasets
  • medium sized datasets
  • size does not matter
Q8 | What do you mean by generalization error in terms of the SVM?
  • how far the hyperplane is from the support vectors
  • how accurately the svm can predict outcomes for unseen data
  • the threshold amount of error in an svm
Q9 | We usually use feature normalization before using the Gaussian kernel in SVM. What is true about feature normalization? 1. We do feature normalization so that the new feature will dominate the other features. 2. Sometimes, feature normalization is not feasible in the case of categorical variables. 3. Feature normalization always helps when we use a Gaussian kernel in SVM.
  • 1
  • 1 and 2
  • 1 and 3
  • 2 and 3
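A hedged illustration of why scaling matters before a Gaussian (RBF) kernel, as in Q9: the kernel is distance-based, so features on large scales dominate it. The sketch below uses scikit-learn's built-in wine dataset purely as an example; one-hot encoded categorical variables are usually excluded from such scaling.

```python
# Illustrative sketch: standardizing features before an RBF-kernel SVM keeps
# large-scale features from dominating the kernel distance.
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)

raw = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
scaled = cross_val_score(make_pipeline(StandardScaler(), SVC(kernel="rbf")),
                         X, y, cv=5).mean()
print(round(raw, 3), round(scaled, 3))  # scaling usually helps markedly here
```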
Q10 | Support vectors are the data points that lie closest to the decision surface.
  • true
  • false
Q11 | If I am using all features of my dataset and I achieve 100% accuracy on my training set, but only ~70% on the validation set, what should I look out for?
  • underfitting
  • nothing, the model is perfect
  • overfitting
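The train/validation gap described in Q11 is the classic signature of overfitting. A minimal sketch with synthetic data and an unconstrained decision tree (chosen only because it overfits easily):

```python
# Illustrative sketch: a near-perfect training score paired with a much lower
# validation score points to overfitting.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=20, n_informative=5,
                           flip_y=0.2, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)  # no depth limit
print("train accuracy:", model.score(X_tr, y_tr))    # ~1.0
print("valid accuracy:", model.score(X_val, y_val))  # noticeably lower
```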
Q12 | Suppose you are using a linear SVM classifier for a 2-class classification problem. You have been given the following data, in which some points circled in red represent the support vectors. If you remove any one of these red points from the data, will the decision boundary change?
  • yes
  • no
Q13 | Linear SVMs have no hyperparameters that need to be set by cross-validation
  • true
  • false
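For Q13, recall that even a linear SVM has the regularization strength C (among others) as a hyperparameter, and C is routinely chosen by cross-validation. A minimal sketch, assuming scikit-learn's LinearSVC and its built-in breast-cancer dataset as stand-ins:

```python
# Illustrative sketch: tuning a linear SVM's C by cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

X, y = load_breast_cancer(return_X_y=True)

grid = GridSearchCV(make_pipeline(StandardScaler(), LinearSVC(max_iter=5000)),
                    {"linearsvc__C": [0.01, 0.1, 1, 10]}, cv=5)
grid.fit(X, y)
print(grid.best_params_)  # the C value selected by cross-validation
```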
Q14 | For the given weather data, what is the probability that players will play if the weather is sunny?
  • 0.5
  • 0.26
  • 0.73
  • 0.6
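The weather table itself is not reproduced in this set, so the counts below come from the classic 14-row play-tennis dataset, an assumption made only to show the Bayes-rule arithmetic this kind of question expects:

```python
# Hedged sketch: counts are from the classic play-tennis table (an assumption,
# not the table the question refers to).
total, play_yes = 14, 9
sunny_total, sunny_yes = 5, 3

p_yes = play_yes / total                  # P(Play = Yes)  = 9/14
p_sunny = sunny_total / total             # P(Sunny)       = 5/14
p_sunny_given_yes = sunny_yes / play_yes  # P(Sunny | Yes) = 3/9

p_yes_given_sunny = p_sunny_given_yes * p_yes / p_sunny
print(round(p_yes_given_sunny, 2))        # 0.6 with these counts
```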
Q15 | 100 people are at a party. The given data records how many wear pink or not, and whether each guest is a man or not. Imagine a pink-wearing guest leaves; what is the probability that this guest is a man?
  • 0.4
  • 0.2
  • 0.6
  • 0.45
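The party table for Q15 is likewise not shown here, so the counts below are purely hypothetical; they only illustrate that the question asks for the conditional probability P(man | wears pink):

```python
# Hedged sketch: hypothetical counts, used only to show the conditional
# probability P(man | pink) = (men in pink) / (all guests in pink).
men_in_pink, women_in_pink = 5, 20   # made-up counts out of 100 guests

p_man_given_pink = men_in_pink / (men_in_pink + women_in_pink)
print(p_man_given_pink)              # 0.2 with these made-up counts
```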
Q16 | Linear SVMs have no hyperparameters
  • true
  • false
Q17 | What are the different Algorithm techniques in Machine Learning?
  • supervised learning and semi-supervised learning
  • unsupervised learning and transduction
  • both a & b
  • none of the mentioned
Q18 | ______ can be adopted when it's necessary to categorize a large amount of data with a few complete examples or when there's the need to impose some constraints on a clustering algorithm.
  • supervised
  • semi-supervised
  • reinforcement
  • clusters
Q19 | In reinforcement learning, this feedback is usually called ______.
  • overfitting
  • overlearning
  • reward
  • none of the above
Q20 | In the last decade, many researchers started training bigger and bigger models, built with several different layers; that's why this approach is called ______.
  • deep learning
  • machine learning
  • reinforcement learning
  • unsupervised learning
Q21 | What does learning exactly mean?
  • robots are programmed so that they can perform the task based on data they gather from sensors
  • a set of data is used to discover the potentially predictive relationship
  • learning is the ability to change according to external stimuli and remembering most of all previous experiences
  • it is a set of data is used to discover the potentially predictive relationship
Q22 | When it is necessary to allow the model to develop a generalization ability and avoid a common problem called ______.
  • overfitting
  • overlearning
  • classification
  • regression
Q23 | Techniques that involve the usage of both labeled and unlabeled data are called ______.
  • supervised
  • semi-supervised
  • unsupervised
  • none of the above