
This set of Machine Learning (ML) Multiple Choice Questions & Answers (MCQs) focuses on Machine Learning Set 10.

Q1 | The correlation coefficient for two real-valued attributes is –0.85. What does this value tell you?
  • the attributes are not linearly related.
  • as the value of one attribute increases the value of the second attribute also increases
  • as the value of one attribute decreases the value of the second attribute increases
  • the attributes show a linear relationship
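
For intuition on Q1's sign: a correlation of –0.85 means one attribute tends to decrease as the other increases. A minimal sketch with made-up data (NumPy assumed) showing a strongly negative Pearson coefficient:

```python
import numpy as np

# Hypothetical attributes: y falls as x rises
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([9.8, 8.1, 6.2, 3.9, 2.2, 0.5])

r = np.corrcoef(x, y)[0, 1]   # Pearson correlation coefficient
print(round(r, 2))            # strongly negative, close to -1
```
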
Q2 | 8 observations are clustered into 3 clusters using the K-Means clustering algorithm. After the first iteration, clusters C1, C2, and C3 contain the following observations: C1: {(2,2), (4,4), (6,6)}, C2: {(0,4), (4,0), (2,5)}, C3: {(5,5), (9,9)}. What will the cluster centroids be if you proceed to the second iteration?
  • c1: (4,4), c2: (2,2), c3: (7,7)
  • c1: (6,6), c2: (4,4), c3: (9,9)
  • c1: (2,2), c2: (0,0), c3: (5,5)
  • c1: (4,4), c2: (3,3), c3: (7,7)
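
The second-iteration centroids in Q2 are simply the per-cluster means of the listed points. A minimal sketch of that recomputation (NumPy assumed):

```python
import numpy as np

clusters = {
    "C1": [(2, 2), (4, 4), (6, 6)],
    "C2": [(0, 4), (4, 0), (2, 5)],
    "C3": [(5, 5), (9, 9)],
}

# New centroid of each cluster = mean of its member points
for name, points in clusters.items():
    centroid = np.mean(np.array(points, dtype=float), axis=0)
    print(name, tuple(centroid))
```
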
Q3 | In the Naive Bayes equation P(C|X) = (P(X|C) * P(C)) / P(X), which part represents the "likelihood"?
  • p(x|c)
  • p(c|x)
  • p(c)
  • p(x)
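
To keep the terminology of Q3 straight: P(X|C) is the likelihood, P(C) the prior, P(X) the evidence, and P(C|X) the posterior. A minimal sketch with made-up numbers, just to label the terms:

```python
# Hypothetical values, chosen only to label the factors of Bayes' rule
likelihood = 0.7   # P(X|C): probability of the data given the class
prior = 0.4        # P(C):   probability of the class before seeing the data
evidence = 0.5     # P(X):   overall probability of the data

posterior = likelihood * prior / evidence   # P(C|X)
print(round(posterior, 2))                  # 0.56
```
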
Q4 | Which of the following options is/are correct regarding the benefits of an ensemble model? 1. Better performance 2. Generalized models 3. Better interpretability
  • 1 and 3
  • 2 and 3
  • 1, 2 and 3
  • 1 and 2
Q5 | What is back propagation?
  • it is another name given to the curvy function in the perceptron
  • it is the transmission of error back through the network to adjust the inputs
  • it is the transmission of error back through the network to allow weights to be adjusted so that the network can learn
  • none of the mentioned
Q6 | Which of the following is an application of NN (Neural Network)?
  • sales forecasting
  • data validation
  • risk management
  • all of the mentioned
Q7 | Neural Networks are complex ______________ with many parameters.
  • linear functions
  • nonlinear functions
  • discrete functions
  • exponential functions
Q8 | Having multiple perceptrons can actually solve the XOR problem satisfactorily: this is because each perceptron can partition off a linear part of the space itself, and they can then combine their results.
  • true – this always works, and these multiple perceptrons learn to classify even complex problems
  • false – perceptrons are mathematically incapable of solving linearly inseparable functions, no matter what you do
  • true – perceptrons can do this but are unable to learn to do it – they have to be explicitly hand-coded
  • false – just having a single perceptron is enough
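
As a concrete illustration of the "hand-coded" option in Q8: two perceptrons (an OR unit and an AND unit) plus a combining unit compute XOR with fixed, hand-chosen weights. A minimal sketch, no learning involved:

```python
def step(z):
    # Threshold activation used by a perceptron
    return 1 if z >= 0 else 0

def xor(x1, x2):
    h_or = step(x1 + x2 - 0.5)        # fires if at least one input is 1
    h_and = step(x1 + x2 - 1.5)       # fires only if both inputs are 1
    return step(h_or - h_and - 0.5)   # OR and not AND = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))   # outputs 0, 1, 1, 0
```
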
Q9 | Which one of the following is not a major strength of the neural network approach?
  • neural network learning algorithms are guaranteed to converge to an optimal solution
  • neural networks work well with datasets containing noisy data
  • neural networks can be used for both supervised learning and unsupervised clustering
  • neural networks can be used for applications that require a time element to be included in the data
Q10 | Which of the following parameters can be tuned for finding a good ensemble model in bagging-based algorithms? 1. Max number of samples 2. Max features 3. Bootstrapping of samples 4. Bootstrapping of features
  • 1
  • 2
  • 3&4
  • 1,2,3&4
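
For reference on Q10, scikit-learn's BaggingClassifier (assuming scikit-learn is available) exposes parameters corresponding to all four items:

```python
from sklearn.ensemble import BaggingClassifier

# Default base learner is a decision tree; the four tunable items from the
# question map directly to constructor parameters.
model = BaggingClassifier(
    n_estimators=50,
    max_samples=0.8,          # 1. max number (fraction) of samples per learner
    max_features=0.8,         # 2. max number (fraction) of features per learner
    bootstrap=True,           # 3. bootstrapping of samples
    bootstrap_features=True,  # 4. bootstrapping of features
    random_state=0,
)
```
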
Q11 | What is back propagation? a) It is another name given to the curvy function in the perceptron b) It is the transmission of error back through the network to adjust the inputs c) It is the transmission of error back through the network to allow weights to be adjusted so that the network can learn d) None of the mentioned
  • a
  • b
  • c
  • b&c
Q12 | In an election for the head of a college, N candidates are competing against each other and people vote for one of the candidates. Voters don't communicate with each other while casting their votes. Which of the following ensemble methods works similarly to the election procedure described?
  • bagging
  • boosting
  • stacking
  • randomization
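
The analogy in Q12: in bagging, each base learner is trained independently (like voters who do not communicate) and the final prediction is a majority vote. A tiny sketch of the vote-counting step, with made-up predictions:

```python
from collections import Counter

# Hypothetical predictions from independently trained base learners
votes = ["A", "B", "A", "A", "B"]

winner, count = Counter(votes).most_common(1)[0]
print(winner, count)   # A 3 -- simple majority, as in bagging
```
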
Q13 | What is the sequence of the following tasks in a perceptron? 1. Initialize the weights of the perceptron randomly 2. Go to the next batch of the dataset 3. If the prediction does not match the output, change the weights 4. For a sample input, compute an output
  • 1, 4, 3, 2
  • 3, 1, 2, 4
  • 4, 3, 2, 1
  • 1, 2, 3, 4
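
A minimal perceptron training loop written to follow the sequence in Q13 (initialize, compute an output for a sample, update the weights on a mismatch, move on to the next batch); the toy data and learning rate below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # toy inputs
y = np.array([0, 0, 0, 1])                                   # AND labels
lr = 0.1

w = rng.normal(size=2)                  # 1. initialize weights randomly
b = 0.0

for epoch in range(20):                 # 2. move on to the next pass over the data
    for xi, target in zip(X, y):
        pred = int(xi @ w + b >= 0)     # 4. for a sample input, compute an output
        if pred != target:              # 3. if prediction does not match, change weights
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
```
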
Q14 | When does a neural network model become a deep learning model?
  • when you add more hidden layers and increase depth of neural network
  • when there is higher dimensionality of data
  • when the problem is an image recognition problem
  • when there is lower dimensionality of data
Q15 | What are the steps for using a gradient descent algorithm? 1) Calculate the error between the actual value and the predicted value 2) Reiterate until you find the best weights of the network 3) Pass an input through the network and get values from the output layer 4) Initialize random weights and bias 5) Go to each neuron that contributes to the error and change its respective values to reduce the error
  • 1, 2, 3, 4, 5
  • 4, 3, 1, 5, 2
  • 3, 2, 1, 5, 4
  • 5, 4, 3, 2, 1
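
A minimal single-neuron sketch that walks through the steps of Q15 in the order 4, 3, 1, 5, 2 (initialize, forward pass, compute the error, adjust the contributing parameters, repeat); the toy data is made up:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([1.0, 2.0, 3.0, 4.0])   # toy inputs
y = np.array([2.0, 4.0, 6.0, 8.0])   # targets (y = 2x)
lr = 0.05

w, b = rng.normal(), 0.0             # 4) initialize random weight and bias

for step in range(1000):             # 2) reiterate until the weights are good
    pred = X * w + b                 # 3) pass inputs through, get outputs
    error = pred - y                 # 1) error between predicted and actual values
    w -= lr * np.mean(error * X)     # 5) adjust each contributing parameter
    b -= lr * np.mean(error)

print(round(w, 2), round(b, 2))      # converges toward w = 2.0, b = 0.0
```
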
Q16 | A 4-input neuron has weights 1, 2, 3 and 4. The transfer function is linear with the constant of proportionality being equal to 2. The inputs are 4, 10, 10 and 30 respectively. What will be the output?
  • 238
  • 76
  • 248
  • 348
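
The arithmetic behind Q16: with a linear transfer function whose constant of proportionality is 2, the output is 2 × (weighted sum of the inputs). A quick check:

```python
weights = [1, 2, 3, 4]
inputs = [4, 10, 10, 30]

weighted_sum = sum(w * x for w, x in zip(weights, inputs))  # 4 + 20 + 30 + 120 = 174
output = 2 * weighted_sum                                   # linear transfer, slope 2
print(output)   # 348
```
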
Q17 | Increasing the size of a convolutional kernel would necessarily increase the performance of a convolutional network.
  • true
  • false
Q18 | The F-test
  • is an omnibus test
  • considers the reduction in error when moving from the complete model to the reduced model
  • considers the reduction in error when moving from the reduced model to the complete model
  • can only be conceptualized as a reduction in error
Q19 | What is true about an ensembled classifier? 1. Classifiers that are more "sure" can vote with more conviction 2. Classifiers can be more "sure" about a particular part of the space 3. Most of the time, it performs better than a single classifier
  • 1 and 2
  • 1 and 3
  • 2 and 3
  • all of the above
Q20 | Which of the following options is/are correct regarding the benefits of an ensemble model? 1. Better performance 2. Generalized models 3. Better interpretability
  • 1 and 3
  • 2 and 3
  • 1 and 2
  • 1, 2 and 3
Q21 | Which of the following can be true for selecting base learners for an ensemble? 1. Different learners can come from the same algorithm with different hyperparameters 2. Different learners can come from different algorithms 3. Different learners can come from different training spaces
  • 1
  • 2
  • 1 and 3
  • 1, 2 and 3
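
All three sources of diversity in Q21 can be combined in practice. A minimal sketch (scikit-learn assumed) where base learners differ by hyperparameters and by algorithm; varying the training space per learner (item 3) is what bagging adds on top:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)

ensemble = VotingClassifier(estimators=[
    ("tree_shallow", DecisionTreeClassifier(max_depth=2)),  # 1. same algorithm,
    ("tree_deep", DecisionTreeClassifier(max_depth=8)),     #    different hyperparameters
    ("logreg", LogisticRegression(max_iter=1000)),          # 2. a different algorithm
])
ensemble.fit(X[:150], y[:150])
print(ensemble.score(X[150:], y[150:]))   # accuracy on the held-out rows
```
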