On This Page

This set of Machine Learning (ML) Multiple Choice Questions & Answers (MCQs) focuses on Machine Learning Set 6.

Q1 | How can we best represent ‘support’ for the association rule “If X and Y, then Z”?
Q2 | Choose the correct statement with respect to the ‘confidence’ metric in association rules.
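For Q1 and Q2, a minimal sketch of how support and confidence are computed for the rule “If X and Y, then Z” (the transactions below are hypothetical toy data, not part of the quiz):

```python
# Toy transactions (hypothetical) for the rule "If X and Y, then Z".
transactions = [
    {"X", "Y", "Z"},
    {"X", "Y"},
    {"X", "Z"},
    {"X", "Y", "Z"},
    {"Y", "Z"},
]

antecedent = {"X", "Y"}
consequent = {"Z"}

# support(X, Y -> Z) = P(X, Y, Z): fraction of transactions containing
# every item appearing in the rule.
both = sum(1 for t in transactions if antecedent | consequent <= t)
support = both / len(transactions)

# confidence(X, Y -> Z) = P(Z | X, Y): among transactions containing the
# antecedent, the fraction that also contain the consequent.
ante = sum(1 for t in transactions if antecedent <= t)
confidence = both / ante

print(support)     # 2/5 = 0.4
print(confidence)  # 2/3 ≈ 0.667
```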
Q3 | What are tree based classifiers?
Q4 | Which of the following statements are correct in reference to information gain?
a. It is biased towards single-valued attributes
b. It is biased towards multi-valued attributes
c. ID3 makes use of information gain
d. The approach used by ID3 is greedy
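For Q4, a toy worked example of the information gain used by ID3 (the labels below are hypothetical, chosen only to make the arithmetic visible):

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                for c in set(labels))

# Hypothetical parent node: 5 "yes" and 5 "no" -> entropy = 1.0 bit.
parent = ["yes"] * 5 + ["no"] * 5
left = ["yes"] * 4 + ["no"]       # one child partition
right = ["yes"] + ["no"] * 4      # the other child partition

# Information gain = parent entropy minus the size-weighted average
# entropy of the child partitions.
n = len(parent)
gain = entropy(parent) - sum(len(c) / n * entropy(c) for c in (left, right))
print(round(gain, 4))  # ≈ 0.2781
```

Because gain of this kind grows as partitions get purer, attributes with many distinct values tend to score highly, which is the bias option (b) refers to.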
Q5 | A multivariate split is where the partitioning of tuples is based on a combination of attributes rather than on a single attribute.
Q6 | Gain ratio tends to prefer unbalanced splits in which one partition is much smaller than the other.
Q7 | The Gini index is not biased towards multi-valued attributes.
Q8 | The Gini index does not favour equal-sized partitions.
Q9 | When the number of classes is large, the Gini index is not a good choice.
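For Q7–Q9, a minimal sketch of the Gini index, which measures node impurity as 1 − Σ pᵢ² over the class proportions pᵢ (the label lists are hypothetical):

```python
def gini(labels):
    """Gini impurity of a list of class labels: 1 - sum(p_i^2)."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

print(gini(["a"] * 10))             # pure node: 0.0
print(gini(["a"] * 5 + ["b"] * 5))  # maximally mixed two-class node: 0.5
```

With many classes the maximum impurity approaches 1 and differences between splits shrink, which is why the Gini index becomes a weaker criterion as the number of classes grows (Q9).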
Q10 | Attribute selection measures are also known as splitting rules.
Q11 | This clustering approach initially assumes that each data instance represents a single cluster.
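For Q11, a minimal sketch of agglomerative clustering, assuming 1-D points and single-linkage distance: every instance starts as its own cluster, and the two closest clusters are merged repeatedly.

```python
def agglomerate(points, target_k):
    """Single-linkage agglomerative clustering on 1-D points (toy sketch)."""
    clusters = [[p] for p in points]  # each instance starts as its own cluster
    while len(clusters) > target_k:
        # Find the pair of clusters whose closest members are nearest.
        i, j = min(
            ((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
            key=lambda ab: min(abs(x - y)
                               for x in clusters[ab[0]] for y in clusters[ab[1]]),
        )
        clusters[i] += clusters.pop(j)  # merge the closest pair
    return clusters

print(agglomerate([1.0, 1.2, 5.0, 5.1, 9.0], target_k=3))
# → [[1.0, 1.2], [5.0, 5.1], [9.0]]
```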
Q12 | Which statement is true about the K-Means algorithm?
Q13 | KDD represents extraction of
Q14 | The most general form of distance is
Q15 | Which of the following algorithms comes under classification?
Q16 | How is hierarchical agglomerative clustering typically visualized?
Q17 | The _______ step eliminates the extensions of (k-1)-itemsets which are not found to be frequent from being considered for counting support.
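For Q17, a minimal sketch of the Apriori prune step, assuming toy itemsets: a candidate k-itemset is discarded when any of its (k−1)-subsets is not frequent, so it is never counted for support.

```python
from itertools import combinations

# Hypothetical frequent 2-itemsets found in an earlier pass.
frequent_2 = {frozenset(s) for s in [("A", "B"), ("A", "C"), ("B", "C"), ("B", "D")]}

# Candidate 3-itemsets generated by joining the 2-itemsets.
candidates_3 = [frozenset(["A", "B", "C"]), frozenset(["A", "B", "D"])]

# Prune: keep a candidate only if every 2-subset of it is frequent.
pruned = [
    c for c in candidates_3
    if all(frozenset(sub) in frequent_2 for sub in combinations(c, 2))
]
print(pruned)  # only {A, B, C} survives; {A, B, D} is pruned since {A, D} is not frequent
```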
Q18 | The distance between two points calculated using the Pythagoras theorem is
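For Q14 and Q18, a small sketch: the Minkowski distance is the general form, and setting p = 2 gives the Euclidean distance (Pythagoras theorem), while p = 1 gives the Manhattan distance (the points below are arbitrary examples):

```python
def minkowski(a, b, p):
    """Minkowski distance of order p between two equal-length tuples."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

p1, p2 = (0, 0), (3, 4)
print(minkowski(p1, p2, 2))  # Euclidean (Pythagoras): 5.0
print(minkowski(p1, p2, 1))  # Manhattan: 7.0
```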
Q19 | Which one of these is not a tree based learner?
Q20 | Which one of these is a tree based learner?
Q21 | Which of the following classifications would best suit a student performance classification system?
Q22 | This clustering algorithm terminates when the mean values computed for the current iteration are identical to the mean values computed for the previous iteration.
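For Q12 and Q22, a minimal 1-D K-Means sketch (k = 2, with a simple hypothetical initialization) whose loop stops exactly when the newly computed means equal the previous iteration's means:

```python
def kmeans_1d(points):
    """Toy 1-D K-Means with k = 2; terminates when the means stop changing."""
    means = [min(points), max(points)]  # hypothetical simple initialization
    while True:
        # Assignment step: each point joins its nearest mean's cluster.
        clusters = [[], []]
        for x in points:
            clusters[0 if abs(x - means[0]) <= abs(x - means[1]) else 1].append(x)
        # Update step: recompute each cluster's mean.
        new_means = [sum(c) / len(c) if c else m for c, m in zip(clusters, means)]
        if new_means == means:  # termination criterion described in Q22
            return means
        means = new_means

print(kmeans_1d([1.0, 1.1, 0.9, 9.0, 9.2, 8.8]))  # converges to ≈ [1.0, 9.0]
```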