This set of Machine Learning (ML) Multiple Choice Questions & Answers (MCQs) focuses on Machine Learning Set 6.

Q5 | How can we best represent ‘support’ for the following association rule: “If X and Y, then Z”?
  • (number of transactions containing {x,y}) / (total number of transactions)
  • (number of transactions containing {z}) / (total number of transactions)
  • (number of transactions containing {z}) / (number of transactions containing {x,y})
  • (number of transactions containing {x,y,z}) / (total number of transactions)
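The support options above can be checked with a minimal sketch. The transactions below are hypothetical toy data, not from the question itself:

```python
# Support of the rule "If X and Y, then Z" counts transactions
# containing ALL items {X, Y, Z}, divided by the total number
# of transactions. (Hypothetical toy transactions for illustration.)
transactions = [
    {"X", "Y", "Z"},
    {"X", "Y"},
    {"X", "Z"},
    {"X", "Y", "Z"},
    {"Y", "Z"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

print(support({"X", "Y", "Z"}, transactions))  # 2 of 5 transactions -> 0.4
```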
Q2 | Choose the correct statement with respect to the ‘confidence’ metric in association rules
  • it is the conditional probability that a randomly selected transaction will include all the items in the consequent given that the transaction includes all the items in the antecedent.
  • a high value of confidence suggests a weak association rule
  • it is the probability that a randomly selected transaction will include all the items in the consequent as well as all the items in the antecedent.
  • confidence is not measured in terms of (estimated) conditional probability.
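The conditional-probability definition in the first option can likewise be sketched in code, again with hypothetical toy transactions:

```python
# Confidence of "If X and Y, then Z" is the conditional probability
# P(Z | X and Y): among transactions containing the antecedent {X, Y},
# the fraction that also contain the consequent {Z}.
# (Hypothetical toy transactions for illustration.)
transactions = [
    {"X", "Y", "Z"},
    {"X", "Y"},
    {"X", "Z"},
    {"X", "Y", "Z"},
    {"Y", "Z"},
]

def confidence(antecedent, consequent, transactions):
    both = sum(1 for t in transactions if (antecedent | consequent) <= t)
    ante = sum(1 for t in transactions if antecedent <= t)
    return both / ante

print(confidence({"X", "Y"}, {"Z"}, transactions))  # 2 of 3 -> about 0.667
```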
Q3 | What are tree based classifiers?
  • classifiers which form a tree with each attribute at one level
  • classifiers which perform a series of condition checks with one attribute at a time
  • both of the above
  • none of the above
Q4 | Which of the following statements are correct in reference to information gain?
  a. It is biased towards single-valued attributes
  b. It is biased towards multi-valued attributes
  c. ID3 makes use of information gain
  d. The approach used by ID3 is greedy
  • a and b
  • a and d
  • b, c and d
  • all of the above
Q5 | A multivariate split is where the partitioning of tuples is based on a combination of attributes rather than on a single attribute.
  • true
  • false
Q6 | Gain ratio tends to prefer unbalanced splits in which one partition is much smaller than the other
  • true
  • false
Q7 | The Gini index is not biased towards multivalued attributes.
  • true
  • false
Q8 | The Gini index does not favour equal-sized partitions.
  • true
  • false
Q9 | When the number of classes is large, the Gini index is not a good choice.
  • true
  • false
Q10 | Attribute selection measures are also known as splitting rules.
  • true
  • false
Q11 | This clustering approach initially assumes that each data instance represents a single cluster.
  • expectation maximization
  • k-means clustering
  • agglomerative clustering
  • conceptual clustering
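The "each instance starts as its own cluster" idea can be illustrated with a minimal 1-D sketch using single-linkage merging (toy data, not from the question):

```python
# Agglomerative clustering starts with every data instance as its own
# cluster, then repeatedly merges the two closest clusters until the
# desired number of clusters remains. (Minimal 1-D single-linkage sketch.)
def agglomerative_1d(points, target_k):
    clusters = [[p] for p in points]  # each point begins as a singleton cluster
    while len(clusters) > target_k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single-linkage: distance between closest members
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]  # merge the closest pair
        del clusters[j]
    return clusters

print(agglomerative_1d([1.0, 2.0, 10.0, 11.0], 2))  # two well-separated groups
```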
Q12 | Which statement is true about the K-Means algorithm?
  • the output attribute must be categorical
  • all attribute values must be categorical
  • all attributes must be numeric
  • attribute values may be either categorical or numeric
Q13 | KDD represents the extraction of
  • data
  • knowledge
  • rules
  • model
Q14 | The most general form of distance is
  • manhattan
  • euclidean
  • mean
  • minkowski
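The Minkowski distance is the most general of the listed metrics: it reduces to the Manhattan distance at p = 1 and the Euclidean distance at p = 2, and approaches the supremum (Chebyshev) distance as p grows. A minimal sketch:

```python
# Minkowski distance: (sum over dimensions of |x - y|^p) ^ (1/p).
# p = 1 -> Manhattan distance; p = 2 -> Euclidean distance.
def minkowski(a, b, p):
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

a, b = (0, 0), (3, 4)
print(minkowski(a, b, 1))  # Manhattan: 7.0
print(minkowski(a, b, 2))  # Euclidean: 5.0
```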
Q15 | Which of the following algorithms comes under classification?
  • apriori
  • brute force
  • dbscan
  • k-nearest neighbor
Q16 | How is hierarchical agglomerative clustering typically visualized?
  • dendrogram
  • binary trees
  • block diagram
  • graph
Q17 | The _______ step eliminates the extensions of (k-1)-itemsets which are not found to be frequent, from being considered for counting support.
  • partitioning
  • candidate generation
  • itemset eliminations
  • pruning
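The pruning step the question describes comes from the Apriori algorithm: a candidate k-itemset is discarded if any of its (k-1)-subsets is not frequent. A minimal sketch (the itemsets below are hypothetical):

```python
# Apriori pruning: a candidate k-itemset can only be frequent if every
# one of its (k-1)-subsets is frequent, so candidates failing this check
# are removed before any support counting.
from itertools import combinations

def prune(candidates, frequent_k_minus_1):
    """Keep only candidates whose every (k-1)-subset is frequent."""
    kept = []
    for c in candidates:
        k = len(c)
        if all(frozenset(s) in frequent_k_minus_1 for s in combinations(c, k - 1)):
            kept.append(c)
    return kept

frequent_2 = {frozenset(p) for p in [("a", "b"), ("a", "c"), ("b", "c")]}
candidates_3 = [frozenset(("a", "b", "c")), frozenset(("a", "b", "d"))]
print(prune(candidates_3, frequent_2))  # only {a, b, c} survives; {a, d} is infrequent
```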
Q18 | The distance between two points calculated using Pythagoras’ theorem is
  • supremum distance
  • euclidean distance
  • linear distance
  • manhattan distance
Q19 | Which one of these is not a tree based learner?
  • cart
  • id3
  • bayesian classifier
  • random forest
Q20 | Which one of these is a tree based learner?
  • rule based
  • bayesian belief network
  • bayesian classifier
  • random forest
Q21 | Which of the following classifications would best suit the student performance classification systems?
  • if...then... analysis
  • market-basket analysis
  • regression analysis
  • cluster analysis
Q22 | This clustering algorithm terminates when the mean values computed for the current iteration of the algorithm are identical to the computed mean values for the previous iteration.
  • k-means clustering
  • conceptual clustering
  • expectation maximization
  • agglomerative clustering
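The termination condition described above can be sketched with a minimal 1-D k-means loop (hypothetical toy data and initial means):

```python
# Minimal 1-D k-means sketch: each point is assigned to its nearest mean,
# means are recomputed, and the loop stops when the means computed in the
# current iteration are identical to those from the previous iteration.
def kmeans_1d(points, means, max_iter=100):
    for _ in range(max_iter):
        clusters = [[] for _ in means]
        for p in points:
            idx = min(range(len(means)), key=lambda i: abs(p - means[i]))
            clusters[idx].append(p)
        new_means = [sum(c) / len(c) if c else means[i]
                     for i, c in enumerate(clusters)]
        if new_means == means:  # termination: means unchanged
            break
        means = new_means
    return means

print(kmeans_1d([1.0, 2.0, 10.0, 11.0], [0.0, 5.0]))  # -> [1.5, 10.5]
```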