Data Mining and Warehousing MCQ: SPPU Unit 4

1. What does the Apriori algorithm do?

  1. It mines all frequent patterns by pruning candidates with lower support
  2. It mines all frequent patterns by pruning candidates with higher support
  3. Both a and b
  4. None of these

It mines all frequent patterns by pruning candidates with lower support
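
For intuition, here is a minimal sketch of Apriori's level-wise generate-and-prune loop for the first two levels. It is an illustration only: the toy transactions and the 0.5 support threshold are assumptions, not part of the quiz.

```python
from itertools import combinations

# Toy transaction database (illustrative only)
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"bread", "milk", "butter"},
]
min_support = 0.5  # relative support threshold

def support(itemset):
    """Fraction of transactions that contain every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

# Level 1: keep the 1-itemsets that meet min_support, prune the rest
items = {i for t in transactions for i in t}
L1 = [frozenset({i}) for i in items if support({i}) >= min_support]

# Level 2: join L1 with itself to form candidates, then prune by support
C2 = {a | b for a, b in combinations(L1, 2)}
L2 = [c for c in C2 if support(c) >= min_support]
print(sorted(map(sorted, L2)))
```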

2. What does the FP-growth algorithm do?

  1. It mines all frequent patterns by pruning candidates with lower support
  2. It mines all frequent patterns by pruning candidates with higher support
  3. It mines all frequent patterns by constructing an FP-tree
  4. All of these

It mines all frequent patterns by constructing an FP-tree
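
To see candidate-free mining in practice, the sketch below uses the mlxtend library's fpgrowth function (assuming mlxtend and pandas are installed); it builds an FP-tree internally and never generates candidate sets. The toy data is my own illustration.

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth

# Toy transactions (illustrative only)
transactions = [["bread", "milk"], ["bread", "butter"],
                ["milk", "butter"], ["bread", "milk", "butter"]]

# One-hot encode the transactions, then mine frequent itemsets
# with FP-growth (no candidate generation step)
te = TransactionEncoder()
df = pd.DataFrame(te.fit(transactions).transform(transactions),
                  columns=te.columns_)
print(fpgrowth(df, min_support=0.5, use_colnames=True))
```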

3. Which techniques can be used to improve the efficiency of the Apriori algorithm?

  1. Hash-based techniques
  2. Transaction reduction
  3. Partitioning
  4. All of these

All of these

4. What do you mean by support(A)?

  1. Total number of transactions containing A
  2. Total number of transactions not containing A
  3. Number of transactions containing A / Total number of transactions
  4. Number of transactions not containing A / Total number of transactions

Number of transactions containing A / Total number of transactions
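
A quick worked check of the definition, with toy data of my own:

```python
# support(A) = (number of transactions containing A) / (total transactions)
transactions = [{"A", "B"}, {"A"}, {"B", "C"}, {"A", "C"}]
support_A = sum("A" in t for t in transactions) / len(transactions)
print(support_A)  # A appears in 3 of 4 transactions -> 0.75
```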

5. Which of the following is a direct application of frequent itemset mining?

  1. Social network analysis
  2. Market basket analysis
  3. Outlier detection
  4. Intrusion detection

Market basket analysis

6. Which of the following is not true about the FP-growth algorithm?

  1. It mines frequent itemsets without candidate generation
  2. There are chances that FP-trees may not fit in memory
  3. FP-trees are very expensive to build
  4. It expands the original database to build FP-trees

It expands the original database to build FP-trees

7. When is an association rule considered interesting?

  1. If it satisfies only min_support
  2. If it satisfies only min_confidence
  3. If it satisfies both min_support and min_confidence
  4. There are other measures to check

If it satisfies both min_support and min_confidence
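
Concretely, a rule A => B is called strong when support(A ∪ B) >= min_support and confidence(A => B) = support(A ∪ B) / support(A) >= min_confidence. A small sketch with made-up data and thresholds:

```python
# Check whether the rule A => B is strong (toy data and thresholds)
transactions = [{"A", "B"}, {"A"}, {"A", "B"}, {"B"}]
n = len(transactions)
supp_AB = sum({"A", "B"} <= t for t in transactions) / n  # 2/4 = 0.50
supp_A = sum("A" in t for t in transactions) / n          # 3/4 = 0.75
confidence = supp_AB / supp_A                             # ~0.67
print(supp_AB >= 0.4 and confidence >= 0.6)               # True here
```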

8. What is the difference between absolute and relative support?

  1. Absolute: minimum support count threshold; Relative: minimum support threshold
  2. Absolute: minimum support threshold; Relative: minimum support count threshold
  3. Both a and b
  4. None of these

Absolute: minimum support count threshold; Relative: minimum support threshold

9. What is the relation between candidate and frequent itemsets?

  1. A candidate itemset is always a frequent itemset
  2. A frequent itemset must be a candidate itemset
  3. No relation between the two
  4. None of these

A frequent itemset must be a candidate itemset

10. Which technique finds the frequent itemsets in just two database scans?

  1. Partitioning
  2. Sampling
  3. Hashing
  4. None of these

Partitioning


11. Which of the following is true?

  1. Both Apriori and FP-growth use the horizontal data format
  2. Both Apriori and FP-growth use the vertical data format
  3. Both a and b
  4. None of these

Both Apriori and FP-growth use the horizontal data format

12. What is the principle on which the Apriori algorithm works?

  1. If a rule is infrequent, its specialized rules are also infrequent
  2. If a rule is infrequent, its generalized rules are also infrequent
  3. Both a and b
  4. None of these

If a rule is infrequent, its specialized rules are also infrequent

13. Which of these is not a frequent pattern mining algorithm?

  1. Apriori
  2. FP growth
  3. Decision trees
  4. Eclat

Decision trees

14. Which algorithm requires fewer scans of the data?

  1. Apriori
  2. FP-growth
  3. Both a and b
  4. None of these

FP-growth

15. What are max_confidence, cosine similarity, and all_confidence?

  1. Frequent pattern mining algorithms
  2. Measures to improve the efficiency of Apriori
  3. Pattern evaluation measures
  4. None of these

Pattern evaluation measures

16. What is a closed itemset?

  1. An itemset for which at least one proper super-itemset has the same support
  2. An itemset for which no proper super-itemset has the same support
  3. Both a and b
  4. None of these

An itemset for which no proper super-itemset has the same support

17. What are closed frequent itemsets?

  1. A closed itemset
  2. A frequent itemset
  3. An itemset which is both closed and frequent
  4. None of these

An itemset which is both closed and frequent

18. What are maximal frequent itemsets?

  1. A frequent itemset none of whose super-itemsets is frequent
  2. A frequent itemset whose super-itemset is also frequent
  3. Both a and b
  4. None of these

A frequent itemset none of whose super-itemsets is frequent
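
The definitions in questions 17 and 18 are easy to confuse, so here is a brute-force check on a toy database of my own. Note that {a, b} comes out closed but not maximal, because its frequent super-itemset {a, b, c} has strictly lower support.

```python
from itertools import combinations

transactions = [{"a", "b", "c"}, {"a", "b", "c"}, {"a", "b", "d"}]
min_count = 2
items = sorted({i for t in transactions for i in t})

def count(s):
    """Number of transactions that contain itemset `s`."""
    return sum(s <= t for t in transactions)

# Enumerate every frequent itemset by brute force (fine at toy scale)
frequent = [frozenset(c) for r in range(1, len(items) + 1)
            for c in combinations(items, r) if count(set(c)) >= min_count]

# Closed: no proper super-itemset has the same support
closed = [f for f in frequent
          if not any(f < g and count(g) == count(f) for g in frequent)]
# Maximal: no proper super-itemset is frequent at all
maximal = [f for f in frequent if not any(f < g for g in frequent)]

print("closed: ", sorted(map(sorted, closed)))   # [['a', 'b'], ['a', 'b', 'c']]
print("maximal:", sorted(map(sorted, maximal)))  # [['a', 'b', 'c']]
```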

19. Why is correlation analysis important?

  1. To make apriori memory efficient
  2. To weed out uninteresting frequent itemsets
  3. To find large number of interesting itemsets
  4. To restrict the number of database iterations

To weed out uninteresting frequent itemsets


20. What will happen if support is reduced?

  1. The number of frequent itemsets remains the same
  2. Some itemsets will be added to the current set of frequent itemsets
  3. Some itemsets will become infrequent while others will become frequent
  4. Cannot say

Some itemsets will be added to the current set of frequent itemsets

21. Can the FP-growth algorithm be used if the FP-tree cannot fit in memory?

  1. Yes
  2. No
  3. Both a and b
  4. None of these

No

22. What is association rule mining?

  1. Same as frequent itemset mining
  2. Finding strong association rules using frequent itemsets
  3. Both a and b
  4. None of these

Finding strong association rules using frequent itemsets

23. What is frequent pattern growth?

  1. Same as frequent itemset mining
  2. Use of hashing to make discovery of frequent itemsets more efficient
  3. Mining of frequent itemsets without candidate generation
  4. None of these

Mining of frequent itemsets without candidate generation

24. When is sub-itemset pruning done?

  1. A frequent itemset ‘P’ is a proper subset of another frequent itemset ‘Q’
  2. Support(P) = Support(Q)
  3. When both a and b are true
  4. When a is true and b is not

When both a and b are true

25. Which of the following is not a null-invariant measure (a measure unaffected by null transactions)?

  1. all_confidence
  2. max_confidence
  3. cosine measure
  4. lift

lift
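
Lift is defined as lift(A, B) = P(A ∪ B) / (P(A) × P(B)). It is not null-invariant because adding transactions that contain neither A nor B changes its value, as this illustrative calculation (my own numbers) shows:

```python
def lift(n_ab, n_a, n_b, n_total):
    """lift(A, B) = P(A and B) / (P(A) * P(B)), computed from raw counts."""
    return (n_ab / n_total) / ((n_a / n_total) * (n_b / n_total))

# Same co-occurrence counts; the second call adds 1000 null transactions
# (containing neither A nor B), and the lift value changes.
print(lift(60, 100, 100, 200))   # 1.2
print(lift(60, 100, 100, 1200))  # 7.2
```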

26. The Apriori algorithm works in a ______ and ______ fashion.

  1. top-down and depth-first
  2. top-down and breadth-first
  3. bottom-up and depth-first
  4. bottom-up and breadth-first

bottom-up and breadth-first

27. Association analysis will yield the same frequent itemsets and strong association rules whether a specific item occurs once or three times in an individual transaction.

  1. TRUE
  2. FALSE
  3. Both a and b
  4. None of these

TRUE

28. In association rule mining, the generation of frequent itemsets is the computationally intensive step.

  1. TRUE
  2. FALSE
  3. Both a and b
  4. None of these

TRUE

29. The number of iterations in Apriori ______

  1. increases with the size of the data
  2. decreases with the increase in size of the data
  3. increases with the size of the maximum frequent set
  4. decreases with increase in size of the maximum frequent set

increases with the size of the maximum frequent set

30. Which of the following are interestingness measures for association rules?

  1. recall
  2. lift
  3. accuracy
  4. compactness

lift


31. The set of frequent itemsets is

  1. A superset of only the closed frequent itemsets
  2. A superset of only the maximal frequent itemsets
  3. A subset of the maximal frequent itemsets
  4. A superset of both the closed frequent itemsets and the maximal frequent itemsets

A superset of both the closed frequent itemsets and the maximal frequent itemsets

32. Assume that we have a dataset containing information about 200 individuals. A supervised data mining session has discovered the following rule:

IF age < 30 AND credit card insurance = yes THEN life insurance = yes

Rule accuracy: 70%. Rule coverage: 63%.

How many individuals in the class life insurance = no have credit card insurance and are less than 30 years old?

  1. 63
  2. 30
  3. 38
  4. 70

38
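
Working: a coverage of 63% means 0.63 × 200 = 126 individuals satisfy the rule antecedent (age < 30 and credit card insurance = yes). An accuracy of 70% means 0.70 × 126 ≈ 88 of them have life insurance = yes, leaving 126 − 88 = 38 with life insurance = no.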

33. In the Apriori algorithm, if the number of frequent 1-itemsets is 100, then the number of candidate 2-itemsets is

  1. 100
  2. 4950
  3. 200
  4. 5000

4950
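
Working: each candidate 2-itemset is an unordered pair drawn from the 100 frequent 1-itemsets, so there are C(100, 2) = (100 × 99) / 2 = 4950 candidates.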

34. The significant bottleneck in the Apriori algorithm is

  1. Finding frequent itemsets
  2. Pruning
  3. Candidate generation
  4. Number of iterations

Candidate generation

35. Which association rule would you prefer?

  1. High support and medium confidence
  2. High support and low confidence
  3. Low support and high confidence
  4. Low support and low confidence

Low support and high confidence

36. The apriori property means

  1. If a set cannot pass a test, its supersets will also fail the same test
  2. To decrease the efficiency, do level-wise generation of frequent item sets
  3. To improve the efficiency, do level-wise generation of frequent item sets
  4. If a set can pass a test, its supersets will fail the same test

If a set cannot pass a test, its supersets will also fail the same test

37. If an itemset ‘XYZ’ is a frequent itemset, then all subsets of that frequent itemset are

  1. undefined
  2. not frequent
  3. frequent
  4. can't say

frequent

38. To determine association rules from frequent itemsets,

  1. Only minimum confidence is needed
  2. Neither support nor confidence is needed
  3. Both minimum support and confidence are needed
  4. Minimum support is needed

Both minimum support and confidence are needed

39. If {A,B,C,D} is a frequent itemset, which of the following candidate rules is not possible?

  1. C –> A
  2. D –> ABCD
  3. A –> BC
  4. B –> ADC

D –> ABCD
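
(D –> ABCD is not possible because its consequent contains its antecedent: the antecedent and consequent of an association rule must be disjoint itemsets.)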
