# Data Analytics MCQ Unit 3 | Big Data Analytics MCQ & Answers

## Data Analytics MCQ with Answers | Big Data Analytics MCQ

1. This clustering algorithm terminates when the mean values computed for the current iteration are identical to the mean values computed for the previous iteration

1. K-Means clustering
2. conceptual clustering
3. expectation maximization
4. agglomerative clustering

K-Means clustering
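The termination criterion above can be sketched in a few lines. This is a minimal 1-D K-means illustration with made-up points and starting means, not a production implementation: the loop stops exactly when the recomputed means equal the previous iteration's means.

```python
# Minimal 1-D K-means sketch (toy data, assumed for illustration).
def kmeans_1d(points, means):
    while True:
        # Assign each point to its nearest current mean.
        clusters = {m: [] for m in means}
        for p in points:
            nearest = min(means, key=lambda m: abs(p - m))
            clusters[nearest].append(p)
        # Recompute means; terminate when they are identical to the previous ones.
        new_means = [sum(c) / len(c) if c else m for m, c in clusters.items()]
        if new_means == means:
            return sorted(new_means)
        means = new_means

centers = kmeans_1d([1.0, 2.0, 10.0, 11.0], [0.0, 5.0])
```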

2. The correlation coefficient for two real-valued attributes is –0.85. What does this value tell you?

1. The attributes are not linearly related.
2. As the value of one attribute decreases, the value of the second attribute increases.
3. As the value of one attribute increases, the value of the second attribute also increases.
4. The attributes show a linear relationship.

As the value of one attribute decreases, the value of the second attribute increases.
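A correlation near −1 can be verified directly from the definition. The data below is a made-up example chosen so one attribute falls exactly as the other rises:

```python
# Pearson correlation coefficient computed from its definition
# (toy data, assumed for illustration).
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson([1, 2, 3, 4], [8, 6, 4, 2])  # perfectly inverse relationship
```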

3. Given a rule of the form IF X THEN Y, rule confidence is defined as the conditional probability that

1. Y is false when X is known to be false.
2. Y is true when X is known to be true.
3. X is true when Y is known to be true
4. X is false when Y is known to be false.

Y is true when X is known to be true.
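Confidence of IF X THEN Y is the fraction of transactions containing X that also contain Y, i.e. the conditional probability P(Y | X). A small sketch with hypothetical market-basket transactions:

```python
# Rule confidence for IF bread THEN milk over toy transactions
# (items and transactions are assumed, for illustration only).
transactions = [
    {"bread", "milk"},
    {"bread", "milk", "eggs"},
    {"bread"},
    {"milk"},
]
with_x = [t for t in transactions if "bread" in t]
confidence = sum(1 for t in with_x if "milk" in t) / len(with_x)
```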

4. Chameleon is

1. Density based clustering algorithm
2. Partitioning based algorithm
3. Model based algorithm
4. Hierarchical clustering algorithm

Hierarchical clustering algorithm

5. Find the odd one out

1. DBSCAN
2. K-Means
3. PAM
4. None of the above

DBSCAN

6. The number of iterations in Apriori ______

1. increases with the size of the data
2. decreases with the increase in size of the data
3. increases with the size of the maximum frequent set
4. decreases with increase in size of the maximum frequent set

increases with the size of the maximum frequent set

7. Which of the following are interestingness measures for association rules?

1. Recall
2. Lift
3. Accuracy
4. All of Above

Lift

8. Given a frequent itemset L, if |L| = k, then there are

1. 2^k – 1 candidate association rules
2. 2^k candidate association rules
3. 2^k – 2 candidate association rules
4. 2k – 2 candidate association rules

2^k – 2 candidate association rules
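The 2^k – 2 count comes from letting every non-empty proper subset of L be a rule's antecedent, with the remaining items as the consequent. A small enumeration, using an assumed 3-item set, confirms the formula:

```python
# Enumerate candidate rules from a frequent itemset of size k:
# antecedent = any non-empty proper subset, consequent = the rest.
from itertools import combinations

L = {"A", "B", "C"}  # k = 3 (toy itemset, assumed)
rules = []
for r in range(1, len(L)):  # proper, non-empty antecedents only
    for antecedent in combinations(sorted(L), r):
        consequent = L - set(antecedent)
        rules.append((set(antecedent), consequent))

count = len(rules)  # should equal 2^k - 2
```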

9. _______ is an example of case-based learning

1. Decision trees
2. Neural networks
3. Genetic algorithm
4. K-nearest neighbor

K-nearest neighbor
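K-nearest neighbor is case-based because it stores the training examples verbatim and classifies a query by majority vote among the k closest cases. A minimal sketch on assumed 1-D data:

```python
# k-NN classification on toy 1-D points (data assumed for illustration).
from collections import Counter

def knn_predict(train, query, k=3):
    # train: list of (value, label) pairs stored as-is (case-based).
    nearest = sorted(train, key=lambda vl: abs(vl[0] - query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

label = knn_predict([(1, "a"), (2, "a"), (9, "b"), (10, "b")], 1.5, k=3)
```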

10. The average positive difference between computed and desired outcome values is called

1. mean positive error
2. mean squared error
3. mean absolute error
4. root mean squared error

mean absolute error
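Mean absolute error averages the magnitudes of the differences between computed and desired outcomes. A one-line sketch on made-up values:

```python
# Mean absolute error over toy predictions (values assumed for illustration).
computed = [2.5, 0.0, 2.0, 8.0]
desired = [3.0, -0.5, 2.0, 7.0]
mae = sum(abs(c - d) for c, d in zip(computed, desired)) / len(desired)
```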

## data analytics mcq with answers pdf

11. The set of all frequent item sets is a

1. Superset of only closed frequent item sets
2. Superset of only maximal frequent item sets
3. Subset of maximal frequent item sets
4. Superset of both closed frequent item sets and maximal frequent item sets

Superset of both closed frequent item sets and maximal frequent item sets

12. Assume that we have a dataset containing information about 200 individuals. A supervised data mining session has discovered the following rule:

IF age < 30 AND credit card insurance = yes THEN life insurance = yes

with rule accuracy 70% and rule coverage 63%. How many individuals in the class life insurance = no have credit card insurance and are less than 30 years old?

1. 63
2. 38
3. 40
4. 89

38
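The answer follows from the definitions: coverage gives how many of the 200 individuals match the rule's IF part, accuracy gives how many of those are classified correctly, and the remainder are the covered individuals with life insurance = no. A worked-arithmetic sketch:

```python
# Worked arithmetic for the rule above (coverage 63%, accuracy 70%).
population = 200
covered = round(population * 0.63)   # individuals matching the IF part
correct = round(covered * 0.70)      # of those, life insurance = yes
misclassified = covered - correct    # covered but life insurance = no
```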

13. Which of the following is cluster analysis?

1. Simple segmentation
2. Grouping similar objects
3. Labeled classification
4. Query results grouping

Grouping similar objects

14. A good clustering method will produce high quality clusters with

1. high inter class similarity
2. high intra class similarity
3. low intra class similarity
4. None of the above

high intra class similarity

15. Which two parameters are needed for DBSCAN?

1. Min threshold
2. Min points and eps
3. Min sup and min confidence
4. Number of centroids

Min points and eps

16. Which statement is true about neural networks and linear regression models?

1. Both techniques build models whose output  is determined by a  linear sum of weighted input attribute values.
2. The output of both models is a categorical attribute value.
3. Both models require numeric attributes to range between 0 and 1.
4. Both models require input attributes to be numeric.

Both models require input attributes to be numeric.

17. In the Apriori algorithm, if the number of frequent 1-itemsets is 100, then the number of candidate 2-itemsets is

1. 100
2. 200
3. 4950
4. 5000

4950
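The candidate 2-itemsets are exactly the unordered pairs of distinct frequent 1-itemsets, so the count is C(100, 2):

```python
# Number of candidate 2-itemsets from 100 frequent 1-itemsets:
# every unordered pair of distinct items is a candidate.
from math import comb

candidates_2 = comb(100, 2)  # 100 * 99 / 2
```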

18. A significant bottleneck in the Apriori algorithm is

1. Finding frequent itemsets
2. Pruning
3. Candidate generation
4. Number of iterations

Candidate generation

19. Machine learning techniques differ from statistical techniques in that machine learning methods

1. are better able to deal with missing and noisy data
2. typically assume an underlying distribution for the data
3. have trouble with large-sized datasets
4. are not able to explain their behavior.

are better able to deal with missing and noisy data

20. The probability of a hypothesis before the presentation of evidence.

1. a priori
2. posterior
3. conditional
4. subjective

a priori

## data analytics mcq questions and answers

21. KDD represents extraction of

1. data
2. knowledge
3. rules
4. model

knowledge

22. Which statement about outliers is true?

1. Outliers should be part of the training dataset but should not be present in the test data.
2. Outliers should be identified and removed from a dataset.
3. The nature of the problem determines how outliers are used
4. Outliers should be part of the test dataset but should not be present in the training data.

The nature of the problem determines how outliers are used

23. The most general form of distance is

1. Manhattan
2. Euclidean
3. Mean
4. Minkowski

Minkowski
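Minkowski distance is the most general of the listed options because its order p recovers the others as special cases: p = 1 gives Manhattan distance and p = 2 gives Euclidean distance. A short sketch on an assumed pair of 2-D points:

```python
# Minkowski distance of order p; p=1 is Manhattan, p=2 is Euclidean.
def minkowski(a, b, p):
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

a, b = (0.0, 0.0), (3.0, 4.0)  # toy points, assumed for illustration
manhattan = minkowski(a, b, 1)
euclidean = minkowski(a, b, 2)
```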

24. Which association rule would you prefer?

1. High support and medium confidence
2. High support and low confidence
3. Low support and high confidence
4. Low support and low confidence

Low support and high confidence

25. In a rule-based classifier, if there is a rule for each combination of attribute values, what is that rule set R called?

1. Exhaustive
2. Inclusive
3. Comprehensive
4. Mutually exclusive

Exhaustive

26. The Apriori property means

1. If a set cannot pass a test, its supersets will also fail the same test
2. To decrease the efficiency, do level-wise generation of frequent item sets
3. To improve the efficiency, do level-wise generation of frequent item sets
4. If a set can pass a test, its supersets will fail the same test

If a set cannot pass a test, its supersets will also fail the same test

27. If an item set ‘XYZ’ is a frequent item set, then all subsets of that frequent item set are

1. Undefined
2. Not frequent
3. Frequent
4. Can not say

Frequent

28. The probability that a person owns a sports car given that they subscribe to an automotive magazine is 40%. We also know that 3% of the adult population subscribes to automotive magazines. The probability of a person owning a sports car given that they do not subscribe to an automotive magazine is 30%. Use this information to compute the probability that a person subscribes to an automotive magazine given that they own a sports car.

1. 0.0368
2. 0.0396
3. 0.0389
4. 0.0398

0.0396
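The answer follows from Bayes' theorem, with P(car) expanded by the law of total probability over subscribers and non-subscribers:

```python
# Bayes' theorem: P(mag | car) = P(car | mag) * P(mag) / P(car),
# with P(car) computed by total probability.
p_car_given_mag = 0.40
p_mag = 0.03
p_car_given_no_mag = 0.30

p_car = p_car_given_mag * p_mag + p_car_given_no_mag * (1 - p_mag)
p_mag_given_car = p_car_given_mag * p_mag / p_car
```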

29. Simple regression assumes a __________ relationship between the input attribute and output attribute.

1. inverse
2. linear
3. reciprocal

linear

30. To determine association rules from frequent item sets

1. Only minimum confidence is needed
2. Neither support nor confidence is needed
3. Both minimum support and confidence are needed
4. Only minimum support is needed

Both minimum support and confidence are needed

31. If {A,B,C,D} is a frequent itemset, which of the following candidate rules is not possible?

1. C –> A
2. D –> ABCD
3. A –> BC

D –> ABCD

32. Classification rules are extracted from a _______

1. decision tree
2. root node
3. branches
4. siblings

decision tree

33. What does K refer to in the K-Means algorithm, which is a non-hierarchical clustering approach?

1. Complexity
2. Fixed value
3. Number of iterations
4. number of clusters

number of clusters

34. If a linear regression model fits perfectly, i.e., train error is zero, then _________

1. Test error is also always zero
2. Test error is non zero
3. Couldn’t comment on Test error
4. Test error is equal to Train error

Couldn’t comment on Test error

35. How many coefficients do you need to estimate in a simple linear regression model (one independent variable)?

1. 1
2. 2
3. 3
4. 4

2

36. In a simple linear regression model (one independent variable), if we change the input variable by 1 unit, by how much will the output variable change?

1. by 1
2. no change
3. by intercept
4. by its slope

by its slope
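The last few questions can be checked with a least-squares fit by hand. On assumed toy data lying exactly on y = 1 + 2x, the fit estimates two coefficients (intercept and slope), and a one-unit change in x moves the prediction by exactly the slope:

```python
# Least-squares fit of y = b1 + b2 * x (toy data, assumed for illustration).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]  # exactly y = 1 + 2x

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b2 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b1 = my - b2 * mx  # intercept and slope: the two coefficients to estimate

def predict(x):
    return b1 + b2 * x

delta = predict(5.0) - predict(4.0)  # change per 1-unit change in x
```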

37. In the syntax of the linear model lm(formula, data, ...), data refers to __________

1. Matrix
2. array
3. vector
4. list

vector

38. In the mathematical equation of linear regression Y = β1 + β2X + ϵ, (β1, β2) refers to __________

1. (X-intercept, Slope)
2. (Slope, X-Intercept)
3. (Y-Intercept, Slope)
4. (slope, Y-Intercept)

(Y-Intercept, Slope)