Machine Learning MCQ questions and answers | ML MCQ SPPU

Machine Learning (ML) MCQ questions and answers for SPPU (Pune University), Anna University, and other universities' online exams. These 45+ unique MCQs on machine learning will also help you crack interviews at top recruiters such as TCS, Infosys, Cognizant, Accenture, and Capgemini. The answer to each question is given below the options.

Machine Learning MCQ with answers for online exams

Q.1 Linear Regression is a supervised machine learning algorithm.
A : TRUE
B : FALSE

TRUE

Q.2 Linear Regression is mainly used for regression tasks.
A : TRUE
B : FALSE

TRUE

Q.4 Which of the following methods do we use to find the best fit line for data in Linear Regression?
A : Least Square Error
B : Maximum Likelihood
C : Logarithmic Loss
D : Both A and B

Least Square Error

Q.4 In linear regression, we try to ______ the least square errors of the model to identify the line of best fit.
A : minimize
B : maximize
C : change
D : none of the above

minimize
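A minimal sketch of this idea with made-up data (not part of the quiz): `np.polyfit` with degree 1 returns the slope and intercept that minimize the sum of squared errors.

```python
import numpy as np

# Hypothetical data lying exactly on y = 2x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])

# Degree-1 polyfit = ordinary least-squares line fit.
slope, intercept = np.polyfit(x, y, deg=1)
print(slope, intercept)  # close to 2.0 and 1.0
```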

Q.5 Which of the following evaluation metrics can be used to evaluate a model while modeling a continuous output variable?
A : AUC-ROC
B : Accuracy
C : Logloss
D : Mean-Squared-Error

Mean-Squared-Error
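Mean squared error on a continuous target can be computed directly; a small sketch with hypothetical observed and predicted values:

```python
import numpy as np

# Hypothetical observed and predicted values for a continuous target.
y_true = np.array([3.0, 5.0, 7.0])
y_pred = np.array([2.5, 5.0, 8.0])

# MSE = average of the squared prediction errors.
mse = np.mean((y_true - y_pred) ** 2)
print(mse)  # (0.25 + 0.0 + 1.0) / 3
```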

Q.6 Lasso Regularization can be used for variable selection in Linear Regression.
A : TRUE
B : FALSE

TRUE
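Why Lasso selects variables: under a simplified (orthonormal-design) assumption, L1 regularization soft-thresholds each OLS coefficient, so coefficients smaller than the penalty become exactly zero. A hypothetical sketch:

```python
import numpy as np

def soft_threshold(coefs, lam):
    # L1 shrinkage: shift each coefficient toward 0 by lam;
    # anything within lam of 0 is set exactly to 0.
    return np.sign(coefs) * np.maximum(np.abs(coefs) - lam, 0.0)

# Hypothetical OLS coefficients; the last two are weak.
ols_coefs = np.array([2.5, -0.3, 0.05])
shrunk = soft_threshold(ols_coefs, 0.4)
print(shrunk)  # the two weak coefficients are dropped to exactly 0
```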

Q.7 Which of the following is true about Residuals?
A : Lower is better
B : Higher is better
C : A or B depend on the situation
D : None of these

Lower is better

Q.8 Suppose we have N independent variables (X1, X2, … Xn) and the dependent variable is Y. Now imagine you are applying linear regression by fitting the best-fit line using least square error on this data. You find that the correlation coefficient of one of the variables (say X1) with Y is -0.95. Which of the following is true for X1?
A : Relation between the X1 and Y is weak
B : Relation between the X1 and Y is strong
C : Relation between the X1 and Y is neutral
D : Correlation can’t judge the relationship

Relation between the X1 and Y is strong

Q.6 The absolute value of the correlation coefficient denotes the strength of the relationship.
A : TRUE
B : FALSE

TRUE
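A quick numeric check with made-up data: a correlation coefficient near -0.95 means X1 and Y move together strongly, just in opposite directions, and the absolute value carries the strength.

```python
import numpy as np

# Hypothetical data: y falls steadily as x1 rises.
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([9.8, 8.1, 6.2, 3.9, 2.0])

# Pearson correlation coefficient between x1 and y.
r = np.corrcoef(x1, y)[0, 1]
print(r)       # close to -1: strong negative relation
print(abs(r))  # the strength of the relationship
```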

Q.9 You are given two variables V1 and V2 with the following two characteristics:
1. If V1 increases then V2 also increases.
2. If V1 decreases then the behavior of V2 is unknown.
Which of the following options is correct for the Pearson correlation between V1 and V2?
A : Pearson correlation will be close to 1
B : Pearson correlation will be close to -1
C : Pearson correlation will be close to 0
D : None of these

None of these

Q.10 Suppose Pearson correlation between V1 and V2 is zero. In such case, is it right to conclude that V1 and V2 do not have any relation between them?
A : TRUE
B : FALSE

FALSE

Q.11 Which of the following offsets do we use in linear regression's least square line fit? Suppose the horizontal axis is the independent variable and the vertical axis is the dependent variable.
A : Vertical offset
B : Perpendicular offset
C : Both, depending on the situation
D : None of above

Vertical offset

Q.6 Perpendicular offsets are useful in the case of PCA.
A : TRUE
B : FALSE

TRUE

Q.12 Overfitting is more likely when you have a huge amount of data to train on.
A : TRUE
B : FALSE

FALSE

Q.12 Underfitting is more likely when you have a huge amount of data to train on.
A : TRUE
B : FALSE

TRUE

Q.13 We can also compute the coefficients of linear regression with the help of an analytical method called the "Normal Equation". Which of the following is/are true about the Normal Equation?
1. We don't have to choose the learning rate
2. It becomes slow when the number of features is very large
3. There is no need to iterate
A : 1 and 2
B : 1 and 3
C : 2 and 3
D : 1,2 and 3

1,2 and 3

Q.11 We can compute the coefficients of linear regression by using
A : gradient descent
B : Normal Equation
C : both A and B
D : None of above

both A and B
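The Normal Equation answer above can be sketched in a few lines (toy data assumed): theta = (XᵀX)⁻¹Xᵀy needs no learning rate and no iteration, but solving the system gets expensive as the number of features grows.

```python
import numpy as np

# Toy data lying on y = 2x + 1.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])

# Design matrix with an intercept column of ones.
X = np.column_stack([np.ones_like(x), x])

# Normal equation: solve (X^T X) theta = X^T y.
# (np.linalg.solve is preferred over forming the inverse explicitly.)
theta = np.linalg.solve(X.T @ X, X.T @ y)
print(theta)  # close to [1.0, 2.0] -> intercept 1, slope 2
```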

Q.14 Two regression lines A and B are fitted on randomly generated data (the figure is omitted here). Which of the following statements is true about the sums of residuals of A and B?
A : A has higher sum of residuals than B
B : A has lower sum of residual than B
C : Both have same sum of residuals
D : None of these

Both have same sum of residuals

Q.11 A residual is the vertical distance between a data point and the regression line.
A : TRUE
B : FALSE

TRUE

Q.11 Each data point has one residual with respect to the regression line.
A : TRUE
B : FALSE

TRUE

Q.11 Data points have a positive residual
A : if they are above the regression line
B : if they are below the regression line
C : if the regression line actually passes through the point
D : None of the above

if they are above the regression line

Q.10 If the training data gradually comes to contain more outliers, the error is likely to increase.
A : TRUE
B : FALSE

TRUE

Q.1 A neural network can be used as a universal approximator.
A : TRUE
B : FALSE

TRUE

Q.3 It is possible to design a linear regression algorithm using a neural network.
A : TRUE
B : FALSE

TRUE
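A minimal sketch of why this is true (toy data assumed): a single neuron with an identity activation, trained by gradient descent on mean squared error, is exactly linear regression.

```python
import numpy as np

# Toy data on the line y = 2x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0

w, b, lr = 0.0, 0.0, 0.05
for _ in range(5000):
    pred = w * x + b                        # forward pass, no nonlinearity
    grad_w = 2.0 * np.mean((pred - y) * x)  # d(MSE)/dw
    grad_b = 2.0 * np.mean(pred - y)        # d(MSE)/db
    w -= lr * grad_w                        # gradient descent update
    b -= lr * grad_b

print(w, b)  # converges close to 2.0 and 1.0
```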

Q.11 Data points have a negative residual
A : if they are above the regression line
B : if they are below the regression line
C : if the regression line actually passes through the point
D : None of the above

if they are below the regression line

Q.11 Data points have a zero residual
A : if they are above the regression line
B : if they are below the regression line
C : if the regression line actually passes through the point
D : None of the above

if the regression line actually passes through the point

Q.15 Suppose you have fitted a complex regression model on a dataset. Now, you are using Ridge regression with penalty x. Choose the option which best describes the bias.
A : In case of very large x, bias is low
B : In case of very large x, bias is high
C : We can't say about bias
D : None of these

In case of very large x, bias is high

Q.16 Suppose you have fitted a complex regression model on a dataset. Now, you are using Ridge regression with penalty x. What will happen when you apply a very large penalty?
A : Some of the coefficients will become exactly zero
B : Some of the coefficients will approach zero but not become exactly zero
C : Both A and B depending on the situation
D : None of these

Some of the coefficients will approach zero but not become exactly zero

Q.17 Suppose you have fitted a complex regression model on a dataset, and now you are using Lasso regression with penalty x. What will happen when you apply a very large penalty?
A : Some of the coefficients will become exactly zero
B : Some of the coefficients will approach zero but not become exactly zero
C : Both A and B depending on the situation
D : None of these

Some of the coefficients will become exactly zero
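The contrast between the Ridge and Lasso answers can be sketched under a simplifying orthonormal-design assumption: Ridge rescales coefficients (never exactly zero), while Lasso soft-thresholds them (exactly zero for a large enough penalty). The values below are hypothetical.

```python
import numpy as np

# Hypothetical OLS coefficients and a very large penalty.
ols = np.array([2.0, -1.5, 0.5])
penalty = 100.0

# Ridge (L2): divides each coefficient by (1 + penalty) -- shrinks
# toward zero but never reaches it.
ridge = ols / (1.0 + penalty)

# Lasso (L1): soft-thresholding -- a large penalty zeroes everything out.
lasso = np.sign(ols) * np.maximum(np.abs(ols) - penalty, 0.0)

print(ridge)  # tiny but all nonzero
print(lasso)  # all exactly zero
```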

Q.18 Which of the following statement is true about outliers in Linear regression?
A : Linear regression is sensitive to outliers
B : Linear regression is not sensitive to outliers
C : Can’t say
D : None of these

Linear regression is sensitive to outliers

Q.21 Residual is
A : Residual = Observed value – predicted value
B : Residual = Observed value + predicted value
C : Residual = predicted value – Observed value
D : None of these

Residual = Observed value – predicted value
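A one-line check of this definition with hypothetical numbers; the sign of each residual also answers the above/below-the-line questions earlier on this page.

```python
import numpy as np

# Hypothetical observed values and model predictions.
observed = np.array([4.0, 6.0, 8.0])
predicted = np.array([3.5, 6.0, 9.0])

# Residual = observed value - predicted value (one per data point).
residuals = observed - predicted
print(residuals)  # positive: above the line, zero: on it, negative: below
```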

Q.19 Suppose you plotted a scatter plot between the residuals and predicted values in linear regression and found that there is a relationship between them. Which of the following conclusions do you draw about this situation?
A : Since there is a relationship, our model is not good
B : Since there is a relationship, our model is good
C : Can't say
D : None of these

Since there is a relationship, our model is not good

Q.11 If the model has perfectly captured the information in the data, there should not be any relationship between the predicted values and the residuals.
A : TRUE
B : FALSE

TRUE

Q.20 Suppose you have a dataset D1 and you design a linear regression model with a degree 3 polynomial, and you find that the training and testing error is 0, i.e. it perfectly fits the data. What will happen when you fit a degree 4 polynomial in linear regression?
A : There is a high chance that the degree 4 polynomial will overfit the data
B : There is a high chance that the degree 4 polynomial will underfit the data
C : Can't say
D : None of these

There is a high chance that the degree 4 polynomial will overfit the data

Q.21 If the linear regression model overfits the data then
A : training error will be zero
B : test error may not be zero.
C : both A and B
D : None of these

both A and B

Q.21 Suppose you have a dataset D1 and you design a linear regression model with a degree 3 polynomial, and you find that the training and testing error is 0, i.e. it perfectly fits the data. What will happen when you fit a degree 2 polynomial in linear regression?
A : There is a high chance that the degree 2 polynomial will overfit the data
B : There is a high chance that the degree 2 polynomial will underfit the data
C : Can't say
D : None of these

There is a high chance that the degree 2 polynomial will underfit the data

Q.22 Suppose you have a dataset D1 and you design a linear regression model with a degree 3 polynomial, and you find that the training and testing error is 0, i.e. it perfectly fits the data. In terms of bias and variance, which of the following is true when you fit a degree 2 polynomial?
A : Bias will be high, variance will be high
B : Bias will be low, variance will be high
C : Bias will be high, variance will be low
D : Bias will be low, variance will be low

Bias will be high, variance will be low

Q.24 We have been given a dataset with n records, in which the input attribute is x and the output attribute is y. Suppose we use a linear regression method to model this data. To test our linear regressor, we randomly split the data into a training set and a test set. Now we gradually increase the training set size. As the training set size increases, what do you expect will happen to the mean training error?
A : Increase
B : Decrease
C : Remain constant
D : Can’t Say

Can’t Say

Q.25 What do you expect will happen with bias and variance as you increase the size of training data?
A : Bias increases and Variance increases
B : Bias decreases and Variance increases
C : Bias decreases and Variance decreases
D : Bias increases and Variance decreases

Bias increases and Variance decreases

Q.28 If an added feature is important, the training and validation error will decrease.
A : TRUE
B : FALSE

TRUE

Q.29 Suppose you find that your linear regression model is underfitting the data. In such a situation, which of the following options would you consider?
1. Add more variables
2. Start introducing polynomial degree variables
3. Remove some variables
A : 1 and 2
B : 2 and 3
C : 1 and 3
D : 1, 2 and 3

1 and 2

Q.30 Suppose you find that your linear regression model is underfitting the data. Which of the following regularization algorithms would you prefer?
A : L1
B : L2
C : Any
D : None of these

None of these

Q.28 Regularization is used in the case of overfitting.
A : TRUE
B : FALSE

TRUE
