**NPTEL Deep Learning Assignment 4 Answers 2023:-** In this post, we have provided the answers to Deep Learning – IIT Ropar Assignment 4. The answers here are for reference only. Please complete the assignment using your own knowledge.

## NPTEL Deep Learning Week 4 Assignment Answers 2023

**1. Which of the following cannot be realized with a single-layer perceptron (only input and output layer)?**

a. AND

b. OR

c. NAND

d. XOR

Answer :- Click Here
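
As a quick illustration (a toy sketch of my own, not part of the original question), a brute-force search over a small grid of weights and biases finds a separating threshold unit for AND but none for XOR, since XOR is not linearly separable:

```python
from itertools import product

def separable(truth_table):
    """Search a coarse grid for a single-layer perceptron
    out = step(w1*x1 + w2*x2 + b) matching the truth table."""
    grid = [v / 2 for v in range(-4, 5)]  # weights/bias in {-2.0, ..., 2.0}
    for w1, w2, b in product(grid, repeat=3):
        if all((w1 * x1 + w2 * x2 + b > 0) == bool(y)
               for (x1, x2), y in truth_table.items()):
            return True
    return False

AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print(separable(AND), separable(XOR))  # → True False
```

No weight grid can make XOR succeed, because no single line separates its positive and negative examples; AND, OR, and NAND are all linearly separable.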

**2. For a function f(θ₀, θ₁), if θ₀ and θ₁ are initialized at a local minimum, then what should be the values of θ₀ and θ₁ after a single iteration of gradient descent?**

a. θ₀ and θ₁ will update as per the gradient descent rule

b. θ₀ and θ₁ will remain the same

c. Depends on the values of θ₀ and θ₁

d. Depends on the learning rate

Answer :- Click Here
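
The intuition can be checked with a one-line sketch (the function f(θ) = θ² is my own example): at a local minimum the gradient is zero, so a gradient-descent step leaves the parameter unchanged whatever the learning rate is.

```python
def grad_step(theta, grad, lr):
    # One iteration of gradient descent: theta ← theta − lr · ∇f(theta)
    return theta - lr * grad(theta)

grad = lambda t: 2 * t             # gradient of f(t) = t², minimum at t = 0
print(grad_step(0.0, grad, 0.1))   # → 0.0
print(grad_step(0.0, grad, 10.0))  # → 0.0 (learning rate is irrelevant)
```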

**3. Choose the correct option:**

i) Inability of a model to obtain sufficiently low training error is termed as Overfitting

ii) Inability of a model to reduce the large margin between training and testing error is termed as Overfitting

iii) Inability of a model to obtain sufficiently low training error is termed as Underfitting

iv) Inability of a model to reduce the large margin between training and testing error is termed as Underfitting

a. Only option (i) is correct

b. Both options (ii) and (iii) are correct

c. Both options (ii) and (iv) are correct

d. Only option (iv) is correct

Answer :- Click Here

**4. **

Answer :- Click Here

**5. Choose the correct option. The gradient of a continuous and differentiable function:**

i) is zero at a minimum

ii) is non-zero at a maximum

iii) is zero at a saddle point

iv) decreases in magnitude as you get closer to the minimum

a. Only option (i) is correct

b. Options (i), (iii) and (iv) are correct

c. Options (i) and (iv) are correct

d. Only option (ii) is correct

Answer :- Click Here
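
These properties can be checked numerically (the test functions t³ and t² are my own illustrative choices): a central-difference estimate gives a zero gradient at the saddle-like stationary point of t³ at t = 0, and a shrinking gradient magnitude as t² approaches its minimum.

```python
def num_grad(f, t, h=1e-6):
    # Central-difference approximation of the derivative at t
    return (f(t + h) - f(t - h)) / (2 * h)

f = lambda t: t ** 3               # stationary (saddle-like) point at t = 0
print(round(num_grad(f, 0.0), 6))  # → 0.0

g = lambda t: t ** 2               # minimum at t = 0
print(abs(num_grad(g, 0.5)) > abs(num_grad(g, 0.1)))  # → True
```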

**6. The input to a SoftMax activation function is [3, 1, 2]. What will be the output?**

a. [0.58, 0.11, 0.31]

b. [0.43, 0.24, 0.33]

c. [0.60, 0.10, 0.30]

d. [0.67, 0.09, 0.24]

Answer :- Click Here
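
The SoftMax output can be verified by direct computation, softmax(zᵢ) = exp(zᵢ) / Σⱼ exp(zⱼ) (a short check sketch, using only the standard library):

```python
import math

def softmax(z):
    m = max(z)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

print([round(p, 2) for p in softmax([3, 1, 2])])  # → [0.67, 0.09, 0.24]
```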

**7. **

Answer :- Click Here

**8. Which of the following options is true?**

a. In Stochastic Gradient Descent, a small batch of samples is selected randomly instead of the whole data set for each iteration, and the large weight updates lead to faster convergence.

b. In Stochastic Gradient Descent, the whole data set is processed together for the update in each iteration.

c. Stochastic Gradient Descent considers only one sample for updates and has noisier updates.

d. Stochastic Gradient Descent is a non-iterative process.

Answer :- Click Here

**9. What are the steps for using a gradient descent algorithm?**

1. Calculate the error between the actual value and the predicted value
2. Re-iterate until you find the best weights of the network
3. Pass an input through the network and get values from the output layer
4. Initialize random weights and biases
5. Go to each neuron that contributes to the error and change its respective values to reduce the error

a. 1, 2, 3, 4, 5

b. 5, 4, 3, 2, 1

c. 3, 2, 1, 5, 4

d. 4, 3, 1, 5, 2

Answer :- Click Here
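
The loop above can be sketched for a single linear neuron (the toy data, learning rate, and iteration count are my own assumptions, not part of the question):

```python
import random

random.seed(0)
data = [(x, 2 * x + 1) for x in [0.0, 1.0, 2.0, 3.0]]  # target: y = 2x + 1

w, b = random.random(), random.random()  # step 4: initialize random weights
lr = 0.05

for epoch in range(2000):                # step 2: re-iterate
    for x, target in data:
        pred = w * x + b                 # step 3: pass input, get output
        error = pred - target            # step 1: calculate the error
        w -= lr * error * x              # step 5: adjust each parameter
        b -= lr * error                  #          to reduce the error

print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

The order in which the steps appear in the code (initialize, forward pass, error, update, repeat) is exactly what the answer options are asking about.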

**10.**

Answer :- Click Here

| Course Name | Deep Learning |
| --- | --- |
| Category | NPTEL Assignment Answer |
| Home | Click Here |
| Join Us on Telegram | Click Here |