NPTEL Deep Learning – IIT Ropar Assignment 4 Answers 2023

NPTEL Deep Learning – IIT Ropar Assignment 4 Answers 2023:- In this post, we have provided the answers for Deep Learning – IIT Ropar Assignment 4. The answers here are for reference only; please do your assignment with your own knowledge.

NPTEL Deep Learning – IIT Ropar Week 4 Assignment Answer 2023

1. Which step does Nesterov accelerated gradient descent perform before finding the update size?

  • Increase the momentum
  • Estimate the next position of the parameters
  • Adjust the learning rate
  • Decrease the step size
Answer :- For Answer Click Here
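In other words, NAG first uses the accumulated momentum to estimate where the parameters are headed, and only then evaluates the gradient at that look-ahead point. A minimal sketch of one NAG step (function and variable names here are illustrative, not from the course):

```python
def nag_update(w, u_prev, grad_fn, lr=0.1, gamma=0.9):
    """One Nesterov accelerated gradient step.

    NAG first estimates the next position of the parameters using the
    accumulated momentum, then computes the gradient at that estimated
    position before deciding the actual update.
    """
    w_lookahead = w - gamma * u_prev                 # estimate next position
    u = gamma * u_prev + lr * grad_fn(w_lookahead)   # gradient at look-ahead
    return w - u, u
```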

2. Which parameter of vanilla gradient descent controls the step size in the direction of the gradient?

  • Learning rate
  • Momentum
  • Gamma
  • None of the above
Answer :- For Answer Click Here
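For reference, the vanilla gradient descent update is w = w − η·∇w, where the learning rate η scales the step taken along the gradient. A one-line illustrative sketch:

```python
def vanilla_gd_step(w, grad, lr=0.01):
    # The learning rate lr scales how far we move along the gradient.
    return w - lr * grad
```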

3. What does the distance between two contour lines on a contour map represent?

  • The change in the output of the function
  • The direction of the function
  • The rate of change of the function
  • None of the above
Answer :- For Answer Click Here

4. Which of the following represents the contour plot of the function f(x, y) = x² − y?

Answer :- For Answer Click Here
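If the answer images are unavailable, you can reproduce the plot yourself. Each level curve x² − y = c is the parabola y = x² − c, shifted vertically. A minimal matplotlib sketch (assuming numpy and matplotlib are installed):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-3, 3, 200)
y = np.linspace(-3, 3, 200)
X, Y = np.meshgrid(x, y)
Z = X**2 - Y                      # f(x, y) = x^2 - y

plt.contour(X, Y, Z, levels=20)   # contours are the parabolas y = x^2 - c
plt.xlabel("x")
plt.ylabel("y")
plt.show()
```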

5. What is the main advantage of using Adagrad over other optimization algorithms?

  • It converges faster than other optimization algorithms.
  • It is less sensitive to the choice of hyperparameters (learning rate).
  • It is more memory-efficient than other optimization algorithms.
  • It is less likely to get stuck in local optima than other optimization algorithms.
Answer :- For Answer Click Here
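As a rough illustration of how Adagrad reduces sensitivity to the global learning rate: it scales the step for each parameter by its accumulated squared-gradient history. A sketch (names are illustrative, not the library API):

```python
import numpy as np

def adagrad_update(w, grad, v, lr=0.01, eps=1e-8):
    """Adagrad: accumulate squared gradients per parameter and divide
    the step by the root of that history, so the effective learning
    rate adapts per parameter."""
    v = v + grad**2                         # growing history of squared gradients
    w = w - lr * grad / (np.sqrt(v) + eps)  # per-parameter scaled step
    return w, v
```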

6. We are training a neural network using the vanilla gradient descent algorithm. We observe that the change in weights is small in successive iterations. What are the possible causes of this phenomenon?

  • η is large
  • ∇w is small
  • ∇w is large
  • η is small
Answer :- For Answer Click Here

7. You are given labeled data, which we call X, where rows are data points and columns are features. One column has most of its values equal to 0. Which algorithm should we use here for faster convergence and to achieve the optimal value of the loss function?

  • NAG
  • Adam
  • Stochastic gradient descent
  • Momentum-based gradient descent
Answer :- For Answer Click Here

8. What is the update rule for the ADAM optimizer?

  • w_t = w_{t−1} − lr · (m_t / (√v_t + ε))
  • w_t = w_{t−1} − lr · m_t
  • w_t = w_{t−1} − lr · (m_t / (v_t + ε))
  • w_t = w_{t−1} − lr · (v_t / (m_t + ε))
Answer :- For Answer Click Here
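For comparison with the options above, here is a standard Adam step with bias correction (an illustrative sketch, not the course's reference code; t is the 1-indexed step count):

```python
import numpy as np

def adam_update(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step: moment estimates with bias correction, then
    w_t = w_{t-1} - lr * m_hat / (sqrt(v_hat) + eps)."""
    m = beta1 * m + (1 - beta1) * grad      # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad**2   # second moment (squared gradients)
    m_hat = m / (1 - beta1**t)              # bias correction for early steps
    v_hat = v / (1 - beta2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```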

9. What is the advantage of using mini-batch gradient descent over batch gradient descent?

  • Mini-batch gradient descent is more computationally efficient than batch gradient descent.
  • Mini-batch gradient descent leads to a more accurate estimate of the gradient than batch gradient descent.
  • Mini batch gradient descent gives us a better solution.
  • Mini-batch gradient descent can converge faster than batch gradient descent.
Answer :- For Answer Click Here
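An illustrative sketch of the mini-batch loop, showing why one epoch yields many cheap updates instead of a single full-batch update (assumes numpy arrays and a user-supplied grad_fn; names are hypothetical):

```python
import numpy as np

def minibatch_gd(X, y, w, grad_fn, lr=0.01, batch_size=32, epochs=10):
    """Mini-batch gradient descent: each parameter update uses only a
    small batch, so an epoch performs n/batch_size updates."""
    n = len(X)
    for _ in range(epochs):
        idx = np.random.permutation(n)          # shuffle each epoch
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            w = w - lr * grad_fn(w, X[batch], y[batch])
    return w
```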

10. Which of the following is a variant of gradient descent that uses an estimate of the next gradient to update the current position of the parameters?

  • Momentum optimization
  • Stochastic gradient descent
  • Nesterov accelerated gradient descent
  • Adagrad
Answer :- For Answer Click Here
Course Name: Deep Learning – IIT Ropar
Category: NPTEL Assignment Answer

About Deep Learning IIT – Ropar

Deep Learning has received a lot of attention over the past few years and has been employed successfully by companies like Google, Microsoft, IBM, Facebook, Twitter, etc. to solve a wide range of problems in Computer Vision and Natural Language Processing. In this course, we will learn about the building blocks used in these Deep Learning based solutions. Specifically, we will learn about feedforward neural networks, convolutional neural networks, recurrent neural networks and attention mechanisms. We will also look at various optimization algorithms such as Gradient Descent, Nesterov Accelerated Gradient Descent, Adam, AdaGrad and RMSProp which are used for training such deep neural networks. At the end of this course, students would have knowledge of deep architectures used for solving various Vision and NLP tasks.

CRITERIA TO GET A CERTIFICATE

Average assignment score = 25% of average of best 8 assignments out of the total 12 assignments given in the course.
Exam score = 75% of the proctored certification exam score out of 100

Final score = Average assignment score + Exam score

YOU WILL BE ELIGIBLE FOR A CERTIFICATE ONLY IF AVERAGE ASSIGNMENT SCORE >=10/25 AND EXAM SCORE >= 30/75. If one of the 2 criteria is not met, you will not get the certificate even if the Final score >= 40/100.
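For example (a hypothetical score, not from the course page): an average assignment score of 20/25 and an exam score of 45/75 give a final score of 65/100, and both eligibility thresholds are met.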


NPTEL Deep Learning – IIT Ropar Assignment 4 Answers 2022

1. Consider the movement on the 3D error surface for the Vanilla Gradient Descent algorithm. Select all the options that are TRUE.

a. Smaller the gradient, slower the movement
b. Larger the gradient, faster the movement
c. Gentle the slope, smaller the gradient
d. Steeper the slope, smaller the gradient

Answer:- a, b, c

2. Pick out the drawback of the Vanilla gradient descent algorithm.

a. Very slow movement on gentle slopes
b. Increased oscillations before converging
c. Escapes minima because of long strides
d. Very slow movement on steep slopes

Answer:- b


3. Comment on the update at the t-th step in Momentum-based Gradient Descent.

a. Weighted average of gradients
b. Polynomial weighted average
c. Exponential weighted average of gradients
d. Average of the recent three gradients

Answer:- c
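To see why the momentum update is an exponentially weighted average, unroll it (a sketch; γ and η are the usual momentum and learning-rate hyperparameters):

```python
def momentum_update(u_prev, grad, lr=0.1, gamma=0.9):
    # u_t = gamma * u_{t-1} + lr * grad_t
    # Unrolling: u_t = lr * (grad_t + gamma*grad_{t-1} + gamma^2*grad_{t-2} + ...)
    # i.e. an exponentially weighted average of all past gradients.
    return gamma * u_prev + lr * grad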

4. Given a horizontal slice of the error surface as shown in the figure below, if the error at position p is 0.49, then what is the error at point q?

a. 0.70
b. 0.69
c. 0.49
d. 0

Answer:- c

5. Identify the update rule for Nesterov Accelerated Gradient Descent.

Answer:- c
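For reference (the answer options above are images), the standard NAG update rule in the usual course notation, with momentum parameter γ and learning rate η, is:

u_t = γ · u_{t−1} + η · ∇w(w_{t−1} − γ · u_{t−1})
w_t = w_{t−1} − u_t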

6. Select all the options that are TRUE for Line search.

a. w is updated using different learning rates
b. updated value of w always gives the minimum loss
c. Involves minimum calculation
d. Best value of Learning rate is used at every step

Answer:- a, b, d


7. Assume you have 1,50,000 data points and a mini-batch size of 25,000. One epoch implies one pass over the data, and one step means one update of the parameters. What is the number of steps in one epoch for Mini-Batch Gradient Descent?

a. 1
b. 1,50,000
c. 6
d. 60

Answer:- c
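Since each step consumes one mini-batch, the number of steps in one epoch is 1,50,000 / 25,000 = 6.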

8. Which of the following learning rate schedules require tuning two hyperparameters?
I. step decay
II. exponential decay
III. 1/t decay

a. I and II
b. II and III
c. I and III
d. I, II and III

Answer:- b
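Illustrative forms of the three schedules (a sketch; in the convention used here, exponential decay and 1/t decay each tune an initial rate lr0 and a decay constant k):

```python
import math

def step_decay(lr0, epoch, drop=0.5, epochs_per_drop=5):
    # Reduce the rate by a fixed factor every few epochs.
    return lr0 * drop ** (epoch // epochs_per_drop)

def exponential_decay(lr0, t, k=0.1):
    # eta_t = eta_0 * exp(-k t): tune eta_0 and k.
    return lr0 * math.exp(-k * t)

def inv_t_decay(lr0, t, k=0.1):
    # eta_t = eta_0 / (1 + k t): tune eta_0 and k.
    return lr0 / (1 + k * t)
```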

9. How can you reduce the oscillations and improve the stochastic estimate of the gradient when it is estimated from one data point at a time?

a. Mini-Batch
b. Adam
c. RMSprop
d. Adagrad

Answer:- a

10. Select all the statements that are TRUE.

a. RMSprop is very aggressive when decaying the learning rate
b. Adagrad decays the learning rate in proportion to the update history
c. In Adagrad, frequent parameters will receive very large updates because of the decayed learning rate
d. RMSprop has overcome the problem of Adagrad getting stuck when close to convergence

Answer:- b, c, d
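A sketch contrasting RMSprop with Adagrad: RMSprop replaces Adagrad's ever-growing sum of squared gradients with an exponentially decaying average, so the effective learning rate does not shrink toward zero near convergence (names are illustrative):

```python
import numpy as np

def rmsprop_update(w, grad, v, lr=0.001, beta=0.9, eps=1e-8):
    """RMSprop: exponentially decaying average of squared gradients.
    Unlike Adagrad's cumulative sum, old gradients are forgotten, so
    updates do not stall when close to convergence."""
    v = beta * v + (1 - beta) * grad**2
    w = w - lr * grad / (np.sqrt(v) + eps)
    return w, v
```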

