# NPTEL Deep Learning – IIT Ropar Assignment 3 Answers 2022

NPTEL Deep Learning – IIT Ropar Assignment 3 Answers 2022:- In this post, we have provided the answers to NPTEL Deep Learning – IIT Ropar Assignment 3. The answers are given for reference only; please complete the assignment using your own knowledge.

## NPTEL Deep Learning – IIT Ropar Assignment 3 Answers 2022 [July-Dec]

1. Assume you are developing a model that outputs a probability. Pick the appropriate activation function.

a. Linear
b. Sigmoid
c. Tanh
d. Relu

`Answer:- b`
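The sigmoid is the right choice here because it squashes any real input into the open interval (0, 1), so the output can be read directly as a probability. A minimal sketch (the function name and sample inputs are illustrative):

```python
import math

def sigmoid(x):
    """Squash any real input into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

print(round(sigmoid(0.0), 3))   # 0.5 at the midpoint
print(round(sigmoid(4.0), 3))   # close to 1 for large positive inputs
print(round(sigmoid(-4.0), 3))  # close to 0 for large negative inputs
```

Note the symmetry sigmoid(−x) = 1 − sigmoid(x), which is why the two extreme outputs sum to 1.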

2. The pre-activation at layer i can be best described as the

a. weighted sum of all the inputs at layer i
b. sum of all the inputs at layer i
c. weighted sum of all the inputs at layer i+1
d. sum of all the inputs at layer i+1
e. weighted sum of all the inputs at layer i−1
f. sum of all the inputs at layer i−1

`Answer:- a`

3. Consider a machine learning model applied to a specific set of inputs, with actual output yi = [10, 5, 7, 8, 6] and predicted output ŷi = [9, 6, 5, 7, 5]. Compute the mean squared error loss.

`Answer:- 1.60`
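The answer can be checked directly: the errors are [1, −1, 2, 1, 1], the squared errors are [1, 1, 4, 1, 1], and their mean is 8/5 = 1.6.

```python
def mse(y_true, y_pred):
    """Mean of squared differences between actual and predicted values."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

y_actual    = [10, 5, 7, 8, 6]
y_predicted = [9, 6, 5, 7, 5]
print(mse(y_actual, y_predicted))  # (1 + 1 + 4 + 1 + 1) / 5 = 1.6
```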

4. Consider a classification problem with k classes. Since the output must be a probability distribution, which of the following is the best output function?

a. Linear
b. Sigmoid
c. tanh
d. softmax

`Answer:- d`

5. Given the output yj = O(aL)j and aL = [2.5, 3.6, 4.2, 5]. If O is the softmax function, compute the value of ŷ = [ŷ1, ŷ2, ŷ3, ŷ4].

a. [0.046, 0.139, 0.253, 0.562]
b. [0.046, 0.253, 0.562, 0.139]
c. [0.253, 0.046, 0.139, 0.562]
d. [0.562, 0.046, 0.139, 0.253]

`Answer:- a`
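Softmax exponentiates each pre-activation and normalises by the sum, so the outputs are positive and sum to 1. The answer can be reproduced with a few lines:

```python
import math

def softmax(a):
    """Exponentiate each entry and normalise so the outputs sum to 1."""
    exps = [math.exp(x) for x in a]
    total = sum(exps)
    return [e / total for e in exps]

aL = [2.5, 3.6, 4.2, 5.0]
print([round(y, 3) for y in softmax(aL)])  # [0.046, 0.139, 0.253, 0.562]
```

The ordering of the inputs is preserved, which immediately rules out options b–d.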

6. The information content is high for an event when the probability of the event is

a. high
b. low
c. 1
d. maximum

`Answer:- b`

7. Assume you have four inputs to a feed-forward neural network, the first hidden layer also has four neurons, and there are three output classes. What is the dimension of the weight matrix W1 between the input layer and the first hidden layer, given that there is only one hidden layer?

a. R3×3
b. R4×3
c. R4×4
d. R3×4

`Answer:- c`
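W1 needs one weight for every (input, hidden-neuron) pair, giving a 4×4 matrix; because the input and hidden layer sizes are equal here, the answer is R^{4×4} regardless of which dimension is taken as rows. A sketch (assuming the common convention of one row per neuron in the current layer):

```python
n_in, n_hidden, n_out = 4, 4, 3

# W1 maps the 4 inputs to the 4 hidden neurons: a 4 x 4 matrix.
W1 = [[0.0] * n_in for _ in range(n_hidden)]
# W2 maps the 4 hidden neurons to the 3 output classes: a 3 x 4 matrix.
W2 = [[0.0] * n_hidden for _ in range(n_out)]

print(len(W1), len(W1[0]))  # 4 4
print(len(W2), len(W2[0]))  # 3 4
```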

8. In a Feed Forward Neural Network, if the outputs take real values then which of the following output activation function and error function do you prefer?

a. Linear, cross entropy
b. Softmax, cross entropy
c. Linear, Squared error
d. Softmax, Squared error

`Answer:- c`

9. The activation at any layer i is given by

a. hi(x)=bi+Wihi−1(x)
b. hi(x)=g(ai(x))
c. hi(x)=O(aL)
d. hi(x)=ai+Wihi−1(x)

`Answer:- b`

10. Identify the loss function for a classification problem to choose one out of K classes.

a. Squared
b. Absolute
c. Minimize L(θ) = −log(ŷl)
d. Maximize L(θ) = −log(ŷl)

`Answer:- c`
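The cross-entropy loss −log(ŷl) penalises the model when the probability ŷl assigned to the true class l is small, and is 0 when that probability is 1 (which is why it is minimised, not maximised). A minimal sketch (the probability values are illustrative):

```python
import math

def cross_entropy(y_hat, true_class):
    """Cross-entropy for one example: -log of the probability
    the model assigned to the true class."""
    return -math.log(y_hat[true_class])

y_hat = [0.046, 0.139, 0.253, 0.562]  # an example probability distribution
print(cross_entropy(y_hat, 3))  # small loss: the true class got 0.562
print(cross_entropy(y_hat, 0))  # large loss: the true class got 0.046
```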

## COURSE LAYOUT

• Week 1 :  (Partial) History of Deep Learning, Deep Learning Success Stories, McCulloch Pitts Neuron, Thresholding Logic, Perceptrons, Perceptron Learning Algorithm
• Week 2 :  Multilayer Perceptrons (MLPs), Representation Power of MLPs, Sigmoid Neurons, Gradient Descent, Feedforward Neural Networks, Representation Power of Feedforward Neural Networks
• Week 3 :  FeedForward Neural Networks, Backpropagation
• Week 4 :  Gradient Descent (GD), Momentum Based GD, Nesterov Accelerated GD, Stochastic GD, AdaGrad, RMSProp, Adam, Eigenvalues and eigenvectors, Eigenvalue Decomposition, Basis
• Week 5 :  Principal Component Analysis and its interpretations, Singular Value Decomposition
• Week 6 :  Autoencoders and relation to PCA, Regularization in autoencoders, Denoising autoencoders, Sparse autoencoders, Contractive autoencoders
• Week 7 :  Regularization: Bias Variance Tradeoff, L2 regularization, Early stopping, Dataset augmentation, Parameter sharing and tying, Injecting noise at input, Ensemble methods, Dropout
• Week 8:  Greedy Layerwise Pre-training, Better activation functions, Better weight initialization methods, Batch Normalization
• Week 9:  Learning Vectorial Representations Of Words
• Week 10: Convolutional Neural Networks, LeNet, AlexNet, ZF-Net, VGGNet, GoogLeNet, ResNet, Visualizing Convolutional Neural Networks, Guided Backpropagation, Deep Dream, Deep Art, Fooling Convolutional Neural Networks
• Week 11: Recurrent Neural Networks, Backpropagation through time (BPTT), Vanishing and Exploding Gradients, Truncated BPTT, GRU, LSTMs
• Week 12: Encoder-Decoder Models, Attention Mechanism, Attention over images

## CRITERIA TO GET A CERTIFICATE

Average assignment score = 25% of the average of the best 8 assignments out of the total 12 assignments given in the course.
Exam score = 75% of the proctored certification exam score out of 100

Final score = Average assignment score + Exam score

YOU WILL BE ELIGIBLE FOR A CERTIFICATE ONLY IF AVERAGE ASSIGNMENT SCORE >=10/25 AND EXAM SCORE >= 30/75. If one of the 2 criteria is not met, you will not get the certificate even if the Final score >= 40/100.
