NPTEL An Introduction to Artificial Intelligence Assignment 9 Answers

NPTEL An Introduction to Artificial Intelligence Assignment 9 Answers 2022:- All the answers provided here are meant to help students as a reference; you must complete and submit your assignment based on your own knowledge.

What is An Introduction to Artificial Intelligence?

An Introduction to Artificial Intelligence, offered by IIT Delhi, introduces a variety of concepts in the field of artificial intelligence. It discusses the philosophy of AI and how to model a new problem as an AI problem. It describes a variety of models, such as search, logic, Bayes nets, and MDPs, which can be used to model a new problem, and teaches the first algorithms for solving each formulation. The course prepares a student to take a variety of focused, advanced courses in various subfields of AI.

CRITERIA TO GET A CERTIFICATE

Average assignment score = 25% of the average of best 8 assignments out of the total 12 assignments given in the course.
Exam score = 75% of the proctored certification exam score out of 100

Final score = Average assignment score + Exam score

YOU WILL BE ELIGIBLE FOR A CERTIFICATE ONLY IF THE AVERAGE ASSIGNMENT SCORE >= 10/25 AND THE EXAM SCORE >= 30/75. If either of the two criteria is not met, you will not get the certificate, even if the Final score >= 40/100.

An Introduction to Artificial Intelligence Answers
Assignment 1: Click Here
Assignment 2: Click Here
Assignment 3: Click Here
Assignment 4: Click Here
Assignment 5: Click Here
Assignment 6: Click Here
Assignment 7: Click Here
Assignment 8: Click Here
Assignment 9: Click Here
Assignment 10: Click Here
Assignment 11: Click Here
Assignment 12: NA

NPTEL An Introduction to Artificial Intelligence Assignment 9 Answers 2022:-

Q1. Which of the following is true about the MAP (Maximum a posteriori estimate) estimation learning framework?

a. It is equivalent to Maximum Likelihood learning with infinite data 
b. It is equivalent to Maximum Likelihood learning if P(θ) is independent of θ 
c. It can be used without having any prior knowledge about the parameters
d. The performance of MAP is better with dense data compared to sparse data

Answer: a, d
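For option (a), here is a small sketch showing that the MAP estimate approaches the MLE as the data grows. The Beta(5, 5) prior and the head counts below are made up purely for illustration:

```python
# Sketch: MAP vs MLE for a Bernoulli parameter p under a Beta(a, b) prior.
# (The prior Beta(5, 5) and the counts below are made up for illustration.)

def mle(heads, n):
    # Maximum likelihood estimate: the empirical frequency of heads
    return heads / n

def map_estimate(heads, n, a, b):
    # MAP estimate: mode of the Beta(heads + a, n - heads + b) posterior
    return (heads + a - 1) / (n + a + b - 2)

# Sparse data: the prior pulls the MAP estimate away from the MLE
print(mle(7, 10), map_estimate(7, 10, 5, 5))              # 0.7 vs ~0.611

# Dense data: the MAP estimate converges to the MLE (option a)
print(mle(7000, 10000), map_estimate(7000, 10000, 5, 5))  # 0.7 vs ~0.6998
```

With 10 observations the prior still matters; with 10,000 it is swamped by the data, which is exactly why MAP and maximum likelihood agree in the infinite-data limit.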


Q2. What facts are true about smoothing?

a. Smoothed estimates of probabilities fit the evidence better than un-smoothed estimates. 
b. The process of smoothing can be viewed as imposing a prior distribution over the set of parameters. 
c. Smoothing allows us to account for data which wasn't seen in the evidence. 
d. Smoothing is a form of regularization which prevents overfitting in Bayesian networks.

Answer: a, c
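Option (c) is easy to see in code: with add-one (Laplace) smoothing, a value never observed in the evidence still gets non-zero probability. A minimal sketch, where the sample data and domain are made up:

```python
from collections import Counter

def add_one_estimates(samples, domain):
    # Laplace (add-one) smoothing: add 1 to every count, so values never
    # seen in the evidence still receive non-zero probability.
    counts = Counter(samples)
    total = len(samples) + len(domain)
    return {v: (counts[v] + 1) / total for v in domain}

probs = add_one_estimates(["a", "a", "b"], domain=["a", "b", "c"])
print(probs)  # a: 0.5, b: ~0.33, c: ~0.17 -- "c" is unseen yet gets 1/6
```

Note that "c" never appears in the samples, yet its smoothed probability is 1/6 rather than 0, and the estimates still sum to 1.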

Q3. Consider three boolean variables X, Y, and Z. Consider the following data:

There can be multiple Bayesian networks that could model such a universe. Suppose we model it with the Bayesian network shown below:

If the value of the parameter P(¬z|x,¬y) is m/n, such that m and n have no common factors, then what is the value of m+n? Assume add-one smoothing.

Answer: 343.6

Q4. Consider the following Bayesian Network from which we wish to compute P(x|z) using rejection sampling:

Answer: 86.9
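The actual network and numbers for this question are in the figure, but the mechanics of rejection sampling can be sketched on a hypothetical two-node network X → Z with made-up CPT values: sample the whole network top-down, throw away every sample that disagrees with the evidence z, and estimate P(x|z) from the survivors.

```python
import random

# Hypothetical network X -> Z with made-up CPTs (NOT the one from Q4),
# just to illustrate rejection sampling for P(x | z).
P_X = 0.3                                # P(X = true)
P_Z_GIVEN_X = {True: 0.8, False: 0.1}    # P(Z = true | X)

def sample():
    # Sample the network in topological order: first X, then Z given X.
    x = random.random() < P_X
    z = random.random() < P_Z_GIVEN_X[x]
    return x, z

def rejection_sample_p_x_given_z(n):
    # Keep only samples consistent with the evidence z = true.
    kept = [x for x, z in (sample() for _ in range(n)) if z]
    return sum(kept) / len(kept)

random.seed(0)
print(rejection_sample_p_x_given_z(100_000))  # ~ 24/31 = 0.774...
```

Exact answer for these made-up numbers: P(x|z) = (0.3·0.8) / (0.3·0.8 + 0.7·0.1) = 24/31. Note how roughly 69% of the samples are rejected here, which is why rejection sampling becomes wasteful when the evidence is unlikely.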

Q5. Assume that we toss a biased coin with heads probability p, 100 times. We get heads 66 times out of 100. If the Maximum Likelihood estimate of the parameter p is m/n where m and n don’t have common factors,
then the value of m+n is?

Answer: 83 (the MLE is 66/100 = 33/50, so m + n = 33 + 50 = 83)
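The computation here is just the empirical frequency reduced to lowest terms; Python's Fraction does the reduction automatically:

```python
from fractions import Fraction

# MLE for a biased coin is the empirical frequency of heads;
# Fraction reduces 66/100 to lowest terms automatically.
p = Fraction(66, 100)
print(p)                            # 33/50
print(p.numerator + p.denominator)  # 83
```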


Q6. Now, assume that we had a prior distribution over p as shown below:

Answer: 6.5

Q7. Which of the following tasks are not suited for a goal-based agent?

Answer: b, c

Q8. Which of the following are true?

a. Rejection sampling is very wasteful when the probability of the evidence appearing in the samples is very low. 
b. We perform conditional probability weighting on the samples while doing Gibbs sampling in the MCMC algorithm, since we have already fixed the evidence variables. 
c. We perform a random walk while sampling variables in Likelihood Weighting and MCMC with Gibbs sampling, but not in Rejection sampling. 
d. Likelihood Weighting functions well if we have many evidence variables, with some samples having nearly all the total weight.

Answer: a
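Option (a) is about wasted samples; likelihood weighting avoids that waste by fixing the evidence variables instead of sampling them, and weighting each sample by the likelihood of the evidence. A sketch on a hypothetical two-node network X → Z with made-up CPTs:

```python
import random

# Hypothetical network X -> Z with made-up CPTs, to contrast with
# rejection sampling: here no sample is ever thrown away.
P_X = 0.3
P_Z_GIVEN_X = {True: 0.8, False: 0.1}

def likelihood_weighting_p_x_given_z(n):
    num = den = 0.0
    for _ in range(n):
        x = random.random() < P_X  # sample only the non-evidence variable X
        w = P_Z_GIVEN_X[x]         # weight by likelihood of evidence z = true
        num += w * x
        den += w
    return num / den

random.seed(1)
print(likelihood_weighting_p_x_given_z(100_000))  # ~ 24/31 = 0.774...
```

Every one of the 100,000 samples contributes here, whereas rejection sampling would discard all samples with z = false.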

Q9. Consider the following Bayesian Network:

a. P(C|A,B,D,F,E) = α · P(C|A) · P(C|B) 
b. P(C|A,B,D,F,E) = α · P(C|A,B) 
c. P(C|A,B,D,F,E) = α · P(C|A,B) · P(D|C,E) 
d. P(C|A,B,D,F,E) = α · P(C|A,B,D,E)

Answer: b, c


Q10. Which of the following options are correct about the environment of Tic Tac Toe?

a. Fully observable 
b. Stochastic 
c. Continuous 
d. Static

Answer: a, d

Disclaimer:- We do not claim 100% surety of these solutions; they are based on our sole expertise, and by posting these answers we are simply looking to help students as a reference, so we urge you to do your assignment on your own.


