The midterm will include conceptual questions that require short textual answers, as well as problem-solving questions.

For the exam, you should know the overall workings of, and be able to run through the basic steps of, the following algorithms:

* Forward Sampling, Rejection Sampling, Likelihood Weighting (see the sampling sketch after these lists)
* Filtering, Prediction, Smoothing, and Viterbi in temporal models (see the filtering sketch after these lists)
* Roll-up filtering and particle filtering in DBNs
* Value and policy iteration (see the value-iteration sketch after these lists)
* Belief state update in POMDPs
* DT-learning with any given measure of splitting value

Understand and be able to answer questions related to the following concepts:

- Background on Bnets, including: basic probability axioms; how to obtain a conditional distribution P(A|B) from a full joint P(A,B,C,D); how to decompose a joint distribution using the chain rule and Bayes' rule (see the worked equations after these lists); how to obtain a given posterior probability from a Bayesian network.
- Markov blanket of a node in a BN; d-separation.
- Conditional dependencies in a given Bnet.
- Understanding of the Andes system and its user models as they are described in the Andes paper we discussed in class (such as motivation, general structure of the Andes networks, the relations they represent and how they quantify them, and the types of inferences the Andes models are used for).
- Compact probability distributions (e.g., Noisy-OR).
- Why we need approximate algorithms in Bayesian networks.
- What sampling is and how to do it.
- Why sampling works: why the different sampling algorithms we have covered generate consistent predictions.
- Temporal models: what they are and why we need them.
- HMMs and Dynamic Bnets.
- Decision processes and Markov decision processes: definitions of planning horizon, policy, optimal policy, and the different types of reward functions.
- POMDPs: belief states, observation model, belief state update, optimal policies in POMDPs, POMDPs as MDPs, Dynamic Decision Networks.
- Basic concepts in machine learning, such as bias, overfitting, and evaluation.
- What a decision tree is, how to build one, and how to use it for classification.
- Measure of information gain for choosing splitting attributes (see the information-gain sketch after these lists).
- Expressiveness of decision trees (in general and for specific trees).
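The sketches below are study aids, not course-provided code; every network, number, and name in them is invented for illustration. First, a minimal likelihood-weighting sketch on a tiny two-node network A -> B, estimating P(A=true | B=true): evidence variables are fixed rather than sampled, and each sample is weighted by the likelihood of the evidence.

```python
# Likelihood-weighting sketch on a tiny two-node network A -> B.
# All CPT numbers are invented for illustration.
import random

p_A = 0.3                              # P(A = true)
p_B_given_A = {True: 0.8, False: 0.1}  # P(B = true | A)

def lw_estimate(n_samples=100_000, seed=0):
    """Estimate P(A = true | B = true) by likelihood weighting."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n_samples):
        a = rng.random() < p_A   # sample the non-evidence variable A from its prior
        w = p_B_given_A[a]       # weight = likelihood of the evidence B = true given A
        den += w
        if a:
            num += w
    return num / den

# Exact posterior: 0.3*0.8 / (0.3*0.8 + 0.7*0.1) ~= 0.774; the estimate should be close.
print(lw_estimate())
```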
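Next, a minimal filtering (forward-algorithm) sketch for an HMM, using an invented two-state model: each step predicts the next state distribution through the transition model, weights by the sensor model, and normalizes.

```python
# HMM filtering sketch on a toy two-state model (all probabilities invented).
states = (0, 1)   # e.g., 0 = "rain", 1 = "no rain"
T = [[0.7, 0.3],  # T[i][j] = P(X_t = j | X_{t-1} = i)
     [0.3, 0.7]]
O = [[0.9, 0.2],  # O[e][j] = P(evidence = e | X_t = j), e in {0, 1}
     [0.1, 0.8]]
prior = [0.5, 0.5]

def filter_step(belief, evidence):
    """One filtering update: predict with T, weight by the sensor model, normalize."""
    predicted = [sum(belief[i] * T[i][j] for i in states) for j in states]
    unnorm = [O[evidence][j] * predicted[j] for j in states]
    z = sum(unnorm)
    return [p / z for p in unnorm]

belief = prior
for e in [0, 0, 1]:  # an invented evidence sequence
    belief = filter_step(belief, e)
    print(belief)    # posterior over the current state after each observation
```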
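A minimal value-iteration sketch on a toy two-state MDP; the states, actions, transition model, rewards, and discount factor are all made up. It applies the Bellman update until the values converge, then extracts a greedy policy.

```python
# Value-iteration sketch on an invented two-state MDP.
states = ["s0", "s1"]
actions = ["a", "b"]
gamma = 0.9  # discount factor

# T[s][a] = list of (next_state, probability); R[s][a] = immediate reward
T = {
    "s0": {"a": [("s0", 0.7), ("s1", 0.3)], "b": [("s1", 1.0)]},
    "s1": {"a": [("s0", 0.4), ("s1", 0.6)], "b": [("s1", 1.0)]},
}
R = {"s0": {"a": 0.0, "b": 1.0}, "s1": {"a": 2.0, "b": 0.0}}

V = {s: 0.0 for s in states}  # initial value estimates
for _ in range(1000):
    # Bellman update: best expected one-step reward plus discounted future value
    V_new = {
        s: max(R[s][a] + gamma * sum(p * V[s2] for s2, p in T[s][a])
               for a in actions)
        for s in states
    }
    done = max(abs(V_new[s] - V[s]) for s in states) < 1e-6
    V = V_new
    if done:
        break

# Greedy policy extraction from the converged values
policy = {
    s: max(actions,
           key=lambda a: R[s][a] + gamma * sum(p * V[s2] for s2, p in T[s][a]))
    for s in states
}
print(V, policy)
```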
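For the Bnet background items, the worked equations below are generic reconstructions of the standard identities (conditioning from a full joint, the chain rule, and Bayes' rule), not tied to any particular exam problem:

```latex
% Conditioning from a full joint: marginalize out C and D, then normalize.
P(A \mid B) = \frac{P(A, B)}{P(B)}
            = \frac{\sum_{c}\sum_{d} P(A, B, c, d)}{\sum_{a}\sum_{c}\sum_{d} P(a, B, c, d)}

% Chain rule: decompose the joint into a product of conditionals.
P(A, B, C, D) = P(A)\, P(B \mid A)\, P(C \mid A, B)\, P(D \mid A, B, C)

% Bayes' rule: invert a conditional.
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}
```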
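Finally, a small information-gain sketch for choosing a splitting attribute in DT-learning; the tiny dataset and attribute names are invented. Gain(S, A) = H(S) - sum_v (|S_v|/|S|) H(S_v), where H is the entropy of the class labels.

```python
# Information-gain sketch for choosing a splitting attribute (invented toy data).
from collections import Counter
from math import log2

def entropy(labels):
    """H(S) = -sum_i p_i * log2(p_i) over the class proportions p_i."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attr, target="label"):
    """Gain(S, A) = H(S) - sum over values v of |S_v|/|S| * H(S_v)."""
    labels = [r[target] for r in rows]
    n = len(rows)
    remainder = 0.0
    for v in {r[attr] for r in rows}:
        subset = [r[target] for r in rows if r[attr] == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

rows = [
    {"outlook": "sunny",    "windy": False, "label": "no"},
    {"outlook": "sunny",    "windy": True,  "label": "no"},
    {"outlook": "rain",     "windy": False, "label": "yes"},
    {"outlook": "rain",     "windy": True,  "label": "no"},
    {"outlook": "overcast", "windy": False, "label": "yes"},
]
# The attribute with the highest gain would be chosen as the split.
for attr in ("outlook", "windy"):
    print(attr, round(information_gain(rows, attr), 3))
```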