My lectures will cover the basics of graphical models, also known as Bayes(ian) (Belief) Net(work)s. We will cover the basic motivation for using probabilities to represent and reason about uncertain knowledge in machine learning, and introduce graphical models as a qualitative and quantitative specification of large joint probability distributions. We will see how many common classification, regression and clustering models can be cast in this framework. We will cover the basic algorithm for inference in graphical models, called belief propagation. We will also cover the major approaches to learning models from data (parameter estimation). The course will focus on directed models and the basic algorithms but, time and student interest permitting, I will also try to give some preliminary explanations of undirected models, approximate inference and learning, structure discovery and current applications.
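To make the "quantitative specification of a joint distribution" concrete, here is a minimal sketch in Python of a made-up four-variable directed model (the variable names, graph, and probability numbers are illustrative assumptions, not material from the lectures). A directed graphical model factorizes the joint as a product of local conditional distributions, one per node given its parents:

```python
from itertools import product

# Hypothetical toy Bayes net: Cloudy -> Sprinkler, Cloudy -> Rain,
# {Sprinkler, Rain} -> WetGrass. The joint then factorizes as
#   P(C, S, R, W) = P(C) * P(S | C) * P(R | C) * P(W | S, R)
# All numbers below are made up for illustration.

p_c = {True: 0.5, False: 0.5}                                  # P(C=c)
p_s = {True: {True: 0.1, False: 0.9},
       False: {True: 0.5, False: 0.5}}                          # p_s[c][s] = P(S=s | C=c)
p_r = {True: {True: 0.8, False: 0.2},
       False: {True: 0.2, False: 0.8}}                          # p_r[c][r] = P(R=r | C=c)
p_w = {(True, True): 0.99, (True, False): 0.9,
       (False, True): 0.9, (False, False): 0.0}                 # P(W=True | S=s, R=r)

def joint(c, s, r, w):
    """Joint probability of one full assignment, via the factorization above."""
    pw = p_w[(s, r)] if w else 1.0 - p_w[(s, r)]
    return p_c[c] * p_s[c][s] * p_r[c][r] * pw

# Sanity check: the factored joint sums to 1 over all 2^4 assignments,
# even though we never wrote down a 16-entry table directly.
total = sum(joint(c, s, r, w) for c, s, r, w in product([True, False], repeat=4))
print(round(total, 10))  # 1.0
```

The point of the factorization is the savings it buys: the full joint over n binary variables has 2^n entries, while the local conditional tables here grow only with each node's number of parents.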