This course will focus on different classes of probabilistic models and how, based on those models, one deduces actionable information from data. The course will start by reviewing basic concepts of probability, including random variables and first- and second-order statistics. Building from this foundation, the course will then cover probabilistic models, including vector models (e.g., the multivariate Gaussian), temporal models (e.g., stationary processes and hidden Markov models), and graphical models (e.g., factor graphs). On the inference side, topics such as hypothesis testing, marginalization, estimation, and message passing will be covered. Applications of these tools span a vast range of data processing domains, including machine learning, communications, search, recommendation systems, finance, robotics, and navigation.
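As a small taste of the inference topics above, here is a minimal sketch (not taken from the course materials) of a binary hypothesis test on Gaussian data: given samples, a log-likelihood ratio decides between two candidate means. The means, variance, and sample size are illustrative choices, not values from the course.

```python
import math
import random

def gaussian_log_pdf(x, mean, var):
    """Log-density of a scalar Gaussian N(mean, var)."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def log_likelihood_ratio(samples, mean0, mean1, var):
    """Sum over samples of log p1(x) - log p0(x); positive favors H1."""
    return sum(
        gaussian_log_pdf(x, mean1, var) - gaussian_log_pdf(x, mean0, var)
        for x in samples
    )

random.seed(0)
# Draw data under hypothesis H1: mean 1.0, variance 1.0 (illustrative values).
data = [random.gauss(1.0, 1.0) for _ in range(200)]

llr = log_likelihood_ratio(data, mean0=0.0, mean1=1.0, var=1.0)
decision = "H1" if llr > 0 else "H0"
print(decision)
```

With enough samples the log-likelihood ratio concentrates around its expected value under the true hypothesis, so the test reliably selects H1 here; the same likelihood-ratio idea underlies many of the detection and estimation methods the course develops.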