Title: Statistical Relational Learning
Prof. Pedro Domingos, Department of Computer Science and Engineering
University of Washington
Most machine learning algorithms assume that data points are i.i.d. (independent and identically distributed), but this is seldom the case in reality. Objects have varying distributions and interact with each other in complex ways. Domains where this is prominently the case include the Web, social networks, information extraction, perception, medical diagnosis/epidemiology, molecular and systems biology, ubiquitous computing, and others. Statistical relational learning (SRL) addresses these problems by modeling relations among objects and allowing multiple types of objects in the same model. This can greatly improve predictive accuracy and yield better understanding of the relevant phenomena, but it can also be much more complex than i.i.d. learning. In particular, inference becomes a key issue. However, there has been much progress in SRL in the last decade, and many mature techniques are now available.
This tutorial will consist of three parts. The first part is an overview of the four foundational areas of SRL: statistical learning, relational learning, probabilistic inference, and logical inference. The second part puts these pieces together, introducing the key ideas in SRL and state-of-the-art SRL algorithms. It uses Markov logic, a language that combines Markov networks and first-order logic, as the unifying framework. The third part shows how to develop SRL applications using the Alchemy open-source implementation of Markov logic, using examples from several of the areas above.
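The core idea of Markov logic can be illustrated without Alchemy: a weighted first-order formula defines a Markov network over all its groundings, and a world's probability is proportional to exp(sum of weights of satisfied groundings). The sketch below is a toy, not Alchemy syntax; the domain, predicate names, and weight are invented for illustration, and inference is done by exhaustive enumeration, which only works at this tiny scale.

```python
from itertools import product
from math import exp

# Toy Markov logic network: domain {A, B}, predicates Smokes and Cancer,
# one weighted formula "Smokes(x) => Cancer(x)" with weight w.
people = ["A", "B"]
w = 1.5  # illustrative weight for the implication formula

def n_true_groundings(world):
    """Count groundings of Smokes(x) => Cancer(x) that hold in a world."""
    return sum(1 for p in people
               if (not world[("Smokes", p)]) or world[("Cancer", p)])

atoms = [(pred, p) for pred in ("Smokes", "Cancer") for p in people]
worlds = [dict(zip(atoms, vals))
          for vals in product([False, True], repeat=len(atoms))]

# Unnormalized weight of each world: exp(w * number of true groundings).
weights = [exp(w * n_true_groundings(wld)) for wld in worlds]
Z = sum(weights)  # partition function

# Marginal P(Cancer(A)), and P(Cancer(A) | Smokes(A)) to show the
# formula's soft-constraint effect.
p_cancer_A = sum(wt for wld, wt in zip(worlds, weights)
                 if wld[("Cancer", "A")]) / Z
num = sum(wt for wld, wt in zip(worlds, weights)
          if wld[("Cancer", "A")] and wld[("Smokes", "A")])
den = sum(wt for wld, wt in zip(worlds, weights) if wld[("Smokes", "A")])
print(p_cancer_A, num / den)
```

With a positive weight, conditioning on Smokes(A) raises the probability of Cancer(A) without making the implication a hard constraint; raising w toward infinity recovers pure logical entailment.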
Pedro Domingos is Associate Professor of Computer Science and Engineering at the University of Washington. His research interests are in artificial intelligence, machine learning and data mining. He received a PhD in Information and Computer Science from the University of California at Irvine, and is the author or co-author of over 150 technical publications. He is a member of the editorial board of the Machine Learning journal, co-founder of the International Machine Learning Society, and past associate editor of JAIR. He was program co-chair of KDD-2003 and SRL-2009, and has served on numerous program committees. He is an AAAI Fellow, and has received several awards, including a Sloan Fellowship, an NSF CAREER Award, a Fulbright Scholarship, an IBM Faculty Award, and best paper awards at KDD-98, KDD-99, PKDD-05 and EMNLP-09.
Title: Markov Chain Mixing with Applications
Prof. Prasad Tetali, School of Mathematics and School of Computer Science, Georgia Tech
The topic of Markov Chain Monte Carlo (MCMC) based algorithms has developed a great deal in the past couple of decades, motivated primarily by nontrivial applications in computer science, statistics and statistical physics. In these tutorial-style lectures, the speaker will cover several elements of the modern theory of Markov chains. Starting with the basic concepts, a main focus will be on probabilistic and analytic techniques to bound the time to reach equilibrium (the so-called mixing time). The techniques include coupling, spectral and entropy methods, and canonical-path-based arguments. Several examples and open problems will be described. Applications of MCMC in the context of machine learning, medium access control protocols and network games will be discussed. Time permitting, connections between dynamical mixing and spatial mixing in the context of spin systems of statistical physics, as well as the geometry of solutions to random instances of constraint satisfaction problems (CSPs), will briefly be mentioned.
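The mixing time discussed above can be made concrete on a tiny example. The sketch below (illustrative; the chain and the conventional threshold of 1/4 are standard choices, not from the abstract) computes the exact distribution of a lazy random walk on an n-cycle after each step and counts how many steps it takes for the total variation distance to the uniform stationary distribution to drop below 1/4.

```python
# Exact total-variation distance to stationarity for a lazy random walk
# on an n-cycle -- a small illustration of "time to reach equilibrium".
n = 8
# Lazy walk: stay put with prob 1/2, move to each neighbor with prob 1/4.
# Laziness makes the chain aperiodic, so it converges to uniform.
P = [[0.0] * n for _ in range(n)]
for i in range(n):
    P[i][i] = 0.5
    P[i][(i - 1) % n] += 0.25
    P[i][(i + 1) % n] += 0.25

def step(dist):
    """One step of the chain: new_j = sum_i dist_i * P[i][j]."""
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def tv_to_uniform(dist):
    """Total variation distance to the uniform distribution on n states."""
    return 0.5 * sum(abs(p - 1.0 / n) for p in dist)

dist = [0.0] * n
dist[0] = 1.0  # worst case: start concentrated at one vertex
t = 0
while tv_to_uniform(dist) > 0.25:  # 1/4 is the conventional threshold
    dist = step(dist)
    t += 1
print(t, tv_to_uniform(dist))
```

For the cycle the mixing time grows like n^2, which is exactly the kind of bound the coupling and spectral techniques in the lectures are designed to prove.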
Prasad Tetali is a Professor in the School of Mathematics with a joint appointment in the School of Computer Science at Georgia Tech, where he has been since 1994. He received his PhD from the Courant Institute of Mathematical Sciences, NYU, in 1991. He was named a SIAM Fellow in 2009. He is the Editor-in-Chief of SIAM J. on Discrete Mathematics, and is on the editorial board of several journals in probability and combinatorics. He is the current director of the Algorithms and Randomness Center (ARC) at Georgia Tech.
Title: All of Graphical Models
Prof. Jerry Zhu, Department of Computer Sciences, University of Wisconsin-Madison
This tutorial is a concise overview of probabilistic graphical models as we encounter them in machine learning. The goal is to string together all major aspects of graphical models. It assumes little or only fragmented knowledge of this area, and is accessible to everyone with some machine learning experience. Topics include formalism (Bayesian networks, Markov random fields, factor graphs), inference (message passing algorithms, exponential family and variational interpretation, Markov Chain Monte Carlo), parameter learning (maximum likelihood and the Expectation Maximization algorithm), and structure learning (glasso and nonparanormal).
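The message passing mentioned above can be shown end-to-end on the smallest interesting case: a three-node chain MRF with binary variables. The sketch below (potentials are invented for illustration) computes exact marginals with forward and backward sum-product messages and verifies them against brute-force enumeration.

```python
from itertools import product

# Sum-product on a 3-node chain MRF: p(x) proportional to
# phi0(x0) phi1(x1) phi2(x2) psi(x0,x1) psi(x1,x2), all variables binary.
psi = [[1.0, 0.5], [0.5, 1.0]]              # pairwise potential (favors agreement)
phi = [[1.0, 2.0], [1.0, 1.0], [3.0, 1.0]]  # unary potentials for x0, x1, x2

# Forward messages: m_fwd[i][b] = sum_a phi[i-1][a] * m_fwd[i-1][a] * psi[a][b]
m_fwd = [[1.0, 1.0]]
for i in range(1, 3):
    m_fwd.append([sum(phi[i-1][a] * m_fwd[i-1][a] * psi[a][b]
                      for a in range(2)) for b in range(2)])

# Backward messages: m_bwd[j][a] = sum_b phi[j+1][b] * m_bwd[j+1][b] * psi[a][b]
m_bwd = [[1.0, 1.0] for _ in range(3)]
for j in (1, 0):
    m_bwd[j] = [sum(phi[j+1][b] * m_bwd[j+1][b] * psi[a][b]
                    for b in range(2)) for a in range(2)]

def marginal(i):
    """Node marginal from the product of unary term and incoming messages."""
    unnorm = [phi[i][v] * m_fwd[i][v] * m_bwd[i][v] for v in range(2)]
    Z = sum(unnorm)
    return [u / Z for u in unnorm]

def brute(i):
    """Brute-force marginal by summing over all 2^3 joint configurations."""
    tot = [0.0, 0.0]
    for x in product(range(2), repeat=3):
        w = (phi[0][x[0]] * phi[1][x[1]] * phi[2][x[2]]
             * psi[x[0]][x[1]] * psi[x[1]][x[2]])
        tot[x[i]] += w
    Z = sum(tot)
    return [t / Z for t in tot]

for i in range(3):
    assert all(abs(a - b) < 1e-9 for a, b in zip(marginal(i), brute(i)))
print(marginal(1))
```

On trees, as here, sum-product is exact; on graphs with cycles the same updates give loopy belief propagation, which is approximate and connects to the variational interpretation covered in the tutorial.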
Jerry Zhu is an Associate Professor in Computer Science at the University of Wisconsin-Madison. He received his PhD from the School of Computer Science at Carnegie Mellon University in 2005, and was a recipient of the NSF CAREER award in 2010. He has given tutorials at ICML, ACL, and Chicago machine learning summer school.
Title: Performance Evaluation for Learning Algorithms
Prof. Nathalie Japkowicz, School of Electrical Engineering and Computer Science, University of Ottawa, Canada
Machine learning is now a mature field with many sophisticated learning approaches commonly used in a variety of applications. Because of its practical relevance, it has become of critical importance that researchers and practitioners alike be aware of both the proper methodologies and the respective questions that arise when evaluating learning approaches in either an experimental or a practical setting. This tutorial aims at educating as well as encouraging the machine-learning community to get involved in the discussion of these important issues. The tutorial will discuss major aspects of machine learning evaluation with a focus on classification algorithms. It will highlight the different questions, assumptions and constraints involved in the evaluation process. In particular, it will examine a number of techniques in great detail and discuss their relevance and shortcomings in different contexts. It will also present R and WEKA tools that can be used to assist with the process.
The tutorial will span four areas of machine learning evaluation:
- Performance Measures (Evaluation Metrics and Graphical Methods)
- Error Estimation/Re-sampling Techniques
- Statistical Significance Testing
- Issues in Data Set Selection and Evaluation Benchmarks Design
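Two of the areas above, performance measures and statistical significance testing, can be sketched in a few lines. The example below is illustrative only: the confusion matrix and the per-fold accuracies are made-up numbers, and the critical value 2.262 is the standard two-sided 5% point of the t-distribution with 9 degrees of freedom.

```python
from math import sqrt
from statistics import mean, stdev

# Performance measures from a (hypothetical) binary confusion matrix.
tp, fp, fn, tn = 40, 10, 5, 45
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(f"precision={precision:.3f} recall={recall:.3f} F1={f1:.3f}")

# Paired t-test over (hypothetical) 10-fold cross-validation accuracies
# of two classifiers A and B on the same folds.
acc_a = [0.81, 0.79, 0.84, 0.80, 0.83, 0.78, 0.82, 0.85, 0.80, 0.81]
acc_b = [0.78, 0.77, 0.80, 0.79, 0.80, 0.76, 0.79, 0.82, 0.78, 0.79]
diffs = [a - b for a, b in zip(acc_a, acc_b)]
# t statistic: mean difference over its standard error (stdev is the
# sample standard deviation, so df = n - 1 = 9).
t = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))
# Two-sided 5% critical value for df = 9 is approximately 2.262.
print(f"t={t:.2f}, significant at 5%: {abs(t) > 2.262}")
```

Note that the paired t-test over cross-validation folds is known to be optimistic because the folds share training data; this is exactly the kind of assumption-versus-context issue the tutorial examines.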
The slides of the talk are available at http://www.site.uottawa.ca/~nat/Talks/ICMLA-Tutorial.pptx. For the latest version, please visit Dr. Japkowicz's website.
Nathalie Japkowicz is a Professor of Computer Science in the School of Electrical Engineering and Computer Science at the University of Ottawa. She received her Ph.D. in Computer Science from Rutgers University in 1999. She is vice-president of the Canadian Artificial Intelligence Association and was program co-chair of Canadian AI 2009.