Keynote speakers

The keynote speakers for ICMLA 2013 are:

Dr. David W. Aha - Enabling the Freedom of Choice: Machine Learning for Goal Reasoning


Suppose we wanted an agent to dynamically choose its objectives in a complex environment. How should it be designed, what benefits could accrue, and what tradeoffs would require attention? In this talk, I will describe the status of our research on this topic, which we call goal reasoning. It has received attention in several disciplines (e.g., cognitive architectures, game AI, meta-reasoning, planning, robotics) and is emerging as a critical core component of intelligent autonomous systems (e.g., for the control of unmanned systems, intelligent simulated adversaries, or situation awareness tools). Furthermore, competent models of goal reasoning require solving several machine learning tasks. I will describe these tasks, summarize how they have been addressed, and identify those that require substantial further attention.
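The goal-reasoning cycle the abstract alludes to is often described as: detect a discrepancy between expectations and observations, explain it, formulate a responsive goal, and manage the goal agenda. A minimal sketch of one such cycle follows; all names, structures, and the priority-based goal management here are illustrative assumptions, not taken from Dr. Aha's systems:

```python
def goal_reasoning_step(observed, expected, goal_agenda, explain, formulate):
    """One illustrative cycle of a goal-driven-autonomy-style loop.

    Steps: discrepancy detection -> explanation -> goal formulation
    -> goal management (here, naive priority ordering).
    """
    # Discrepancy detection: which expectations did the world violate?
    discrepancies = {k: observed.get(k) for k in expected
                     if observed.get(k) != expected[k]}
    if not discrepancies:
        return goal_agenda  # no surprise: keep pursuing current goals

    cause = explain(discrepancies)      # hypothesize why expectations failed
    new_goal = formulate(cause)         # propose a goal that responds to it
    # Goal management: insert by priority (real systems are far richer)
    return sorted(goal_agenda + [new_goal], key=lambda g: g["priority"])


# Toy usage: a patrolling agent detects an unexpected threat and
# formulates a higher-priority evasion goal.
explain = lambda discrepancies: "threat_appeared"
formulate = lambda cause: {"name": "evade", "priority": 0}
agenda = [{"name": "patrol", "priority": 1}]

agenda_after = goal_reasoning_step(
    observed={"threat": True}, expected={"threat": False},
    goal_agenda=agenda, explain=explain, formulate=formulate)
```

Each of the four steps is itself a candidate for the machine learning tasks the talk surveys (e.g., learning expectations, learning to explain, learning which goals to formulate).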


David W. Aha received his PhD from UCI in 1990 (on Instance-Based Learning) and held post-doctoral positions at the Turing Institute, the Johns Hopkins University, and the University of Ottawa. He currently leads the Adaptive Systems Section within NRL's Navy Center for Applied Research in AI in Washington, DC. In addition to ML, his research interests include autonomous agents, case-based reasoning, game AI, mixed-initiative reasoning, planning, text analysis, and trust. He has received 3 Best Paper awards, serves on journal editorial boards (ML, APIN, TIST, ACS), co-edited 3 special issues, and participated on 20 dissertation committees. He founded the UCI Repository of Machine Learning Databases, served as an AAAI Councillor, co-founded the AI Video Competitions, and co-organized several international meetings related to AI (e.g., the AAAI-10 Workshop on Goal Directed Autonomy). He's particularly excited about goal reasoning, a topic that he has pursued since 2009 with many talented colleagues.


Dr. Pedro Domingos - Sum-Product Networks: Deep Models with Tractable Inference


Big data makes it possible in principle to learn very rich probabilistic models, but inference in them is prohibitively expensive. Since inference is typically a subroutine of learning, in practice learning such models is very hard. Sum-product networks (SPNs) are a new model class that squares this circle by providing maximum flexibility while guaranteeing tractability. In contrast to Bayesian networks and Markov random fields, SPNs can remain tractable even in the absence of conditional independence. SPNs are defined recursively: an SPN is either a univariate distribution, a product of SPNs over disjoint variables, or a weighted sum of SPNs over the same variables. It's easy to show that the partition function, all marginals and all conditional MAP states of an SPN can be computed in time linear in its size. SPNs have most tractable distributions as special cases, including hierarchical mixture models, thin junction trees, and nonrecursive probabilistic context-free grammars. I will present generative and discriminative algorithms for learning SPN weights, and an algorithm for learning SPN structure. SPNs have achieved impressive results in a wide variety of domains, including object recognition, image completion, collaborative filtering, and click prediction. Our algorithms can easily learn SPNs with many layers of latent variables, making them arguably the most powerful type of deep learning to date. (Joint work with Rob Gens and Hoifung Poon.)
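The recursive definition in the abstract translates almost directly into code. The toy network below (its variables, weights, and Bernoulli leaves are illustrative, not from the talk) shows the key property: the partition function, marginals, and joint probabilities all come from a single bottom-up pass that is linear in the network's size, with marginalization handled simply by letting the summed-out leaves evaluate to 1:

```python
class Leaf:
    """Univariate Bernoulli distribution over one binary variable."""
    def __init__(self, var, p):
        self.var, self.p = var, p
    def value(self, evidence):
        if self.var not in evidence:
            return 1.0  # variable marginalized out: sum over both states
        return self.p if evidence[self.var] == 1 else 1.0 - self.p

class Product:
    """Product node: children must have disjoint variable scopes."""
    def __init__(self, children):
        self.children = children
    def value(self, evidence):
        out = 1.0
        for c in self.children:
            out *= c.value(evidence)
        return out

class Sum:
    """Sum node: weighted children over the same variable scope."""
    def __init__(self, weighted_children):  # list of (weight, child)
        self.weighted = weighted_children
    def value(self, evidence):
        return sum(w * c.value(evidence) for w, c in self.weighted)

# A tiny SPN over binary variables x1, x2: a mixture of two products.
spn = Sum([
    (0.6, Product([Leaf("x1", 0.8), Leaf("x2", 0.3)])),
    (0.4, Product([Leaf("x1", 0.2), Leaf("x2", 0.9)])),
])

Z = spn.value({})                        # partition function (1.0: weights sum to 1)
p_x1 = spn.value({"x1": 1})              # marginal P(x1=1), x2 summed out
p_joint = spn.value({"x1": 1, "x2": 0})  # joint P(x1=1, x2=0)
```

Note that the same `value` pass serves every query: no separate inference algorithm is needed, which is exactly the tractability guarantee the abstract describes.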


Dr. Pedro Domingos is a leader in the fields of machine learning, artificial intelligence and data mining. He is a professor of computer science at the University of Washington in Seattle, and received his PhD from the University of California at Irvine. He is the author or co-author of over 200 technical publications, and has received numerous awards for his research. He is a co-founder of the International Machine Learning Society, was program co-chair of the 2003 ACM International Conference on Knowledge Discovery and Data Mining, and has served on numerous program committees and journal editorial boards.

Dr. Dan Moldovan - Learning Text Semantics

Text understanding has been a long-standing goal of artificial intelligence. Recent advances in computational linguistics, in particular lexical semantics, coupled with probabilistic inference allow us to reveal the explicit and implicit meaning of text.

In this talk we present an unsupervised, linguistically informed method called semantic calculus that yields new inferred relations by composing elementary relations provided by syntactic and semantic parsers. A Markov Logic Network implementation is used for generating probabilistic inferences. Results of the approach for textual entailment, sentence similarity and paraphrasing are discussed.
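The composition step can be illustrated with a toy transitive-closure pass over relation triples. The relation names and composition table below are hypothetical stand-ins for the elementary relations a semantic parser would supply, and the probabilistic weighting that the Markov Logic Network implementation provides is omitted entirely:

```python
# Hypothetical composition table: which relation results from chaining
# two elementary semantic relations (names are illustrative only).
COMPOSITION = {
    ("ISA", "ISA"): "ISA",
    ("PART_WHOLE", "PART_WHOLE"): "PART_WHOLE",
    ("LOCATION", "PART_WHOLE"): "LOCATION",
}

def semantic_closure(triples):
    """Repeatedly compose (a, r1, b) and (b, r2, c) into (a, r3, c)
    until no new relations can be inferred."""
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(facts):
            for (b2, r2, c) in list(facts):
                if b == b2 and (r1, r2) in COMPOSITION:
                    inferred = (a, COMPOSITION[(r1, r2)], c)
                    if inferred not in facts:
                        facts.add(inferred)
                        changed = True
    return facts

# Elementary relations from the parsers; closure infers engine PART_WHOLE vehicle.
parsed = {("engine", "PART_WHOLE", "car"), ("car", "PART_WHOLE", "vehicle")}
inferred = semantic_closure(parsed)
```

In the actual method, each such inference would carry a weight in the Markov Logic Network rather than being a hard deduction, so conflicting or uncertain compositions can be resolved probabilistically.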


Dr. Dan Moldovan is a Professor of Computer Science at the University of Texas at Dallas. He is the Founder and Co-Director of the Human Language Technology Research Institute at UTD. His publications include more than 300 papers in Natural Language Processing, Artificial Intelligence and Distributed Processing. He earned a PhD from Columbia University and held faculty positions at the University of Southern California and Southern Methodist University. He worked at Bell Laboratories and served as Program Director at NSF.