Home

Tutorials

Tutorials Venue

Michael Beetz
University of Bremen
Cognitive Robotics. 
Monday Morning, 27th, Polytech SC104
Slides

Knowledge processing and reasoning for robotic agents performing everyday manipulation

The tutorial will describe knowledge processing and reasoning methods
embedded in an autonomous robot so that it can perform everyday
manipulation actions such as cleaning up, preparing meals,
and setting a table more competently. The tutorial will cover:

- requirements for robot knowledge processing and reasoning,
- semantic robot description language,
- reasoning with the execution time data structures of control systems,
- issues in translating web instructions into robot action plans,
- integrating knowledge processing and perception,
- representing and acquiring semantic, object-based environment maps,
- prediction-based action parameterization,
- simulation-based plan projection, and
- probabilistic reasoning mechanisms.

The tutorial is accompanied by various open-source software tools,
including download instructions and tutorials, that can be obtained from
ias.cs.tum.edu/research/knowledge (KnowRob),
ias.cs.tum.edu/research/cram (CRAM plan language), and
ias.cs.tum.edu/research/probcog (PROBCOG).

Peter Flach
University of Bristol
Unity in diversity: the breadth and depth of Machine Learning explained for AI researchers.
Monday Afternoon, 27th, Building 5 Amphi 5.02
Slides

Machine learning is one of the most active areas in artificial intelligence, but the diversity of the field can be intimidating for newcomers. The aim of this tutorial is to do justice to the field’s incredible richness without losing sight of its unifying principles. I will discuss three main families of machine learning models: logical, geometric, and probabilistic. Unity is achieved by concentrating on the central role of tasks and features. An innovative use of ROC plots provides further insights into the behaviour and performance of machine learning algorithms.
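As an illustration of the ROC analysis mentioned above (a sketch of the standard construction, not material from the tutorial itself), the points of an ROC curve can be obtained by ranking instances by classifier score and sweeping a threshold from high to low; each positive moves the curve up, each negative moves it right. The function name and data below are assumptions for the example; the simple version here does not handle tied scores specially and assumes both classes are present.

```python
def roc_points(scores, labels):
    """Return (FPR, TPR) points for a scoring classifier.

    scores: real-valued classifier scores, higher = more positive.
    labels: 1 for positive, 0 for negative (both classes assumed present).
    """
    ranked = sorted(zip(scores, labels), reverse=True)  # best-scored first
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]                # threshold above all scores
    for _score, label in ranked:
        if label == 1:
            tp += 1                      # a true positive: curve moves up
        else:
            fp += 1                      # a false positive: curve moves right
        points.append((fp / neg, tp / pos))
    return points

# Example: a classifier that ranks one negative above one positive.
example = roc_points([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 0])
```

The area under these points is the familiar AUC, which for a ranking like the example above reflects how often a random positive outscores a random negative.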


Christophe Lecoutre and Olivier Roussel
University of Artois
Constraint Reasoning.
Tuesday Afternoon, 28th, Building 5. Amphi 5.02

At the heart of constraint reasoning, inference methods play a central role, typically reducing the size of the search space through filtering algorithms.
In this tutorial, besides a brief review of the fundamentals of modelling and search, we shall survey the most successful inference algorithms for a wide range of constraint frameworks.
For CSP (Constraint Satisfaction Problem), we shall present general-purpose algorithms for enforcing the property known as GAC (Generalized Arc Consistency), as well as specific GAC algorithms for table constraints and some global constraints.
For WCSP (Weighted Constraint Satisfaction Problem), we shall introduce the basic and more recent approaches based on cost transfer.
For SAT (Satisfiability Problem), we shall recall the usual Unit Propagation procedure and the inference techniques used to improve its power, focusing on the link with GAC on CSP.
For pseudo-Boolean constraints, we shall present the cutting-plane inference system and establish links with SAT and the other consistency algorithms.
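To make the Unit Propagation mentioned above concrete, here is a minimal sketch (an illustration written for this page, not code from the tutorial): whenever a clause has all but one literal falsified, the remaining literal is forced, and forced assignments are applied until a fixed point or a conflict. Literals use the DIMACS convention (a negative integer denotes a negated variable).

```python
def unit_propagate(clauses, assignment):
    """Apply unit propagation until fixpoint.

    clauses: list of clauses, each a list of DIMACS-style int literals
             (e.g. -2 means "not x2").
    assignment: dict mapping variable -> bool.
    Returns (extended_assignment, conflict) where conflict is True
    if some clause became empty under the assignment.
    """
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned = []
            satisfied = False
            for lit in clause:
                var, want = abs(lit), lit > 0
                if var in assignment:
                    if assignment[var] == want:
                        satisfied = True      # clause already true
                        break
                else:
                    unassigned.append(lit)
            if satisfied:
                continue
            if not unassigned:
                return assignment, True       # empty clause: conflict
            if len(unassigned) == 1:          # unit clause: literal is forced
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment, False

# Example: x1, and x1 -> x2, and x2 -> x3 forces all three variables.
result, conflict = unit_propagate([[1], [-1, 2], [-2, 3]], {})
```

This naive version rescans every clause on each round; the watched-literal schemes used in modern SAT solvers achieve the same propagation far more efficiently, and the GAC algorithms covered in the tutorial play the analogous filtering role for CSP.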


The slides of the tutorial will be available soon.


Eyke Hüllermeier and Johannes Fuernkranz
Universität Marburg and TU Darmstadt
Preference Learning.
Tuesday Afternoon, 28th, Building 5. Amphi 5.03

The primary goal of this tutorial is to survey the field of preference learning in its current stage of development. The presentation will focus on a systematic overview of different types of preference learning problems, methods and algorithms to tackle these problems, and metrics for evaluating the performance of preference models induced from data.
More details can be found at http://www.ke.tu-darmstadt.de/events/PL-12/tutorial.html



Andreas Krause, Stefanie Jegelka
ETH Zurich, UC Berkeley
Submodularity in Artificial Intelligence.
Monday Morning, 27th, Polytech SC005

Many problems in AI are inherently discrete, and the resulting discrete optimization problems are often computationally extremely challenging. While convexity is an important property when solving continuous optimization problems, submodularity, often viewed as a discrete analog of convexity, is closely tied to the tractability of many discrete problems: its structure is key to solving them. Moreover, the characterizing property of submodular functions, diminishing marginal returns, emerges naturally in various settings and is a rich abstraction for a myriad of problems.
Long recognized for its importance in combinatorial optimization and game theory, submodularity is now appearing in an increasing number of applications in AI, in particular in machine learning and computer vision. These include probabilistic inference, structure learning, sparse representation and reconstruction, unsupervised and active learning, optimized information gathering, summarization and influence maximization. Recent work extends submodular optimization to sequential decision making under uncertainty, and addresses learning of submodular functions, combinatorial problems with submodular loss functions and more efficient optimization algorithms. The wide range of applications and of theoretical questions make submodularity a relevant and interesting topic to many researchers in AI.
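The diminishing-returns property can be seen in a classic example: the greedy algorithm for maximizing a monotone submodular function under a cardinality constraint, shown here for max-coverage (a hedged sketch written for this page, not part of the tutorial materials). By the result of Nemhauser, Wolsey and Fisher, greedy selection achieves a (1 - 1/e) approximation for this problem class.

```python
def greedy_max_coverage(sets, k):
    """Greedily pick at most k sets to maximize the number of covered elements.

    sets: dict mapping set name -> a Python set of elements.
    Coverage is monotone submodular: each set's marginal gain can only
    shrink as more elements become covered (diminishing returns).
    """
    chosen, covered = [], set()
    for _ in range(k):
        best, best_gain = None, 0
        for name, elems in sets.items():
            if name in chosen:
                continue
            gain = len(elems - covered)   # marginal gain of adding this set
            if gain > best_gain:
                best, best_gain = name, gain
        if best is None:                  # no set adds anything new
            break
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

# Example (assumed data): summarization-style selection of 2 of 4 sets.
sets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}, "D": {1, 6}}
chosen, covered = greedy_max_coverage(sets, 2)
```

The same greedy template underlies several of the applications listed above, such as sensor placement, influence maximization, and document summarization.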

This tutorial introduces AI researchers to the concept of submodular functions, their optimization, their applications, and recent research directions. Illustrative examples and animations will help develop an intuition for the concept and the algorithms. The tutorial aims to provide an overview of existing results that are important to AI researchers, discuss AI applications with an emphasis on machine learning and computer vision, and provide pointers to further, detailed resources. Sample older slides are available at submodularity.org. The tutorial will also cover new results and directions from the past five years.

The tutorial will be divided into four sections:
1. What is submodularity and what is special about it? Is my problem submodular?
2. What are example applications of submodular maximization and minimization?
3. What algorithms exist for optimizing submodular functions?
4. What are new directions?

Organizers’ and presenters’ expertise
Both authors have considerable expertise in the area. Andreas Krause, assistant professor at ETH Zurich, has investigated several aspects of submodularity in machine learning, with a particular emphasis on optimized information gathering and active learning. His work in this area has won awards at several conferences (KDD, ICML, UAI, AAAI, IPSN). He has previously held tutorials at ICML, IJCAI and LION. Stefanie Jegelka is a postdoctoral researcher at UC Berkeley. Her Ph.D. thesis introduces submodular costs into combinatorial minimization problems in machine learning and computer vision, and addresses practical algorithms and online learning with submodular functions. The authors have also organized a series of workshops on Discrete Optimization in Machine Learning (DISCML) at the annual NIPS conference.



Leon van der Torre
University of Luxembourg
Logics for multi-agent systems.
Monday Morning, 27th, Polytech SC003
Slides

A variety of logics is used to reason about multiagent systems. For example, temporal logic has been imported from computer science, in particular ATL, to reason about the powers of agents, and extended with modalities for cognitive attitudes. More recently, logics for agent interaction have become popular, such as argumentation theory for dialogues and deontic logic for coordination. In this tutorial we give an overview of logics used in multiagent systems, and discuss their combination and interaction. We illustrate the combination of multiagent logics with an example from agreement technologies.



Francesca Rossi, Kristen Brent Venable, Toby Walsh
University of Padova, Italy; Tulane University and IHMC; NICTA Australia & University of New South Wales
Preference reasoning and aggregation.
Tuesday Morning, 28th, Building 5. Amphi 5.02
Slides

Preferences are ubiquitous in everyday decision making. They are therefore an essential ingredient of many reasoning tools. This tutorial will start by presenting the main approaches to model and reason with preferences, such as soft constraints and CP-nets.
We will also consider issues such as preference elicitation and various forms of uncertainty arising from missing, imprecise, or vague preferences.
We will then consider multi-agent settings, where several agents express their preferences over common objects and the system should aggregate such preferences into a single satisfying decision. In this setting, we will exploit notions and results from different fields, such as social choice, matching, and multi-criteria decision making.
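One of the simplest social-choice mechanisms for the multi-agent aggregation described above is the Borda rule, sketched here as an assumed illustration (the function name and example rankings are invented for this page, not drawn from the tutorial): each agent ranks the candidates, a candidate in position i of an m-candidate ranking earns m - 1 - i points, and the candidate with the highest total wins.

```python
def borda_winner(rankings):
    """Aggregate agents' rankings with the Borda count.

    rankings: list of rankings, each a list of candidates ordered
              best-first; all rankings cover the same candidates.
    Returns (winner, scores) where scores maps candidate -> total points.
    """
    scores = {}
    for ranking in rankings:
        m = len(ranking)
        for i, cand in enumerate(ranking):
            # position 0 (top choice) earns m - 1 points, last earns 0
            scores[cand] = scores.get(cand, 0) + (m - 1 - i)
    winner = max(scores, key=scores.get)
    return winner, scores

# Example (assumed data): three agents rank three candidates.
rankings = [["a", "b", "c"], ["b", "a", "c"], ["a", "c", "b"]]
winner, scores = borda_winner(rankings)
```

Rules like Borda are only one option; the tutorial's social-choice perspective also covers when such rules can be manipulated and how they interact with compact preference formalisms such as soft constraints and CP-nets.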

Intended audience:
The tutorial is meant for students as well as for researchers who may be interested in an introduction to preference modelling and reasoning. The target audience also includes researchers from several AI fields interested in understanding the applications of preferences to their area.
No specific prerequisite knowledge is essential. Some general knowledge of AI and computational complexity is advisable.