Constraint programming (CP) has its origins in the 1970s in the artificial intelligence community. CP won its first successes in the 1990s by solving combinatorial application problems that mathematical programming approaches could not solve. Today, CP is broadly used, not only for its efficiency but also because CP models are compact, easy to understand, and easy to maintain when the problem to be solved evolves.
The purpose of this course is to present the basic notions underlying the foundations of constraint programming (e.g., local consistencies, constraint propagation, global consistency, polynomial cases, global constraints, non-decomposability) and to describe some of the main techniques that lead to efficient solvers (e.g., backtrack search, arc consistency, nogood learning, maintaining arc consistency, variable ordering heuristics, propagating global constraints).
The content of this course should help attendees recognize why and when CP is the paradigm to use.
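To make these notions concrete, here is a minimal sketch (not taken from the course) of backtrack search combined with a simple form of constraint propagation, forward checking, with a smallest-domain-first variable ordering heuristic, on a toy map-colouring CSP. All variable names and the example instance are invented for this illustration.

```python
# Backtrack search with forward checking on a toy map-colouring CSP.
# Illustrative sketch only; the instance below is invented.

def solve(domains, neighbours, assignment=None):
    """Pick an unassigned variable, try each value, prune neighbouring
    domains (forward checking), and recurse; undo pruning on backtrack."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(domains):
        return assignment
    # Variable ordering heuristic: smallest remaining domain first.
    var = min((v for v in domains if v not in assignment),
              key=lambda v: len(domains[v]))
    for value in list(domains[var]):
        # Propagation step: remove `value` from each unassigned neighbour.
        pruned = [n for n in neighbours[var]
                  if n not in assignment and value in domains[n]]
        if any(len(domains[n]) == 1 for n in pruned):
            continue  # this choice would wipe out a neighbour's domain
        for n in pruned:
            domains[n].remove(value)
        result = solve(domains, neighbours, {**assignment, var: value})
        if result:
            return result
        for n in pruned:
            domains[n].add(value)  # restore domains on backtrack
    return None

# Three mutually adjacent regions, three colours: a valid colouring
# must give all three regions different colours.
domains = {v: {"red", "green", "blue"} for v in "ABC"}
neighbours = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
solution = solve(domains, neighbours)
```

Forward checking is the weakest propagation scheme usually presented; the course's stronger notions (arc consistency, maintained during search) prune more of the search tree at a higher cost per node.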
Verification of Autonomous Robots: A Roboticist's Bottom-Up Approach
The complete validation and verification of the software of an autonomous robot (AR) is out of reach for now. Still, this should not prevent us from trying to verify some properties of components and of their integration. There are many approaches to consider for the V&V of AR software, e.g., writing high-level specifications and refining them into correct implementations, or developing and deploying new or adapted V&V formalisms for programming robotic components.
Moreover, the large variety of software running on an AR (from decisional components to functional components) requires the use of specific, adapted approaches. For example, learned models aside, most models used in deliberation functions are amenable to formal V&V. Indeed, these deliberation functions (planning, acting, monitoring, observing, interacting, etc.) are often model based, and the models deployed for a specific deliberation function can often be used with V&V tools. In fact, the approach of the V&V tool and that of the deliberation function are often similar (e.g., model checking, satisfiability). This does not mean that the V&V problem is solved for these decisional functions, but a lot can already be done with the existing models. The picture is not as bright for learned models (whose resulting models are also deployed at the functional level), where V&V remains a challenge. There are some attempts, but they remain mostly “architectural”, i.e., they keep the learned models out of critical decisions, or never alone in a decision.
Last, we focus on the V&V of functional-level components, which, despite their similarity to embedded-systems software, has been little addressed in robotics. We propose an approach that relies on an existing robotics specification/implementation framework (GenoM) to deploy robotic functional components, to which we harness existing, well-known formal V&V frameworks (UPPAAL, BIP, FIACRE/TINA). GenoM was originally developed by roboticists and software engineers who wanted to specify clearly and precisely how a reusable, portable, middleware-independent functional component should be written and implemented. Many complex robotic experiments have been developed and deployed over 20 years using GenoM, and it is only recently that its designers realized that the rigorous specification, the clear semantics of the implementation, and the template mechanism for synthesizing code open the door to automatic formal model synthesis and formal V&V (offline and online). This bottom-up approach, which starts from component implementations, may be more modest than top-down ones, which aim at a larger and more global view of the problem. Yet it gives encouraging results on real implementations, on which one can build more complex high-level properties to be then verified and validated offline, but also with online monitors.
Félix Ingrand. Recent Trends in Formal Validation and Verification of Autonomous Robots Software. IEEE International Conference on Robotic Computing, Feb 2019, Naples, Italy. ⟨hal-01968265⟩. https://hal.laas.fr/hal-01968265
Félix Ingrand, Malik Ghallab. Deliberation for autonomous robots: A survey. Artificial Intelligence, Elsevier, 2017, 247, pp. 10-44. ⟨10.1016/j.artint.2014.11.003⟩. https://hal.laas.fr/hal-01137921
Tesnim Abdellatif, Saddek Bensalem, Jacques Combaz, Lavindra de Silva, Félix Ingrand. Rigorous design of robot software: A formal component-based approach. Robotics and Autonomous Systems, Elsevier, 2012, 60 (12), pp. 1563-1578. ⟨10.1016/j.robot.2012.09.005⟩. https://hal.laas.fr/hal-01980036
Mohammed Foughali, Silvano Dal Zilio, Félix Ingrand. On the Semantics of the GenoM3 Framework. Rapport LAAS n° 19036, 2019. https://hal.laas.fr/hal-01992470
Introduction to data science & Hybrid learning and constraint reasoning (Tias Guns)
The course starts with an introduction to data science based on five guiding principles: that data science is a process; that looking too hard at a dataset finds things that do not generalize to unseen data; that a model needs to be evaluated in the context of its operation; that similarity matters; and that correlation is not causation.
In the second part, I give three recent examples of research in my lab where we combine machine learning (often deep learning) with constraint reasoning: preference learning in vehicle routing, perception-based visual Sudoku, and prediction + optimisation.
Logic-Based Explainable AI (Alexey Ignatiev)
This talk will overview recent advances in logic-based approaches to explainable artificial intelligence (explainable AI, XAI). It will cover propositional satisfiability (SAT) based learning of minimal representations of interpretable models, namely decision trees and decision sets. The talk will also discuss the recent logic-enabled approach to computing rigorous explanations, i.e., those that hold for a given ML model and prediction over the entire instance space and thus are deemed trustworthy. This principled approach builds on abductive reasoning and relies on reasoning engines such as SAT, satisfiability modulo theories (SMT), or mixed integer linear programming (MILP) solvers; it enables one not only to compute provably correct explanations for ML models but also to efficiently validate the heuristic explanations of several popular explainers, assess their quality, and refine or repair them. Finally, the talk will discuss the relationship between explainable AI and ML model verification.
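The abductive notion of explanation sketched above, a set of feature literals that entails the prediction over the entire instance space, can be illustrated on a toy model. Everything below (the model, the features) is invented for this example, and the brute-force enumeration stands in for the SAT/SMT/MILP oracles that real approaches use.

```python
from itertools import combinations, product

# Toy binary classifier over three boolean features: predicts 1
# iff (x0 and x1) or x2.  Invented for this illustration.
def model(x0, x1, x2):
    return int((x0 and x1) or x2)

def entails(fixed, prediction):
    """True iff every completion of the fixed feature literals yields
    the same prediction, i.e. the literals entail the prediction over
    the entire instance space (checked here by brute force)."""
    free = [i for i in range(3) if i not in fixed]
    for values in product([0, 1], repeat=len(free)):
        point = dict(fixed)
        point.update(zip(free, values))
        if model(point[0], point[1], point[2]) != prediction:
            return False
    return True

def abductive_explanation(instance):
    """Smallest subset of the instance's literals entailing its prediction."""
    prediction = model(*instance)
    for size in range(len(instance) + 1):
        for subset in combinations(range(len(instance)), size):
            fixed = {i: instance[i] for i in subset}
            if entails(fixed, prediction):
                return fixed
    return None

# For (1, 1, 0) the model predicts 1; fixing x0=1 and x1=1 already
# guarantees that prediction whatever x2 is.
explanation = abductive_explanation((1, 1, 0))
```

Unlike heuristic explainers, such an explanation is provably correct: no setting of the remaining features can flip the prediction, which is exactly the rigour the talk refers to.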
Automated Theorem Proving (Catherine Dubois)
Automated deduction, also called automated theorem proving, aims at proving mathematical or non-mathematical theorems with the help of a computer. Proving means here showing that a theorem is a logical consequence of a set of hypotheses. Research in this field started in the 1960s, for example with the DPLL algorithm for solving the SAT problem in 1962, or Robinson's resolution principle in 1965. Automated (ATP) and interactive theorem provers (ITP) were then developed and are today used to formalize large tracts of mathematics and to verify complex hardware and software systems.
The course will start with an overview of some formal proof techniques and the associated ATPs and ITPs. A focus on their use in the deductive verification of programs and tools will follow. Both families of tools can be combined to obtain better and more trusted results, and the course presents some advances and projects aiming at the interoperability of proof tools. The course will end with a quick presentation of some existing work applying machine learning techniques to automated theorem proving.
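As a pointer to what the DPLL algorithm mentioned above does, here is a minimal, illustrative sketch of the procedure (unit propagation plus branching) on clause sets in a DIMACS-like integer encoding (negative integer = negated variable); it is not production code and omits the learning and heuristics of modern SAT solvers.

```python
# Minimal DPLL procedure for propositional satisfiability.
# A formula is a list of clauses; a clause is a list of non-zero
# integers, negative meaning a negated variable.  Illustrative sketch.

def assign(clauses, assignment, literal):
    """Simplify the formula under `literal = true`: drop satisfied
    clauses and remove the falsified literal from the others."""
    new_assignment = dict(assignment)
    new_assignment[abs(literal)] = literal > 0
    new_clauses = [[l for l in c if l != -literal]
                   for c in clauses if literal not in c]
    return new_clauses, new_assignment

def dpll(clauses, assignment=None):
    if assignment is None:
        assignment = {}
    # Unit propagation: repeatedly assign the literal of any unit clause.
    while True:
        unit = next((c[0] for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        clauses, assignment = assign(clauses, assignment, unit)
        if any(len(c) == 0 for c in clauses):
            return None  # empty clause: conflict
    if not clauses:
        return assignment  # all clauses satisfied
    # Branch on the first literal of the first remaining clause.
    literal = clauses[0][0]
    for choice in (literal, -literal):
        new_clauses, new_assignment = assign(clauses, assignment, choice)
        if not any(len(c) == 0 for c in new_clauses):
            result = dpll(new_clauses, new_assignment)
            if result is not None:
                return result
    return None

# (x1 or x2) and (not x1 or x2) and (not x2 or x3): satisfiable,
# and any model must set x2 and x3 to true.
model = dpll([[1, 2], [-1, 2], [-2, 3]])
```

Modern SAT solvers extend this skeleton with conflict-driven clause learning, watched literals, and restart policies, but the propagate-then-branch core is unchanged since 1962.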