From jq@lirmm.lirmm.fr Mon Sep 12 10:36:29 1994
Received: from [193.49.104.48] ([193.49.104.48]) by lirmm.lirmm.fr (8.6.9/8.6.4) with SMTP id KAA03026; Mon, 12 Sep 1994 10:36:23 +0200
Message-Id: <199409120836.KAA03026@lirmm.lirmm.fr>
X-Sender: jq@lirmm.lirmm.fr
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
X-Mailer: Eudora F1.4
Date: Mon, 12 Sep 1994 10:36:46 +0100
To: gascuel, hr, reitz, js, mephu, pierre, pompidor, vignal, cdlh, gracy, jappy
From: Derek Sleeman (forwarded by jq@lirmm.lirmm.fr (Joel Quinqueton))
Subject: SSS95 announcement
Status: RO

From cox@cc.gatech.edu Tue Aug 30 00:11 BST 1994

------------------------------------------------------------------------------

                 REPRESENTING MENTAL STATES AND MECHANISMS

                     AAAI 1995 Spring Symposium Series
                           March 27 - 29, 1995
                Stanford University, Stanford, California

                          CALL FOR PARTICIPATION

The ability to reason about mental states and cognitive mechanisms facilitates performance in a variety of tasks. The purpose of this symposium is to enhance our ability to construct programs that employ commonsense knowledge of the mental world in an explicit representational format that can be shared across domains and systems. Such knowledge can, for example, assist story-understanding programs to understand characters that learn, forget, pay attention, make a decision, and change their mind.

The need to represent knowledge of mental activity transcends the usual disciplinary boundaries to include most reasoning tasks in which systems interact with users, coordinate behaviors with autonomous agents, or consider their own beliefs and limitations. For example, distributed problem-solving agents can use knowledge of mental phenomena to predict and explain the behavior of cooperating agents. In machine learning, a system's knowledge of its own mental states, capacities and mechanisms crucially determines the reliability with which it can diagnose and repair reasoning failures.
The focus of the symposium, however, is on the representation of the mental world and the sharing/reuse of such representations, rather than on the applications that such representations support.

Important questions to consider:

o (SHARABILITY)   What tools / techniques can facilitate the sharing of
                  representations among researchers?

o (REUSE)         What portions of the representation can be transferred
                  across reasoning tasks?

o (ARCHITECTURE)  How can functional models of reasoning components be
                  represented explicitly?

o (LOGICAL FORM)  What statements can be logically asserted about the self
                  and its beliefs?  What benefits arise from such
                  representations?

o (APPLICATIONS)  How can knowledge of mental phenomena be used in tasks
                  ranging from student instruction to intelligent interface
                  control?

o (INTROSPECTION) What must an intelligent system know about its own mental
                  states and processes?

PLEASE MONITOR THE WEB FOR ADDITIONAL INFORMATION:

    ftp://ftp.cc.gatech.edu/pub/ai/symposia/aaai-spring-95/home_page.html

The symposium will consist of invited talks, individual presentations, and group discussion. "Key position" papers describing possible topics for submitted papers will be available at the network address listed above.

If you wish to present, submit up to 12 pages (fewer pages are encouraged) in 12-point type, with 1" margins. Others interested in attending should submit a research abstract or position paper (3 pp. max). Financial assistance is available for student participation. Submit 1 postscript copy to freed@picasso.arc.nasa.gov or 4 hardcopies to Michael Freed, MS 262-2, NASA ARC, Moffett Field, CA 94035.

SUBMISSION DATES:

Submissions for the symposia are due on October 28, 1994. Notification of acceptance will be given by November 30, 1994. Material to be included in the working notes of the symposium must be received by January 20, 1995.
ORGANIZING COMMITTEE:

Co-chairs:
    Michael Cox     cox@cc.gatech.edu
                    (Georgia Tech, AI/Cognitive-Science Group, College of Computing)
    Michael Freed   freed@picasso.arc.nasa.gov
                    (NASA Ames Research Center, Human Factors Group)

    Gregg Collins   (Northwestern University, Institute of the Learning Sciences)
    Bruce Krulwich  (Andersen Consulting, Center for Strategic Technology Research)
    Cindy Mason     (NASA Ames Research Center, Artificial Intelligence Group)
    John McCarthy   (Stanford University, Department of Computer Science)
    John Self       (Lancaster University, Department of Computing)

------------------------------------------------------------------------------

                 REPRESENTING MENTAL STATES AND MECHANISMS
                          (EXTENDED DESCRIPTION)

The ability to reason about mental states, actions and mechanisms facilitates performance in a variety of tasks. Knowledge of this kind can, for example, enable story-understanding programs to comprehend what is happening when characters in a story learn, forget, decide on an action or change their mind. Distributed problem-solving agents can use knowledge of mental phenomena to predict and explain the behavior of cooperating agents. In machine learning, a system's knowledge of its own mental states, capacities and mechanisms crucially determines the reliability with which it can diagnose and repair reasoning failures.

Tutoring systems and intelligent learning environments have been proposed that incorporate an explicit model of the learner's reasoning, as well as of the student's current knowledge state in the learning domain, in order to facilitate monitoring, feedback and assistance. Models of mental behavior and mechanisms play similar roles in tasks ranging from interface control to inferring user preferences in software agents. Our ability to build such systems is thus enhanced by the explicit representation of mental states and activities, and by reasoning about such representations.
In formulating representations for a system that relies on such reasoning, a variety of fundamental issues must be taken into account. Some of these issues have been discussed in the artificial intelligence and philosophy-of-mind literature. For example, Marr (1982), Newell (1981) and Dennett (1987) have each distinguished several levels of abstraction at which mental phenomena can be reasoned about. Important questions remain concerning, for example, which level of abstraction is most appropriate for a given problem. However, despite substantial discussion of such fundamental questions, and a variety of implemented systems that explore these problems (see below), the task of actually producing representations of mental phenomena remains difficult and labor-intensive.

We therefore hope in this symposium to emphasize not only basic representational issues, but the representations themselves. In particular, we would like to facilitate the sharing of representations among researchers from a range of AI subfields. Ideally, this sharing would lead to the creation of a large knowledge base of highly reusable representations to serve as building blocks in the development of various AI systems (cf. Hayes, 1985).

The symposium advances this goal in three ways. First, by emphasizing the problem of representing knowledge of mental states and mechanisms, we hope to encourage the submission of papers that focus on the representation of such self-knowledge, and on how a system reasons about this knowledge, rather than on the particular performance task that forms the context of such reasoning. Second, we plan to attract a well-known invited speaker to address the symposium participants directly on the matter of reuse. Finally, we propose a set of panel discussions in which participants articulate the sharable content of their systems' knowledge bases.
Thus, participants would be encouraged to focus on the problems of reuse and sharability when discussing general issues relating to the representation of mental behavior. Symposium participants will be invited to discuss a variety of relevant issues:

1. There is clearly a strong relationship between the reasoning agents do about their own mental processes and their reasoning about the mental processes of others. How can knowledge of another's mental characteristics be used to cooperate more effectively with (or deceive) that agent? In what circumstances can self-knowledge be applied to reasoning about others?

2. In general, which aspects of cognition will be idiosyncratic and which will tend to be shared between individuals?

3. An agent must sometimes use a model of the mechanisms of mental activity to explain reasoning errors and capacity limitations. When must an agent use a scientific model rather than a naive psychological model? How much detail about the mechanisms of mental action is necessary to advise another agent about mental strategies? How much detail is needed to adapt reasoning mechanisms?

4. Agents can reason about mental behavior at different levels of abstraction. By most accounts, three levels of abstraction exist: the implementation level (neurons or program statements), the design level (functional decomposition of decision-making mechanisms) and the intentional level (beliefs and goals). How much knowledge of each type must an agent have to reason effectively? How can an agent decide the most appropriate level of abstraction at which to reason in a given situation?

5. What are the tradeoffs between interagent communication and the computation involved in reasoning about the states and mental processes of other agents?

The goals of this symposium are therefore in tune with the goals of the ARPA knowledge-sharing program.
Currently, no effort within the knowledge-sharing community addresses representations of cognitive mechanisms and cognitive states. We hope to collaborate with the knowledge-sharing effort at Stanford, where the database for that effort is located. We intend to further this direction by examining the common features of representations that support self-knowledge.

Evidence of existing interest:

Much research has been conducted independently in areas related to our topic. A number of researchers have developed self-modeling systems that make use of explicit representations of mental phenomena to exploit constraints in performance tasks (e.g., RAPTER, Freed & Collins, 1994; Meta-AQUA, Ram & Cox, 1994; and CASTLE, Krulwich, 1993), to aid human learners (BRM, Self, 1992), and to facilitate collaborative problem solving (AOPL, Shoham, 1993; BBRL, Mason, 1994). Additionally, the logic community in AI has worked on encapsulating mental situations in formal contexts (including the context of a person's state of mind); mental states provide an outer context, and reasoning about one's own thoughts involves transcending the outer context (e.g., Guha, 1991; McCarthy, 1993).

Recent interest in explicit representations of cognition has been evident from workshops and conferences on or relating to the subject in Japan, Europe, and the United States. In 1992, the IMSA-92 Workshop on Reflection and Metalevel Architectures (Tokyo) brought together numerous researchers sharing an interest in systems that split reasoning into an object (domain) level and a meta-level that explicitly represents the reasoning at the object level. Many papers were also presented on explicit representations of mental phenomena. The European research community has demonstrated extensive interest in the subject as well; see, for example, the number of articles devoted to related topics at ECAI-92 and the ECML-93 Workshop on Integrated Learning Architectures.
Similar interest was also demonstrated at the 1994 AAAI Spring Symposium on Goal-Driven Learning, where many of the participants were not only interested in the relationship between the deliberate pursuit of goals and learning, but also integrated some type of metaknowledge into their systems. Furthermore, at the Sixteenth Annual Conference of the Cognitive Science Society this summer, a number of the papers to be presented concern introspective reasoners, including both computational and psychological models. Additionally, the Workshop on Agent Theories, Architectures and Languages, to be held in conjunction with ECAI-94 this year in Amsterdam, will focus on related topics including agent languages (which allow representation of desires, beliefs, goals, agent specifications, and logical formulations of agents), and thus relates directly to the explicit representation of mental states and processes in the context of Distributed AI. Yet none of the above-mentioned forums concentrates directly on the representation of self-knowledge and the modeling of mental mechanisms, nor has there been an effort to consolidate these representations across systems.

ORGANIZING COMMITTEE:

Gregg Collins
Institute of the Learning Sciences
Northwestern University
1890 Maple Av.
Evanston, IL 60201-3142
collins@ils.nwu.edu

Michael T. Cox (co-chair)
AI / Cognitive Science Group
College of Computing
Georgia Institute of Technology
Atlanta, GA 30332-0280
cox@cc.gatech.edu

Michael Freed (co-chair)
Human Factors Group
NASA Ames Research Center
Moffett Field, CA 94035
freed@picasso.arc.nasa.gov

Bruce Krulwich
Center for Strategic Technology Research
Andersen Consulting
100 S. Wacker Drive
Chicago, IL 60606
krulwich@andersen.com

Cindy Mason
Artificial Intelligence Group
NASA Ames Research Center
Moffett Field, CA 94035
mason@ptolemy-ethernet.arc.nasa.gov

John McCarthy
Department of Computer Science
Stanford University
Stanford, CA 94305
jmc@cs.stanford.edu

John Self
Department of Computing
University of Lancaster
Lancaster, LA1 4YR, UK
jas@comp.lancs.ac.uk

Our committee includes seven researchers drawn from academic, government and industry research labs, representing several AI subfields including: machine learning (Cox, Freed), planning (Collins), software agents (Krulwich), distributed problem solving (Mason), logic (McCarthy), and intelligent learning environments (Self).

REFERENCES:

Dennett, D. (1987). The intentional stance. Cambridge, MA: MIT Press/Bradford Books.

Freed, M., & Collins, G. (1994). Adapting routines to improve task coordination. In K. Hammond (Ed.), Proceedings of the Second International Conference on Artificial Intelligence Planning Systems (pp. 255-260). San Mateo, CA: AAAI Press.

Guha, R. V. (1991). Contexts: A formalization and some applications. Unpublished doctoral dissertation, Stanford University.

Hayes, P. (1985). The second naive physics manifesto. In J. R. Hobbs & R. C. Moore (Eds.), Formal theories of the commonsense world (pp. xi-xxii). Norwood, NJ: Ablex.

Krulwich, B. (1993). Flexible learning in a multicomponent planning system. Unpublished doctoral dissertation, The Institute of the Learning Sciences, Northwestern University (Tech. Rep. No. 46).

Marr, D. (1982). Vision. San Francisco: W. H. Freeman.

Mason, C. (1994). ROO: A DAI toolkit for building cooperative assumption-based reasoning systems. In Proceedings of the Third International Conference on Cooperative Knowledge Based Systems, Keele, England.

McCarthy, J. (1993). Notes on formalizing context. In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence. San Francisco: Morgan Kaufmann.

Newell, A. (1981). The knowledge level. AI Magazine, 2(2): 1-20.

Ram, A., & Cox, M. T. (1994). Introspective reasoning using meta-explanations for multistrategy learning. In R. S. Michalski & G. Tecuci (Eds.), Machine learning: A multistrategy approach IV (pp. 349-377). San Mateo, CA: Morgan Kaufmann.

Self, J. (1992). BRM - A framework for addressing metacognitive issues in intelligent learning environments. In J. W. Brahan & G. E. Lasker (Eds.), Proceedings of the Sixth International Conference on Systems Research, Informatics and Cybernetics (Vol. 2, pp. 85-90). Windsor, Ontario: The International Institute for Advanced Studies in Systems Research and Cybernetics.

Shoham, Y. (1993). Agent-oriented programming. Artificial Intelligence Journal, 60: 51-92.