[AISWorld] [ExaCt 2012] Explanation-aware Computing, ECAI 2012 workshop, Call for papers

Thomas Roth-Berghofer Thomas at Roth-Berghofer.de
Mon Mar 26 04:55:46 EDT 2012


** Apologies if you receive multiple copies of this announcement **
      ** Please forward to anyone who might be interested **

                     FIRST CALL FOR PAPERS
                 for the 7th International and
                      ECAI 2012 Workshop on

            EXPLANATION-AWARE COMPUTING (ExaCt 2012)
    One-Day Workshop, 27 or 28 August 2012, Montpellier, France
                  http://exact2012.workshop.hm

             ** Submission deadline: May 28, 2012 **

When knowledge-based systems are partners in interactive socio-
technical processes, with incomplete and changing problem descriptions, 
effective communication between humans and software systems is vital.
Explanations exchanged between human agents and software agents may 
play a key role in such mixed-initiative problem solving. For 
example, explanations may increase the confidence of the user in
specific results or in the system as a whole, by providing evidence of 
how the results were derived. AI research has also focused on 
how computer systems can themselves use explanations, for example to 
guide learning.

Explanation-awareness in computing system development aims to make 
systems able to interact more effectively or naturally with their 
users, or better able to understand and exploit knowledge about their 
own processing. Systems intended to exhibit explanation-awareness must 
be more than simple reactive systems. Using the word 'awareness' in 
conjunction with the word 'explanation' implies some consciousness 
about explanation and the ability to reason about explanations at the 
knowledge level.

Thinking of the Web not only as a collection of web pages, but as 
providing a Web of experiences exchanged by people on many platforms, 
gives rise to new challenges and opportunities to leverage experiential 
knowledge in explanation.  For example, records of experiences on the 
Web and interrelationships between experiences may provide provenance 
and meta-data for explanations and can provide examples to help instil 
confidence in computing systems. The interplay of provenance information 
with areas such as trust and reputation, reasoning and meta-reasoning, 
and explanation is known, but not yet well exploited.

Outside of artificial intelligence, disciplines such as cognitive 
science, linguistics, philosophy of science, psychology, and education 
have investigated explanation as well. They consider varying aspects, 
making it clear that there are many different views of the nature of 
explanation and facets of explanation to explore. Two relevant examples 
of these are open learner models in education, and dialogue management 
and planning in natural language generation.

The ExaCt workshop series aims to draw on these multiple perspectives on 
explanation, to examine how explanation can be applied to further the 
development of robust and dependable systems, and to increase transparency, 
user sense of control, trust, acceptance, and decision support.


GOALS AND AUDIENCE
The main goal of the workshop is to bring together researchers and 
scientists from both industry and academia, as well as representatives 
from different communities and areas such as those mentioned above, to 
study, understand, and explore explanation in AI applications. In 
addition to presentations and discussions of invited contributions and 
invited talks, the workshop will offer organised and open spaces for 
targeted discussions and for building an interdisciplinary community. 
Demonstration sessions will provide the opportunity to showcase 
explanation-enabled and explanation-aware applications.


TOPICS OF INTEREST
Suggested topics for contributions (not restricted to IT views):
* Models and knowledge representations for explanations
* Integrating application and explanation knowledge
* Explanation-awareness in (designing) applications
* Methodologies for developing explanation-aware systems
* Explanations and learning
* Context-aware explanation vs. explanation-aware context
* Confidence and explanations
* Privacy, trust, and explanation
* Provenance and metareasoning
* Empirical studies of explanations
* Requirements and needs for explanations to support human understanding
* Explanation of complex, autonomous systems
* Co-operative explanation
* Visualising explanations
* Dialogue management and natural language generation

Submissions on additional topics are very welcome.


SUBMISSIONS AND STYLE
Workshop submissions will be electronic, in pdf format only, using
the EasyChair submission system linked from the workshop website.

Papers must be written in English and not exceed 5 pages in the
ECAI format. At least one author of each accepted paper must register 
for the workshop and the ECAI conference and present the contribution 
in order for it to be published in the workshop proceedings. The 
organising committee is considering editing a special issue of an 
appropriate international journal, depending on the number and quality 
of the submissions.

Those wishing to participate without a paper submission should 
submit a brief synopsis of their relevant work or a brief statement 
of interest.

The workshop proceedings will be published online on the ECAI website
and as CEUR workshop proceedings (http://ceur-ws.org).

If you have questions please contact the chairs using the following
email address: chairs at exact2012.workshop.hm.


IMPORTANT DATES
Submission deadline:                 May 28, 2012
Notification of acceptance:          June 28, 2012
Camera-ready versions of papers:     July 13, 2012
ExaCt Workshop:                      August 27/28, 2012


WORKSHOP SCHEDULE
The schedule will be made available on the workshop website. See the
workshop website for an agenda overview and links to past workshops.


CHAIRS
Thomas Roth-Berghofer, School of Computing and Technology, 
University of West London, United Kingdom
thomas.roth-berghofer (at) uwl ac uk

David B. Leake, School of Informatics and Computing, 
Indiana University, USA
leake (at) cs indiana edu

Jörg Cassens, Institute for Multimedia and Interactive 
Systems (IMIS), University of Lübeck, Germany
cassens (at) imis uni-luebeck de


PROGRAMME COMMITTEE
Agnar Aamodt, Norwegian University of Science and Technology (NTNU)
David W. Aha, Navy Center for Applied Research in AI, Washington DC, USA
Martin Atzmüller, University of Kassel, Germany
Ivan Bratko, University of Ljubljana, Slovenia
Patrick Brézillon, LIP6, France
Ashok Goel, Georgia Institute of Technology, Atlanta, GA, USA 
Pierre Grenon, KMI, The Open University, UK 
Anders Kofod-Petersen, SINTEF, Norway
Hector Muñoz-Avila, Lehigh University, USA
Miltos Petridis, University of Brighton, UK
Enric Plaza, IIIA-CSIC, Spain 
Christophe Roche, University of Savoie, France
Olga Santos, Spanish National University for Distance Education
Gheorghe Tecuci, George Mason University, Fairfax, VA, USA
Douglas Walton, University of Windsor, Canada

THE YAHOO EXPLANATION GROUP

If you would like to participate in discussions on the topic of
explanation, or to receive further information about this workshop,
you might consider joining the Yahoo! group
  http://groups.yahoo.com/group/explanation-research. 
Information on explanation research is also collected here:
  http://on-explanation.net.