[AISWorld] CFP: Explainable Artificial Intelligence (XAI) minitrack at HICSS 58

Babak Abedin babak.abedin at gmail.com
Mon May 13 08:17:02 EDT 2024


Dear colleagues,



We are happy to introduce the 5th minitrack on “*Explainable Artificial
Intelligence (XAI)*” at *HICSS 58* (submission deadline: *June 15th*,
2024). We offer the opportunity for an extended version of the best paper
of this minitrack to be fast-tracked to the journal Information Systems
Management (ISM).



If you have any questions, please do not hesitate to contact us.



Best regards,

Christian Meske, Babak Abedin, Maximilian Förster, Yang Song





************************************************************************

*Call for Papers: “Explainable Artificial Intelligence (XAI)” Minitrack at
the 58th Hawaii International Conference on System Sciences (HICSS)*

************************************************************************

The use of Artificial Intelligence (AI) in the context of decision
analytics and service science has received significant attention in
academia and practice alike. The rapid dissemination of generative AI,
particularly large language models, has contributed to the urgency of
addressing the question of how AI will and should influence the future of
work and daily life. One central challenge in the use of AI systems is
their complexity. Many AI systems remain “black boxes” that are difficult
to comprehend – not only for developers, but particularly for users and
decision makers. In addition, the development and use of AI are associated
with many risks and pitfalls, such as biases in data or predictions based
on spurious correlations (“Clever Hans” phenomena), which may ultimately
lead to malfunctioning or biased AI and hence to technology-driven
discrimination.

This is where research on Explainable Artificial Intelligence (XAI) comes
in. Also referred to as “transparent,” “interpretable,” or “understandable”
AI, XAI aims to produce explainable AI systems while maintaining a high
level of learning performance (prediction accuracy), thereby empowering
human stakeholders to understand, appropriately trust, and effectively
manage the emerging generation of intelligent systems (Arrieta et al.
2020). XAI hence refers to “the movement, initiatives, and efforts made in
response to AI transparency and trust concerns, more than to a formal
technical concept” (Adadi and Berrada 2018, p. 52140). One key challenge of
XAI is to provide explanations that are meaningful to humans and that
effectively shape human-AI interaction, for example by improving users’
task performance.

With a focus on decision support, this minitrack aims to explore and extend
research on how to establish explainability of intelligent black-box
systems, whether they are generative or predictive, machine learning-based
or not. We especially look for contributions that investigate XAI from
users’, developers’, or governments’ perspectives. We invite submissions
from all application domains, such as healthcare, finance, e-commerce,
retail, public administration, and others. Technical and method-oriented
studies, case studies, and design science or behavioral science approaches
are all welcome.

Topics of interest include, but are not limited to:

·         The users’ perspective on XAI

o   Organizational implications of XAI

o   Theorizing XAI-human interactions

o   Presentation and personalization of AI explanations for different
target groups

o   XAI to increase situational awareness, compliance behavior, and task
performance

o   XAI for transparency and unbiased decision making

o   XAI to foster reflections and learning

o   Explainability of AI in crisis situations

o   Explainability of generative AI

o   Potential harm of explainability in AI

o   Mental models and cognitive biases associated with explainability of AI



·         The developers’ perspective on XAI

o   XAI to open, control, and evaluate black-box algorithms

o   Using XAI to identify bias in data and algorithms

o   Explainability and Human-in-the-Loop development of AI

o   XAI to support interactive machine learning

o   Prevention and detection of deceptive AI explanations

o   XAI to discover deep knowledge and learn from AI

o   Designing and deploying XAI systems

o   Neuro-symbolic learning for XAI



·         The organizations’ and governments’ perspective on XAI

o   XAI and compliance

o   XAI and AI governance

o   Explainability and AI policy guidelines such as AI Acts

o   Evidence-based benefits and challenges of XAI expectations and
implementations

o   Ethical AI and GenAI frameworks and regulatory expectations





*Submission Deadline:*

June 15th, 2024

Further information for authors: https://hicss.hawaii.edu/authors/



*Fast track:*

We offer the opportunity for an extended version of the best paper of this
minitrack to be fast-tracked to the journal Information Systems Management
(ISM).



*Minitrack Co-Chairs:*

Christian Meske

Ruhr-Universität Bochum



Babak Abedin

Macquarie University



Maximilian Förster

University of Ulm



Yang Song

University of New South Wales


