[AISWorld] Electronic Markets: CfP special issue on "Explainable and responsible artificial intelligence" - May 31 2022
Babak Abedin
babak.abedin at gmail.com
Wed Apr 27 17:13:17 EDT 2022
*Guest editors*
- Christian Meske, Ruhr-Universität Bochum, Germany,
christian.meske at rub.de
- Babak Abedin, Macquarie University, Australia, babak.abedin at mq.edu.au
- Mathias Klier, University of Ulm, Germany, mathias.klier at uni-ulm.de
- Fethi Rabhi, University of New South Wales, Australia,
f.rabhi at unsw.edu.au
*Theme*
Today’s algorithms have already reached or even surpassed the task performance
of humans in various domains. In particular, AI plays a central role in the
interaction between organizations and individuals such as their customers,
transforming, for instance, electronic commerce and customer relationship
management. However, most AI systems are still “black boxes” that are
difficult to comprehend, not only for developers but also for consumers and
decision-makers (Meske, Bunde, Schneider and Gersch 2020). With regard to
electronic markets, problems such as managing the risk and ensuring the
regulatory compliance of machine learning-based electronic trading systems
stem not only from their data-driven nature and technical complexity, but
also from their black-box nature, where the “learning” creates
non-transparent dependencies between inputs and outputs (Cliff and Treleaven
2010). This raises many challenges, such as ensuring data quality, managing
the provenance information needed for transparency, and organizing metadata
when combining data from multiple sources (Rabhi, Mehandjiev and Baghdadi
2020). Thus, responsible and more trustworthy AI is called for (HLEG-AI 2019;
Thiebes, Lins and Sunyaev 2020).
This is where research on Explainable Artificial Intelligence (XAI) comes
in. Also referred to as “interpretable”, “responsible”, or “understandable
AI”, XAI aims to “produce explainable models, while maintaining a high
level of learning performance (prediction accuracy); and enable human users
to understand, appropriately trust, and effectively manage the emerging
generation of artificially intelligent partners” (DARPA 2017). XAI hence
refers to “the movement, initiatives, and efforts made in response to AI
transparency and trust concerns, more than to a formal technical concept”
(Adadi and Berrada 2018, p. 52140). XAI is designed to be user-centric in
that it empowers users to scrutinize AI (Förster, Klier, Kluge and Sigler
2020). Overall, XAI helps users to evaluate, improve, learn from, and
justify AI, and ultimately to manage it (Meske, Bunde, Schneider and Gersch
2020).
With a focus on the transformation of electronic markets, this special
issue intends to explore and extend research on how to establish
explainability and responsibility in intelligent black-box systems, whether
machine learning-based or not. To that end, we invite researchers to submit
papers from all application domains, such as e-commerce, customer
relationship management, healthcare, finance, retail, public administration,
and others.
*Central issues and topics*
This special issue of the Electronic Markets journal will focus on new,
innovative approaches to explainable and responsible AI systems that change
and improve the interaction between organizations and individuals.
Submissions should discuss how their approaches and solutions enable
enhanced information exchange, decision-making, and service science.
Technical and method-oriented studies, case studies, as well as design
science and behavioral science approaches are all welcome.
This special issue is intended not only for academics and researchers but
also for executives, managers, innovators, and project leaders who would
like to implement explainable and responsible AI systems.
The (non-exclusive) list of topics includes:
- Designing and deploying XAI systems in electronic markets
- XAI to foster trust in AI-based buyer-seller interactions (e.g.,
chatbots, recommender systems)
- Addressing user-centric requirements for XAI systems
- Addressing the responsibility of AI systems
- Explainability as a prerequisite for responsible AI systems
- Impact of explainability on AI-based digital platform use and adoption
- Prevention and detection of deceptive AI explanations
- XAI to discover deep knowledge and learn from AI
- Presentation and personalization of AI explanations for different
target groups
- XAI to increase situational awareness and compliance behavior
- XAI for transparency and unbiased decision-making
- Potential harm of explainability in AI
- Explainability and responsibility policy guidelines
- XAI and ethics
*Submission*
Electronic Markets is a Social Science Citation Index (SSCI)-listed journal
(IF 4.765 in 2020) in the area of information systems. We encourage
original contributions with a broad range of methodological approaches,
including conceptual, qualitative, and quantitative research. Please also
consider submitting position papers and case studies to this special issue. All
papers should fit the journal scope (for more information, see
www.electronicmarkets.org/about-em/scope/) and will undergo a double-blind
peer-review process. Submissions must be made via the journal’s submission
system and comply with the journal’s formatting standards. The preferred
average article length is approximately 8,000 words, excluding references.
If you would like to discuss any aspect of this special issue, you may
either contact the guest editors or the Editorial Office.