[AISWorld] Deadline Approaching BISE Special Issue on "Generating Tomorrow's Me: How Collaborating with Generative AI Changes Humans"

Jussupow, Ekaterina ekaterina.jussupow at tu-darmstadt.de
Fri Sep 20 02:44:16 EDT 2024


Dear colleagues,

Please note the approaching deadline for the following call for papers at Business & Information Systems Engineering (BISE). *** Submission Deadline: 15.10.2024 *** https://www.bise-journal.com/?p=2174
We look forward to all submissions!

*** Topic: Generating Tomorrow’s Me: How Collaborating with Generative AI Changes Humans ***

Motivation:
Generative artificial intelligence (GenAI) is artificial intelligence that uses generative models to create text, images, or other data (e.g., Banh and Strobel 2023; Feuerriegel et al. 2024). GenAI learns the patterns and structure of its provided training data and then, typically in response to textual inputs (i.e., prompts), generates new, synthetic data with similar characteristics. The recent boom around GenAI specifically emerged at the beginning of the 2020s with the rise of large language models in the form of chatbots, such as ChatGPT, Copilot, and Bard, and text-to-image transformers, such as Stable Diffusion, Midjourney, and DALL-E. Given the various applications across a wide range of industries and use cases, companies such as OpenAI, DeepL, Microsoft, Google, and Baidu have developed their own GenAI and further accelerated the development and dissemination of GenAI (e.g., Teubner et al. 2023). The predicted impact of GenAI is enormous. It is expected that GenAI will generate over 600 billion dollars in revenue by 2030 (Fortune 2023), affecting up to 80% of current jobs (Eloundou et al. 2023).
Human-GenAI collaboration studies how humans and GenAI agents work together to accomplish a human-desired goal (e.g., Anthony et al. 2023; Baptista et al. 2020; Jarvenpaa and Klein 2024). GenAI can aid humans in various domains, ranging from decision-making tasks through idea generation and innovation to art creation (e.g., Benbya et al. 2024). GenAI, in collaboration with humans, generates output such as code, text, images, or videos in response to prompts. Humans can then use this output to elevate their capabilities and improve desired outcomes, i.e., they can become more productive in creative (Zhou and Lee 2023) or coding tasks (Peng et al. 2023). However, such collaboration may also lead to problematic and unclear consequences, such as a decrease in overall creativity (Zhou and Lee 2023) or the dissemination of misinformation across organizations and digital platforms (e.g., Sabherwal and Grover 2024; Susarla et al. 2023; Wessel et al. 2023).
This call for papers focuses on one of these critical consequences, namely how humans will change due to their collaborations with GenAI. These changes can apply to various objects of analysis, such as human cognitive processes, perceptions, emotions, beliefs, and behaviors toward GenAI systems or toward other humans. How individuals learn, adapt, and influence others through AI collaboration has gained recognition in existing research on human collaboration with non-generative, predictive AI. Examples of domains are medicine (e.g., Jussupow et al. 2021; Jussupow et al. 2022), sales (e.g., Adam et al. 2021; Adam et al. 2023; Gnewuch et al. 2023), system development (e.g., Adam et al. 2024), and non-specialized image classification (e.g., Fügener et al. 2021; Fügener et al. 2022). In this vein, previous studies have discussed, for instance, that people change their own beliefs through processing the explanations of AI (e.g., Bauer et al. 2021; Bauer et al. 2023), adapt their behavior in response to observing AI predictions about themselves (e.g., Bauer and Gill 2023), become more selfish in their interactions with AI systems than in their interactions with humans (March 2021), or develop more negative attitudes towards algorithmic versus human errors (e.g., Burton et al. 2020; Berger et al. 2021; Jussupow et al. 2020). Yet, dedicated studies on how GenAI – and its particularities – affect humans collaborating with it are only beginning to emerge.

Focus and Possible Topics:
The focus of this call for papers is to stimulate innovative research on how humans change due to their collaborations with GenAI. While the within-individual changes (e.g., regarding cognitive processes, perceptions, emotions, beliefs, and behaviors) are of primary interest, we also invite submissions at the group or organizational levels with reference to the individual level. Human-GenAI collaborations in either professional or private contexts should be the research setting.
Papers that address GenAI alone, without attention to collaborations with humans or to human changes, are outside the scope of this call for papers. Further, papers that focus only on human perceptions or consequences of collaborations with GenAI (e.g., user satisfaction, acceptance, or performance changes due to collaborating with GenAI) without a deeper investigation of human changes are also outside the scope of this CfP.

Possible research areas include, but are not limited to:

  *   Creativity and Innovation: Changes in the creative processes of humans through the automatic creation or curation of text and images through GenAI
  *   Communication and Personalization: Changes in the communication styles of humans to the GenAI (e.g., prompts, politeness of their expressions)
  *   Errors and Biases: Humans adopting errors and biased worldviews due to misinformation (e.g., hallucinations) or over-reliance on GenAI outputs
  *   Learning and Competencies: Erosion and elevation of human skills due to the capabilities of GenAI
  *   Aversion and Appreciation: Changing relationships with other humans or technologies due to collaborations with GenAI
  *   Affordances and Possibilities: Humans collaborating with GenAI in predictable and unpredictable ways
  *   Humanistic Outcomes: Increases and decreases in the psychological well-being of humans through the workings of GenAI
  *   Ethics: Corrupting and purging the ethical views and practices of humans due to collaborations with GenAI (e.g., engaging in plagiarism, checking and spreading GenAI-generated misinformation and deep fakes)

Methods:
We welcome various research approaches, including, but not limited to:

  *   Conceptual/theoretical articles (also formal models and simulations)
  *   Qualitative studies (e.g., interviews and case studies)
  *   Quantitative studies (e.g., surveys, lab and field experiments, and trace data)
  *   Design science (e.g., GenAI artifacts implemented in collaboration with humans)
  *   Combinations of these approaches (i.e., multi- and mixed-methods)

Timeline:
All papers must be submitted by 15 October 2024 at the latest via the journal’s online submission system (http://www.editorialmanager.com/buis/). Please observe the instructions regarding the format and size of submissions to BISE. Papers should adhere to the general BISE author guidelines (https://www.bise-journal.com/?page_id=18).
Submissions will be reviewed anonymously in a double-blind process by at least two referees with regard to relevance, originality, and research quality. In addition to the editors of the journal, distinguished international scholars will be involved in the review process.
Given the timeliness and importance of this topic, we aim to publish meaningful contributions after fast and limited decision cycles. The editorial timeline will proceed as follows:

  *   Deadline for Submission: 15 Oct 2024
  *   Notification of the Authors, 1st Round: 07 Jan 2025
  *   Completion Revision 1: 15 Mar 2025
  *   Notification of the Authors, 2nd Round: 15 May 2025
  *   Completion Revision 2: 15 Jun 2025
  *   Notification of the Authors, Final Round: 30 Jun 2025
  *   Online Publication: as soon as possible
  *   Print Publication: October 2025

Editors of the Special Issue:
Martin Adam,
University of Goettingen, Germany
martin.adam at uni-goettingen.de (corresponding)
Kevin Bauer,
University of Mannheim, Germany
kevin.bauer at uni-mannheim.de
Ekaterina Jussupow,
Darmstadt University of Technology, Germany
ekaterina.jussupow at tu-darmstadt.de
Alexander Benlian,
Darmstadt University of Technology, Germany
benlian at ise.tu-darmstadt.de
Mari-Klara Stein,
Tallinn University of Technology, Estonia
mari-klara.stein at taltech.ee



Prof. Dr. Ekaterina Jussupow
Assistant Professor Information Systems

Technical University Darmstadt
Department of Law and Economics
Chair of Information Systems
S1|03 195
Hochschulstr. 1
64289 Darmstadt
Germany
