[AISWorld] CFP: HICSS 57 Mini-track - Conversational AI and Ethical Issues

danjongkim at gmail.com
Fri Mar 3 01:10:39 EST 2023


We cordially invite you to submit your work to this mini-track

 

Track: Collaboration Systems and Technologies

Minitrack: Conversational AI and Ethical Issues

Submission Deadline: June 15, 2023

Description: 

Conversational AI (CA) is becoming an important means for human-computer
collaboration. CA uses massive datasets and Artificial Intelligence (AI)
techniques, such as Natural Language Processing (NLP), Machine Learning (ML)
and Deep Learning (DL), to mimic human interaction. By interpreting the
meaning of voice and text input, CA performs a wide range of functions, from
assisting users in searching web resources and summarizing and translating
text to answering challenging questions based on knowledge obtained from
large volumes of data. Such CA not only increases the usability of
computer-based services but also brings a radical change to human-computer
collaboration. However, despite its exponentially increasing adoption across
industries, CA suffers from a variety of technical limitations, calling for
more scholarly attention to address them. Further, with its rapid adoption,
our society is facing numerous ethical issues that need to be addressed to
ensure that CA is safe, secure, trustworthy, and ethically appropriate in
various contexts. 

 

The minitrack on Conversational AI and Ethical Issues is organized in an
effort to draw attention to a wide variety of issues relevant to CA and to
encourage more intensive research on this emergent topic. This minitrack
will serve as an interactive forum for researchers to discuss the critical
issues of CA, explore possible means to realize the next generation of
human-computer collaboration, and contribute to the growing AI focus at
HICSS. This minitrack welcomes theoretical and empirical research addressing
a variety of technical, social, and ethical issues relevant to the complex
and multifaceted challenges of Conversational AI systems. The topics relevant
to this minitrack include, but are not limited to:

Generative AI: Generative AI has been used to create various types of new
content, including text, images, audio, video and synthetic data. The
release of powerful generative models, such as ChatGPT, has accelerated the
adoption of generative AI, calling for more research to address its various
issues, including design methods, development challenges, innovative use
cases, and ethical concerns.

Natural Language Processing and Text Analytics: A wide variety of ML/DL
methods along with NLP have been used to analyze voice and text in
conversation. Nonetheless, existing approaches suffer from technical
limitations, calling for more research to advance the state of the art in
voice/text analytics.

Transparency and Explainability: Conversational AI systems can be difficult
to understand or explain, making it harder for people to trust them. This
has led to a growing need for research on methods to increase the
transparency and explainability of AI systems (Samek et al., 2019). 

Bias and Fairness: Conversational AI systems can perpetuate and amplify
biases present in the data used to train them, which can lead to unfair and
discriminatory outcomes. This has led to a growing need for research on
methods to increase fairness and reduce bias in Conversational AI systems
(Daugherty et al., 2019).

Autonomy: As Conversational AI systems become more advanced, there are
concerns about their potential to make decisions without human oversight or
control. This has led to a growing need for research on methods to ensure
the safety and accountability of autonomous AI systems (Baum, 2020). 

Privacy and Data Breaches: The use of Conversational AI can raise concerns
about the collection, storage, and processing of large amounts of sensitive
personal data, which makes these systems a target for data breaches and other
forms of cybercrime (Osenl et al., 2021). Methods and implications for
protecting individuals' privacy and preventing data breaches resulting from
the misuse of AI systems need to be studied.

Security and Vulnerabilities: Conversational AI systems can be vulnerable to
adversarial attacks (e.g., hacking, malware, and other forms of cyber
attack), which can compromise their security and the security of the systems
and networks to which they are connected. Attackers can also manipulate input
data or use other techniques to trick the system into making incorrect
decisions. This has led to growing concern about the confidentiality and
integrity of Conversational AI systems, calling for research on methods to
increase their robustness and security against adversarial attacks (Tariq et
al., 2020).

Infringement of Copyrights and Intellectual Property Rights: Conversational
AI systems can be used to create and distribute unauthorized copies of
copyrighted and trademarked material, making it difficult to enforce and
protect such rights (Craig, 2022). In addition, the models underlying
Conversational AI systems, which are trained on large amounts of data and are
valuable assets in their own right, can be stolen or replicated (Oliynyk et
al., 2022).

Weaponization: AI systems are increasingly being used in weapon systems,
which raises ethical questions about the use of Conversational AI in warfare
and the possibility that it could be used to create autonomous weapons
(Duberry, 2022).

IMPORTANT DATES

  - June 15: Paper submission deadline

  - August 17: Notification of decision

  - September 22: Deadline for authors to submit final manuscript for
publication

 

Conference Website:  http://hicss.hawaii.edu/ 

Author Guidelines:  http://hicss.hawaii.edu/tracks-and-minitracks/authors/

 

Dan J. Kim (Primary)

University of North Texas

dan.kim at unt.edu

 

Victoria Yoon

Virginia Commonwealth University

vyyoon at vcu.edu 

 

Kiseol Yang

University of North Texas

kiseol.yang at unt.edu

 

Manoj Thomas

University of Sydney

manoj.thomas at sydney.edu.au 



