[AISWorld] HICSS-58 CFP: Special Track on AI and Digital Discrimination
Kuruzovich, Jason Nicholas
KURUZJ at rpi.edu
Sun May 5 20:30:59 EDT 2024
HICSS-58 CFP: Special Track on AI and Digital Discrimination
HICSS-58 will feature a special track on AI and Digital Discrimination, January 7-10, 2025, Big Island, Hawaii, USA. Submissions are due June 15, 2024.
AI and Digital Discrimination
This minitrack presents research on understanding and addressing the discrimination problems that arise in the design, deployment, and use of artificial intelligence (AI) systems.
Digital discrimination refers to discrimination against individuals or social groups stemming from unequal access to Internet-based resources, from biased data-mining practices, or from prejudices inherited in a decision-making context. A technology is biased if it unfairly or systematically discriminates against certain individuals by denying them an opportunity or assigning them a different and undesirable outcome. As we delegate more and more decision-making tasks to autonomous computer systems and algorithms, such as using AI for employee hiring and loan approval, digital discrimination is becoming a serious problem.
AI decision making can cause discriminatory harm to many vulnerable groups. In a decision-making context, digital discrimination can emerge from the inherited prejudices of prior decision makers, designers, and engineers, or reflect widespread societal biases. One approach to addressing digital discrimination is to increase the transparency of AI systems. However, we need to be mindful of the user populations for whom transparency is being implemented. In this regard, research has called for collaborations with disadvantaged groups whose viewpoints may lead to new insights into fairness and discrimination.
Ethical concerns also arise in the use of AI built on large language models (LLMs), such as ChatGPT, the AI chatbot released in November 2022 by the startup OpenAI that reached 100 million monthly active users just two months after launch. Professor Christian Terwiesch at Wharton found that ChatGPT would pass the final exam of a typical Wharton MBA core curriculum class, which sparked a national conversation about the ethical implications of using AI in education. While some educators and academics have sounded the alarm over the potential abuse of ChatGPT for cheating and plagiarism, practitioners from the legal industry to the travel industry are experimenting with ChatGPT and debating the technology's impact on business and the future of work. In essence, a large language model is a deep learning model trained on large volumes of text. Bias in that data can lead to new instances of digital discrimination, especially as various LLM-based models (e.g., DALL-E, Make-A-Video) are trained on data from different modalities (e.g., images, videos). Furthermore, the lack of oversight and regulation can also prove problematic. Given the rapid development and penetration of AI chatbots, it is important for us to investigate the boundaries between ethical and unethical uses of AI, as well as potential digital discrimination in the use of LLM applications.
Addressing the problem of digital discrimination in AI requires a cross-disciplinary effort. For example, researchers have outlined social, legal, and ethical perspectives on digital discrimination in AI. In particular, prior research has drawn attention to three key aspects: how discrimination arises in AI systems, how the design of AI systems can mitigate such discrimination, and whether our existing laws are adequate to address discrimination in AI.
This minitrack welcomes papers in all formats (empirical studies, design research, theoretical frameworks, case studies, etc.) from scholars across disciplines such as information systems, computer science, library science, sociology, and law. Potential topics include, but are not limited to:
AI-based Assistants: Opportunities and Threats
AI Explainability and Digital Discrimination
AI Literacy of Users
AI Systems Design and Digital Discrimination
AI Use Experience of Disadvantaged / Marginalized Groups
Biases in AI Development and Use
Digital Discrimination in Online Marketplaces
Digital Discrimination and the Sharing Economy
Digital Discrimination with Various AI Systems (LLM-based AI, AI assistants, etc.)
Effects of Digital Discrimination in AI Contexts
Ethical Use, Challenges, Considerations, and Applications of AI Systems
Generative AI (e.g., ChatGPT) Use and Ethical Implications
Organizational Perspective of Digital Discrimination
Responsible AI Practices to Minimize Digital Discrimination
Responsible AI Use Guidelines and Policy
Societal Values and Needs in AI Development and Use
Sensitive Data and AI Algorithms
Social Perspective of Digital Discrimination
Trusted AI Applications and Digital Discrimination
User Experience and Digital Discrimination
Minitrack Co-Chairs:
Sara Moussawi (Primary Contact)
Carnegie Mellon University
smoussaw at andrew.cmu.edu
Jason Kuruzovich
Rensselaer Polytechnic Institute
kuruzj at rpi.edu
Minoo Modaresnezhad
University of North Carolina Wilmington
modaresm at uncw.edu
Xuefei Nancy Deng
California State University, Dominguez Hills
ndeng at csudh.edu