[AISWorld] CFP [Deadline Extension]: Special Issue on AI Fairness, Trust and Ethics
Lionel Robert
lprobert at umich.edu
Wed Feb 5 06:20:28 EST 2020
*AIS Transactions on HCI (THCI) <https://aisel.aisnet.org/thci/>*
*Special Issue on AI Fairness, Trust and Ethics*
*Special Issue Editors:*
Lionel P. Robert Jr., University of Michigan
Gaurav Bansal, University of Wisconsin-Green Bay
Nigel Melville, University of Michigan
Tom Stafford, Louisiana Tech University
*Submission Deadline: Full papers due March 31, 2020*
AI is rapidly changing every aspect of our society, from how we conduct
business to how we socialize and exercise. AI has amplified our productivity
as well as our biases. John Giannandrea, who leads AI at Google, recently
lamented in the MIT Technology Review that the dangers posed by the ability
of AI systems to learn human prejudices are far greater than those posed by
killer robots. This is problematic because AI systems make millions of
decisions every minute, many of which are invisible to users and
incomprehensible to designers. Their opacity is a significant cause for
worry and leaves many questions unanswered.
Fairness, Trust and Ethics are at the core of many of the issues underlying
the implications of AI. Fairness is undermined when managers rely blindly
on “objective” AI outputs to “augment” or replace their decision making.
Managers often ignore the limitations of their assumptions and the
relevance of the data used to train and test AI models, resulting in biased
decisions that are hard to detect or appeal. Trust is undercut when AI is
used to render false or misleading images of individuals saying or doing
things that are simply not true. These false images are making it difficult
for society to trust what it sees or hears. Ethical challenges arise when
decisions made by AI lead to further inequalities in society. Examples
include displaced workers and shortages of affordable housing as rental
apartments and housing units are diverted to higher-paying Airbnb
short-term vacationers.
Despite these potentially transformative effects, research on AI in the
Information Systems field is still scarce, and as a result, our knowledge
of the impacts of AI is far from conclusive. Yet it is very important, from
both business and technical perspectives, that we research and examine
issues of fairness, trust and ethics with AI. This examination is critical
because issues of fairness, trust and ethics lie at the heart of addressing
the new challenges facing the development and use of AI throughout our
society. This is especially true as AI is being applied in a rapidly
growing number of new areas. In all, AI has the potential to disrupt and
dramatically change the interactions between humans and technologies.
This Special Issue on AI Fairness, Trust and Ethics calls for research that
can unpack the potential, challenges, impacts, and theoretical implications
of AI. We welcome research from different perspectives regardless of the
approach or methodology. Submissions with novel theoretical implications
that span disciplines are strongly encouraged. We seek submissions that can
improve our understanding of the impacts of AI on organizations and our
broader society.
*Potential topics include (but are not limited to):*
- Defining fair, ethical and trustworthy AI
- Antecedents and consequences for fair, ethical and trustworthy AI
- Designing, implementing and deploying fair, ethical and trustworthy AI
- Theories of fair, ethical and trustworthy AI
- Policy and governance for fair, ethical and trustworthy AI
- Appropriate and inappropriate applications of AI
- Legal responsibilities for decisions made by AI
- AI biases
- AI algorithm transparency and how to improve it
- The dark side of AI
- AI equality vs AI equity
- Implications of unfair, unethical and untrustworthy AI
*Key Dates:*
Optional one page abstract submissions: Oct 1, 2019
Selected abstracts invited for poster presentations at Pre-ICIS 2019 SIGHCI
workshop on Dec 15, 2019
First round submissions: Feb 20, 2020 *New Deadline: Mar 31, 2020*
First round decisions: April 15, 2020
Second round submissions: July 15, 2020
Second round decisions to authors: Sep 15, 2020
Third and final round submissions: November 1, 2020
Final decisions to authors: November 15, 2020
Targeted publication date: December 31, 2020
*To submit a manuscript: *
1) Read the "Information for Authors" and "THCI Policy" pages.
2) Then go to http://mc.manuscriptcentral.com/thci.
3) Type "*AI Fairness, Trust and Ethics*" when presented with the
statement: "*If this is a submission to a special issue, please enter
its name here*."
*Contact:*
All questions about submissions should be emailed to:
AIS-THCI-AI-FTE-SI-requests at umich.edu.
*Full CFP available here <https://bit.ly/2JcrDT7>*
Best regards,
Lionel
*New Paper(s):*
*Robert, L. P.*, Alahmad, R., Zhang, Q., Kim, S., Esterwood, C., and You,
S. (2020). *A Review of Personality in Human Robot Interactions*.
*Foundations & Trends in Information Systems*, forthcoming, (pdf
<https://deepblue.lib.umich.edu/bitstream/handle/2027.42/153526/Robert%20et%20al%20Submitted%20Version%20Jan%2023%202020%20.pdf?sequence=1&isAllowed=y>),
author's copy: http://hdl.handle.net/2027.42/153526.
Du, N., Zhou, F., Pulver, E., Tilbury, D., *Robert, L. P.*, Pradhan, A. and
Yang, X. J. (2020). *Examining the Effects of Emotional Valence and Arousal
on Takeover Performance in Conditionally Automated Driving*, *Transportation
Research Part C: Emerging Technologies*, (pdf
<https://deepblue.lib.umich.edu/bitstream/handle/2027.42/152470/Du%20et%20al.%202020%20%28PrePrint%29.pdf?sequence=1&isAllowed=y>),
112, pp. 78-87, *open access link*: DOI:
https://doi.org/10.1016/j.trc.2020.01.006, author's copy:
http://hdl.handle.net/2027.42/152470 and http://arxiv.org/abs/2001.04509.
Jayaraman, S.K., Chandler, C., Tilbury, D.M., Yang, X.J., Pradhan, A.K.,
Tsui, K.M. and *Robert, L.P.* (2019). *Pedestrian Trust in Automated
Vehicles: Role of Traffic Signal and AV Driving Behavior*, *Frontiers in
Robotics and AI*, (pdf
<https://deepblue.lib.umich.edu/bitstream/handle/2027.42/151794/TRI_Frontiers_in_Robotics_and_AI_finalPublicCopyOct%2025%202019.pdf?sequence=1&isAllowed=y>),
6(117), *open access link*: DOI:10.3389/frobt.2019.00117
<https://www.frontiersin.org/articles/10.3389/frobt.2019.00117/abstract>,
author's
copy: http://hdl.handle.net/2027.42/151794.
Lionel P. Robert Jr.
Associate Professor, School of Information
<https://www.si.umich.edu/people/lionel-robert>
Core Faculty, Michigan Robotics Institute
<https://robotics.umich.edu/core-faculty/>
Affiliate Faculty, National Center for Institutional Diversity
<https://lsa.umich.edu/ncid>
Affiliate Faculty, Michigan Interactive and Social Computing
<http://misc.si.umich.edu/>
Affiliate Faculty, Center for Hybrid Intelligence Systems
<https://hyints.engin.umich.edu/>
Affiliate Faculty, IU Center for Computer-Mediated Communication
<https://ccmc.ils.indiana.edu/>
Director of MAVRIC <https://mavric.si.umich.edu>
Co-Director of DOW Lab
University of Michigan
Email: lprobert at umich.edu
UMSI Website <https://www.si.umich.edu/directory/lionel-robert> | Personal
Website <https://sites.google.com/a/umich.edu/lionelrobert/home>
MAVRIC: https://mavric.si.umich.edu