[AISWorld] SI on Fake News and Deception - AIS Transactions HCI

Dov Te'Eni teeni at tauex.tau.ac.il
Wed Jan 20 11:58:32 EST 2021


AIS Transactions on Human-Computer Interaction
Special Issue on Fake News and Deception
Due Date: February 21, 2021 (extended)
 
Fake news and deception have become part of everyday communication on social media and beyond. In an era of information abundance accessible primarily on the Internet, people are confused not only by the vast quantity of noisy information, which places heavy demands on limited attention, but also by the difficulty of discerning true from misleading and false information. Notably, fake news has become a serious menace in recent years, threatening political processes and institutions, law and order, economic, social, and health infrastructures, and, literally, life and death. Undoubtedly, social media (Facebook, Twitter, Snapchat, WhatsApp, and other Internet-based media) that enable the easy generation and widespread dissemination of false information have accelerated and magnified the grave effects of fake news and deception, and have, justly, drawn the attention of governments and people worldwide.
 
Fake news can be generated and disseminated deliberately or unintentionally to deceive or mislead online users; Hernon (1995) calls these disinformation and misinformation, respectively. People creating and spreading disinformation to drive advertising, sell products, promote an ideology, or influence public attitudes are deceiving intentionally (Subramanian, 2017; DiResta et al., 2019). People receiving fake news from friends without knowing it is false and forwarding it to other friends are not deceiving those friends intentionally. Others may seek support to detect fake news before processing or forwarding it (Torres et al., 2018). It is therefore important to consider both the falsehood of the information and the intention or motivation behind communicating it to others. The nature and effectiveness of online approaches that recipients of fake and misleading information can employ to correct and improve the digital literacy of their peers, whether colleagues, friends, or family, are still open to debate.

It is not, however, only news that is fake in today's everyday communication, whether work-related, personal, or social. Misinformation, disinformation, and deception seem to be growing in social media, and more generally in online communication. These phenomena can now be found in many areas of activity: in organizations and markets (e.g., scamming and manipulation affecting online purchases or online reviews), at school (e.g., cheating in online tests), in medical care (e.g., information dissemination about vaccines, health products, and drug testing), and in social activities (e.g., online dating, bullying, and sexual harassment). In sum, online deception manifests itself in various forms and genres, such as clickbait, hyper-partisan reporting, propaganda, astroturfing, scamming, ad fraud, fake online personas, fabricated online reviews, and inauthentic coordinated and automated activities, among others. The uncertainty and upheaval caused by the SARS-CoV-2 pandemic have provided an opportunity for the intensification of online misinformation and deception, heightening their impacts on political polarization, compliance with public health guidelines, and economic activity. In these troubled times, researchers in information systems have much to contribute to a better understanding of how these various forms of misinformation and deception are enabled and boosted by advances in IT.

Undoubtedly, IT, particularly social media, and its widespread adoption play a crucial role in understanding the phenomena of fake news and deception, but we wish to concentrate on the interaction between IT and user behavior. This socio-technical interaction is at the core of several aspects of the fake news and deception phenomenon. For instance, it is still unclear which technological interventions, such as primes, fact-checking, trust mechanisms, debriefings, and disclaimers, are effective in mitigating the impact of fake news and deception on users, and under what conditions (Figl et al., 2019; Moravec et al., 2019; Ross et al., 2018). In an age where text, images, and videos can be manipulated and artificially synthesized via deep learning and generative adversarial networks (e.g., McGuffie & Newhouse, 2020; Nimmo et al., 2019), it becomes critical to understand how users react (and fail to react) to these emerging and threatening forms of deception, and what education, research, and policy interventions are needed to counter these tactics. There is also much to learn about how online communities moderate online misinformation and deception via technology, rules, and norms. The germination and spread of conspiracy theories (e.g., Johnson et al., 2020; Samory & Mitra, 2018) show that online communities, long seen in a positive light for their ability to generate social support, knowledge, and belonging, have a dark underbelly. Finally, from a supply perspective, the tactics employed by manipulators and deceivers to generate and amplify misleading yet convincing information via online window-dressing, algorithmic gaming, automation, and social proofing (e.g., Salge & Karahanna, 2018) are also in need of academic attention.
  
This THCI special issue (SI) examines the cycle of interactions between IT and users in which these forms of fake news and deception are generated and disseminated through online channels. Specifically, recipients of false information may, knowingly or unknowingly, process, react to, and forward the false information, while, at the same time, users as well as platforms attempt to detect, moderate, and react to false information and inauthentic activity according to norms, regulations, and personal judgments. The cycle continues when the dire effects of false information become public and draw further remedies, such as changes to trust mechanisms, moderation policies and regulations, and computational affordances and algorithms that help detect and react; unfortunately, these remedies also harden manipulators' resolve to identify novel ways to exploit technology loopholes and propagate false information. These cyclical effects lead to further confusion and lower trust, possibly to yet more false information on the Internet, and probably to more effects on human behavior and society that are yet to be determined. While recent research has addressed these different steps from several perspectives, our SI seeks a behavioral (possibly also organizational and social) perspective, not a purely technological one.
 
We therefore call for papers addressing compelling issues around the fake news and deception phenomena related to the behavior of online users and information consumers on the Internet. We seek a wide range of research topics, theories, perspectives, and levels of analysis. We welcome a diversity of methods: qualitative, quantitative, experimental, archival, and design science. We encourage pure HCI-related IS research as well as interdisciplinary research with partners from journalism, communication, psychology, sociology, political science, and other disciplines.
 
Topics
---------
Topics and themes for the SI include, but are not limited to:
●	Fake news, social bots, misinformation, and disinformation related effects on online user behavior
●	User attitudes and behaviors toward fake news and deception, and the effectiveness of user, organizational, and technological remedies (e.g., training, peer influence, computational nudges, and primes)
●	Online misinformation diffusion: its dynamics and its impacts on public attitudes, polarization, reputation, and trust
●	Online misbehavior (scams, deception, and click-bait) and its relation to misinformation
●	Impacts of fake news and deception on users, groups, companies, and/or societies
●	User perceptions of credibility and reputation of news sources, social data, and crowdsourced data
●	User perceptions of fairness, accountability, transparency, and ethics in fake news or misinformation detection and content moderation
●	Platform governance as it affects user behavior
●	Design of algorithms, social bots, curation systems, recommendation systems from a behavioral perspective
●	Technical, behavioral, economic, and regulatory/policy solutions to affect user behavior
●	Interface design for misinformation monitoring, detection, and mitigation with real-time, large-scale, and streaming systems
●	Innovative crowdsourcing and collaborative approaches to counter fake news and deception
●	Online communities: dynamics of spread and moderation of conspiratorial information and fake news
●	Fake reviewers (and consumer reviews) and their effects on decision making and judgment
●	Organized disinformation operations online: tactics, impacts, prevention, and mitigation
●	Methodological innovations for the study of fake news and online deception from an HCI perspective
●	New theories on human-computer interaction around fake news and deception

Timeline and Submission
------------------------------------
Authors are welcome to email an abstract or extended abstract (up to two pages) to the Guest Editors prior to submission if they have questions about their paper's fit with the special issue or if they are concerned about meeting the deadlines.  Papers will be published online in AIS THCI on a rolling basis, as they are accepted after the developmental peer review process. An editorial will frame, package, and promote the collection of papers as the special issue.  Reviewing will be double-blind. You are asked to submit a blinded paper without identifying author information and a non-blind cover page with author information, acknowledgements, and an indication that the paper is intended for the Special Issue on Fake News and Deception. Authors should follow the formatting and length requirements indicated on the AIS THCI website at: https://aisel.aisnet.org/thci/authorinfo.html

To submit a manuscript: 
1) Read the "Information for Authors" and "THCI Policy" pages.
2) Go to http://mc.manuscriptcentral.com/thci
3) Please type: "Fake News and Deception" when presented with the statement: "If this is a submission to a special issue, please enter its name here."

Deadline for full paper submission: 28 February 2021
Notification to authors: no later than 15 May 2021
Latest resubmission: 31 August 2021

If authors submit their full papers before the deadline, their papers will be sent out for peer review at that time, and accordingly, authors may receive an earlier notification.

Guest Editors
-------------------
Dov Te'eni, Tel-Aviv University, Israel, teeni at tauex.tau.ac.il. Dov Te'eni is Professor of IS at the School of Business, Tel Aviv University. Dov currently studies visualization and feedback, combining human and machine intelligence, and technologies for communication and knowledge sharing. Dov has co-authored (with Ping Zhang and Jane Carey) a book published by Wiley, Human-Computer Interaction for Developing Effective Organizational Systems, and co-edited (with David Schwartz) the Encyclopedia of Knowledge Management, as well as other books on information systems and innovation. He is a Past President of the Association for Information Systems (AIS), has served as Senior Editor for MIS Quarterly and AIS Transactions on HCI, and as Editor of the European Journal of Information Systems (EJIS). Dov was awarded the AIS Fellowship (2008) and the LEO Award (2015).
 
Shuk Ying (Susanna) Ho, The Australian National University, susanna.ho at anu.edu.au. Susanna Ho is a Professor of Information Systems. Her doctoral dissertation examined how web personalization influences the behavior of online users, and her current research portfolio reflects a continuing interest in this area. Her research focuses on human-computer interaction, web user behavior, interface design, judgment and decision making, big data, and data analytics. Susanna's research has been published in a number of leading academic journals, including MIS Quarterly, Information Systems Research, Journal of the Association for Information Systems, Information Systems Journal, European Journal of Operational Research, Information and Management, Decision Support Systems, and Journal of Business Ethics. Susanna is a Senior Editor for AIS Transactions on HCI, an Associate Editor for MIS Quarterly, and on the editorial boards of JAIS and CAIS.

Jean-Grégoire Bernard, Victoria University of Wellington, New Zealand, jean-gregoire.bernard at vuw.ac.nz. Jean-Grégoire Bernard is a Senior Lecturer at the Victoria School of Business and Government at Victoria University of Wellington in New Zealand. His research focuses on issues pertaining to digital innovation and online communities.  His work has been published in the Communications of the Association for Information Systems, the International Conference on Information Systems, the AoM Meetings, and Systèmes d'Information & Management. He currently serves as Associate Editor of the Communications of the Association for Information Systems and has been Associate Editor for multiple tracks at the International Conference on Information Systems from 2012 to 2020, service for which he was awarded the 2nd runner-up Outstanding AE Award in 2018. He has reviewed for several academic journals, including JAIS, Journal of Management Information Systems, DATABASE, Journal of Strategic Information Systems, Information & Management, and PLoS ONE.
 
References
1.	DiResta, R., Shaffer, K., Ruppel, B., Sullivan, D., Matney, R., Fox, R., Albright, J., & Johnson, B. The Tactics & Tropes of the Internet Research Agency. New Knowledge Report, US Senate Documents, 2019.
2.	Figl, K., Kießling S., Rank, C., & Vakulenko, S. Fake News Flags, Cognitive Dissonance, and the Believability of Social Media Posts. Proceedings of the International Conference on Information Systems, 2019.
3.	Hernon, P. Disinformation and Misinformation through the Internet: Findings of an Exploratory Study. Government Information Quarterly, 12, 2 (1995), 133-139.
4.	Johnson, N. F., Velásquez, N., Restrepo, N. J., Leahy, R., Gabriel, N., El Oud, S., Zheng, M., Manrique, P., Wuchty, S., & Lupu, Y. The Online Competition Between Pro- and Anti-Vaccination Views. Nature, 582 (May 2020), 230-233.
5.	Moravec, P.L., Minas, R.K., & Dennis, A. Fake News on Social Media: People Believe What They Want to Believe When it Makes No Sense At All. MIS Quarterly 43, 4 (2019), 1343-1360.
6.	McGuffie, K., & Newhouse, A. The Radicalization Risks of GPT-3 and Advanced Neural Language Models. Middlebury Institute of International Studies at Monterey: Center for Terrorism, Extremism, and Counterterrorism, 2020.
7.	Nimmo, B., Eib, S., Tamora, L., Johnson, K., Smith, I., Buziashvili, E., Karan, K., Ponce de León Rosas, E., Rizzuto, M., François, C., Robertson, I. #OperationFFS: Fake Face Swarm. Graphika & DFRLab Joint Report, 2019. https://graphika.com/reports/operationffs-fake-face-swarm/ 
8.	Ross, B., Jung, A., Heisel, J., & Stieglitz, S. Fake News on Social Media: The (In)Effectiveness of Warning Messages. Proceedings of the International Conference on Information Systems, 2018.
9.	Salge, C. A. D. L., & Karahanna, E. Protesting Corruption on Twitter: Is it a Bot or is it a Person?  Academy of Management Discoveries, 4, 1 (2018), 32-49.
10.	Samory, M., & Mitra, T. 'The Government Spies Using Our Webcams': The Language of Conspiracy Theories in Online Discussions. Proceedings of the ACM on Human-Computer Interaction, 2, CSCW (2018), 1-24.
11.	Subramanian, S. Inside the Macedonian fake-news complex. Wired, 2017. https://www.wired.com/2017/02/veles-macedonia-fake-news/
12.	Torres, R., Gerhart, N., & Negahban, A. Epistemology in the Era of Fake News: An Exploration of Information Verification Behaviors Among Social Networking Site Users. ACM SIGMIS Database: The DATABASE for Advances in Information Systems 49, 3, (2018), 78-97.

