[AISWorld] CFP JMIS Special Issue on Systems Designed to Detect Deception, Fraud, Malicious Intent and Insider Threat

Giboney, Justin S jgiboney at albany.edu
Fri Jul 3 10:24:25 EDT 2015


Call for papers: Journal of Management Information Systems special issue on Systems Designed to Detect Deception, Fraud, Malicious Intent and Insider Threat


== Special Issue Co-Editors ==

Jay F. Nunamaker Jr., University of Arizona

Judee K. Burgoon, University of Arizona


Security organizations are increasingly using technology to enhance their screening of human behavior and communication. Screening technology examines human communication for signs of deceit, malicious intent, fraud, or other threats. However, many of these technologies are still nascent and need further development and theoretical grounding.


This special issue will advance theoretical, design and process knowledge regarding systems intended to detect deception, fraud, malicious intent and insider threat. Because screening research spans a large number of contexts, the scope of the special issue is correspondingly broad. Although this area of research draws on many foundational disciplines (e.g., communication and computer science), priority will be given to manuscripts addressing the development or use of systems for screening purposes.


The special issue will cover all topics related to systems designed to detect deception, fraud, malicious intent and insider threat. The following is a non-exhaustive list of sample research topics:


- Deception detection systems

- Credibility assessment systems

- User acceptance of credibility assessment systems

- Trust evaluation systems

- Deception sensor systems

- Fraud detection systems

- Disclosure of sensitive information to systems

- Automated human identification

- Automated intent identification

- Deception detection training systems

- Interviewing system techniques

- Surveillance systems

- Automated screening processes

- Human risk assessment systems

- Automated insider threat detection

- Human-computer interfaces for deception detection


This special issue welcomes research regarding systems designed to detect deception, fraud, malicious intent and insider threat. However, we cannot accommodate studies or topics that are largely computer-computer interaction, human-human interaction or archival analysis. Preference will be given to topics related to systems and their performance over general deception/fraud detection.


== Submission Guidelines ==

The special issue will contain approximately six to seven papers, subject to a strict page limit. Submitted manuscripts should make a significant and novel contribution to the topic. Contributions can be theoretical or design oriented; however, all contributions should be supported by strong evidence. All research paradigms and methodologies are welcome, and interdisciplinary collaboration is encouraged. Strong preference will be given to papers that contribute novel, distinctive and non-trivial insights to the topic, rather than simple extensions or replications.


Abstract submissions should be emailed to the editorial coordinator, Justin Scott Giboney (jgiboney at albany.edu). We will verify that abstracts are relevant to the special issue. Only papers with approved abstracts will be considered for submission to the special issue.


== Important Dates ==

- Abstract submissions: September 1, 2015 (or earlier)

- Full paper due: October 1, 2015

- First round reviews provided to authors: January 1, 2016

- Paper revisions due: March 15, 2016

- Final decision on acceptance of papers: May 15, 2016


== Questions ==

Please send questions to the editorial coordinator, Justin Scott Giboney (jgiboney at albany.edu).


== Non-exhaustive Example Literature ==

Biros, D. P., Daly, M., & Gunsch, G. (2004). The influence of task load and automation trust on deception detection. Group Decision and Negotiation, 13(2), 173-189.


Cao, J., Crews, J. M., Lin, M., Burgoon, J., & Nunamaker, J. F. (2003). Designing Agent99 trainer: A learner-centered, web-based training system for deception detection. In Intelligence and Security Informatics (pp. 358-365). Springer Berlin Heidelberg.


Derrick, D. C., Elkins, A. C., Burgoon, J. K., Nunamaker Jr, J. F., & Zeng, D. D. (2010). Border security credibility assessments via heterogeneous sensor fusion. IEEE Intelligent Systems, (3), 41-49.


Elkins, A. C., Dunbar, N. E., Adame, B., & Nunamaker, J. F. (2013). Are users threatened by credibility assessment systems? Journal of Management Information Systems, 29(4), 249-262.


Meservy, T. O., Jensen, M. L., Kruse, J., Burgoon, J. K., Nunamaker Jr, J. F., Twitchell, D. P., Tsechpenakis, G., & Metaxas, D. N. (2005). Deception detection through automatic, unobtrusive analysis of nonverbal behavior. IEEE Intelligent Systems, 20(5), 36-43.


Nunamaker, J. F., Derrick, D. C., Elkins, A. C., Burgoon, J. K., & Patton, M. W. (2011). Embodied conversational agent-based kiosk for automated interviewing. Journal of Management Information Systems, 28(1), 17-48.


Twyman, N. W., Elkins, A. C., Burgoon, J. K., & Nunamaker, J. F. (2014). A rigidity detection system for automated credibility assessment. Journal of Management Information Systems, 31(1), 173-202.


Zhou, L., Burgoon, J. K., Twitchell, D. P., Qin, T., & Nunamaker Jr, J. F. (2004). A comparison of classification methods for predicting deception in computer-mediated communication. Journal of Management Information Systems, 20(4), 139-166.


Zhou, L., & Zhang, D. (2007). Typing or messaging? Modality effect on deception detection in computer-mediated communication. Decision Support Systems, 44(1), 188-201.


Zhou, L., & Zhang, D. (2008). Following linguistic footprints: Automatic deception detection in online communication. Communications of the ACM, 51(9), 119-122.


