[AISWorld] CFP for Second International Workshop on Multimedia Pragmatics (MMPrag'19)

William Grosky wgrosky at umich.edu
Wed Jan 16 13:49:39 EST 2019


CALL FOR PAPERS

SECOND INTERNATIONAL WORKSHOP ON MULTIMEDIA PRAGMATICS (MMPrag'19)
March 30, 2019 - San Jose, California
Co-Located with the IEEE SECOND INTERNATIONAL CONFERENCE ON MULTIMEDIA
INFORMATION PROCESSING AND RETRIEVAL (MIPR'19)
March 28-30, 2019 - San Jose, California

Venue: Crowne Plaza San Jose-Silicon Valley Hotel, 777 Bellew Drive,
Milpitas, California 95035, +1 (408) 321-9500

Submission Website: https://easychair.org/conferences/?conf=mmprag19
Call for Papers: https://easychair.org/cfp/MMPrag19
Workshop Website: http://mipr.sigappfr.org/19/

====================================IMPORTANT DATES======================================================================

January 25, 2019 - Submissions due
February 1, 2019 - Acceptance notification
February 8, 2019 - Camera-ready papers and author registrations due
March 30, 2019   - Workshop date
====================================DESCRIPTION=========================================================================
Most multimedia objects are spatio-temporal simulacra of the real world.
This supports our view that the next grand challenge
for our community will be understanding and formally modeling the flow of
life around us, over many modalities and scales. As
technology advances, the nature of these simulacra will evolve as well,
becoming more detailed and revealing to us more information
concerning the nature of reality.

Currently, IoT is the state-of-the-art organizational approach for
constructing complex representations of the flow of life around us.
Various, perhaps pervasive, sensors, working collectively, will broadcast
to us representations of real events in real time. It
will be our task to continuously extract the semantics of these
representations and possibly react to them by injecting
response actions into the mix to ensure a desired outcome.
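
To make this loop concrete, here is a minimal sketch in Python of the
sense/extract/react cycle described above. Every name in it (SensorEvent,
extract_semantics, choose_response, run_loop) is a hypothetical placeholder
invented for exposition, not the API of any system mentioned in this call:

    # Minimal sketch of the sense -> interpret -> act loop described above.
    from dataclasses import dataclass
    from typing import Iterable, Optional

    @dataclass
    class SensorEvent:
        modality: str    # e.g., "video", "audio", "temperature"
        payload: bytes   # raw representation broadcast by the sensor
        timestamp: float

    def extract_semantics(event: SensorEvent) -> dict:
        """Map a raw sensor representation to a symbolic description."""
        # In practice: a learned model per modality, fused downstream.
        return {"modality": event.modality, "label": "unknown", "confidence": 0.0}

    def choose_response(semantics: dict) -> Optional[str]:
        """Decide whether the interpreted event calls for an action."""
        return "raise_alert" if semantics["confidence"] > 0.9 else None

    def run_loop(stream: Iterable[SensorEvent]) -> None:
        for event in stream:                      # sensors broadcast in real time
            semantics = extract_semantics(event)  # continuous semantic extraction
            action = choose_response(semantics)   # context-dependent decision
            if action is not None:
                print(f"{event.timestamp}: {action}")  # inject a response action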

In linguistics, pragmatics studies context and how it affects semantics.
Context is usually culturally, socially, and historically
based. For example, pragmatics would encompass a speaker’s intent, body
language, and penchant for sarcasm, as well as other signs,
often culturally based, such as the speaker’s type of clothing, which could
influence a statement’s meaning. Generic signal/sensor-based
retrieval should likewise use syntactic, semantic, and pragmatic
approaches. If we are to understand and model the flow
of life around us, this will be a necessity.
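
As a toy illustration of this point (the scores and the sarcasm heuristic
below are invented purely for exposition, not drawn from any cited system),
context can be modeled as a signal that re-weights the literal reading of
an utterance:

    # Toy illustration: pragmatic context re-weighting a literal reading.
    def literal_sentiment(utterance: str) -> float:
        """Syntax/semantics only: +1 positive, -1 negative (crude stub)."""
        return 1.0 if "great" in utterance.lower() else -1.0

    def pragmatic_sentiment(utterance: str, context: dict) -> float:
        """Adjust the literal reading with contextual (pragmatic) cues."""
        score = literal_sentiment(utterance)
        # A speaker with a penchant for sarcasm, or hostile body language,
        # can flip the polarity of an otherwise positive statement.
        if context.get("speaker_sarcastic") or context.get("body_language") == "eye_roll":
            score = -score
        return score

    print(pragmatic_sentiment("Great job.", {}))                           #  1.0
    print(pragmatic_sentiment("Great job.", {"speaker_sarcastic": True}))  # -1.0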

Our community has successfully developed various approaches to decoding the
syntax and semantics of these artifacts. The development
of techniques that use contextual information, however, is still in its
infancy. As the data horizon expands through the ever-increasing use of
metadata, we can certainly raise the semantic representation of all media
to a more robust level.

The NLP community has its own set of approaches in semantics and
pragmatics. Natural language is certainly an excellent exemplar of
multimedia, and the use of audio and text features has played a part in the
development of our field.

After a successful first workshop in Miami, we intend to continue the
tradition with this second edition.

====================================KEYNOTES============================================================================
Keynote 1 -- Adam Pease, Principal Scientist, Infosys Foothill Research,
Palo Alto, California, USA

*Title: Conceptual Pragmatics: A Library of Logical Definitions

*Abstract: What is an apple, a jump, or the number 2, and how can we hope to
have a computer understand these things with any of the same
depth or richness that people do?  We now have machine learning systems
that can mimic, at some level, human sensory subsystems,
recognizing objects in pictures, or voices and words in streams of audio.
But we also need a cognitive-level representation - one that
can not only recognize patterns but also hold information about those
patterns that allows for explanation and communication.  A person
can describe a previously unseen object to another person, who can then
recognize it and understand its characteristics before seeing
it even once, let alone a million times.  Someone who has never seen a
child skip can still be told how to recognize skipping.  We can
tell another person the context of skipping as well: an isolated action, or
the likely contexts in which such actions occur.

In this talk I describe a unified corpus of logically-expressed and
computable meaning about concepts that has application in language
and image understanding.  It is a library of pragmatics that can be used to
express facts independently of whether they are learned over
many presentations of visual or auditory data, or related in communication.
I also describe its application in image recognition and
language understanding.

*Biography: Adam Pease is a Principal Scientist at the Infosys Foothill
Research Center in Palo Alto.  He has led research in ontology,
linguistics, and formal inference, including development of the Suggested
Upper Merged Ontology (SUMO), the Controlled English to Logic
Translation (CELT) system, and the Sigma knowledge engineering environment.
Sharing research under open licenses, in order to achieve
the widest possible dissemination and technology transfer, has been a core
element of his research program. He is the author of the
book “Ontology: A Practical Guide”.
------------------------------------
Keynote 2 -- Amit Sheth -- Professor and Executive Director of Kno.e.sis,
Wright State University, Dayton, Ohio, USA

*Title: On Exploiting Multimodal Information for Machine Intelligence and
Natural Interactions - With Examples from Health Chatbots

*Abstract: The Holy Grail of machine intelligence is the ability to mimic
the human brain. In computing, we have created silos for dealing
with each modality (text/language processing, speech processing, image
processing, video processing, etc.). However, the human brain’s
cognitive and perceptual capability to seamlessly consume (listen and see)
and communicate (writing/typing, voice, gesture) multimodal
(text, image, video, etc.) information challenges machine intelligence
research. Emerging chatbots for demanding health applications
present the requirements for these capabilities. To support the
corresponding data analysis and reasoning needs, we have to explore a
pedagogical framework consisting of semantic computing, cognitive
computing, and perceptual computing (http://bit.ly/w-SCP). In particular,
we have been motivated by the brain’s amazing perceptive power, which
abstracts massive amounts of multimodal data by filtering and processing
them into a few concepts (representable by a few bits) to act upon. From
the information processing perspective, this requires moving from
syntactic and semantic big data processing to actionable information that
can be woven naturally into human activities and experience
(http://bit.ly/w-CHE).

Exploration of the above research agenda, including powerful use cases, is
afforded by a growing number of emerging technologies and their
applications, such as chatbots and robotics. In this talk, I will provide
these examples and share the early progress we have made towards
building health chatbots (http://bit.ly/H-Chatbot) that consume
contextually relevant multimodal data and support different
forms/modalities of interaction to achieve various alternatives for
digital health (http://bit.ly/k-APH). I will also discuss the indispensable
role of domain knowledge and personalization, using domain and personalized
knowledge graphs as part of various reasoning and learning techniques.

*Biography: Amit Sheth is an educator, researcher, and entrepreneur. He is
the LexisNexis Ohio Eminent Scholar, an IEEE Fellow, an AAAI
Fellow, and the executive director of Kno.e.sis, the Ohio Center of
Excellence in Knowledge-enabled Computing and a multidisciplinary
center of excellence in BioHealth Innovation. Its faculty
and researchers are computer scientists, cognitive scientists,
biomedical researchers, and clinicians. Sheth is working towards a vision
of Computing for Human Experience enabled by the capabilities at
the intersection of AI (semantic, cognitive, and perceptual computing), Big
and Smart Data (exploiting multimodal Physical-Cyber-Social
data), and Augmented Personalized Health. His recent work has involved Web
3.0 technologies and enterprise, social, and sensor/IoT
data and applications.

====================================AREAS===============================================================================

Authors are invited to submit regular papers (6 pages), short papers (4
pages), demo papers (4 pages), and extended abstracts (1 page max
for a 5-minute presentation) at
https://easychair.org/conferences/?conf=mmprag19.

Cross-cultural contributions are encouraged. Topics of interest include,
but are not limited to:

- Affective computing
- Annotation techniques for natural language/images/videos/other
sensor-based modalities
- Applications to ecology, environmental science, health sciences, social
sciences
- Computational semiotics
- Deception detection
- Digital humanities
- Distributional semantics
- Education and tutoring systems
- Event modeling, recognition, and understanding
- Gesture modeling, recognition, and understanding
- Human-machine interaction
- Integration of multimodal features
- Machine learning for multimodal interaction
- Multimodal analysis of human behavior
- Multimodal data modeling, dataset development, sensor fusion
- Ontologies
- Semantic-based modeling and retrieval
- Storytelling
- Structured semantic embeddings
- Word, sentence, and feature embeddings - generation, semantic property
  discovery, corpus dependencies, sensitivity analysis, retrieval aids

To be included in the IEEE Xplore Digital Library, accepted papers must be
registered and presented.


====================================ORGANIZATION========================================================================

Chairs:
  R. Chbeir, University of Pau, FR (richard.chbeir at univ-pau.fr)
  W. Grosky, University of Michigan-Dearborn, US (wgrosky at umich.edu)

Program Committee:
  Wael Abd-Almageed, ISI, USA
  Mohamed Abouelenien, University of Michigan-Dearborn, USA
  Rajeev Agrawal, ITL, ERDC, USA
  Akiko Aizawa, National Institute of Informatics, Japan
  Yiannis Aloimonos, University of Maryland, USA
  Anya Belz, University of Brighton, UK
  Renaldo Bonacin, CTI, Brazil
  Fabricio Olivetti de Franca, Federal University of ABC, Brazil
  Julia Hirschberg, Columbia University, USA
  David Hogg, University of Leeds, UK
  Ashutosh Jadhav, IBM, USA
  Clement Leung, Hong Kong Baptist University, China
  Debanjan Mahata, Bloomberg, USA
  David Martins, Federal University of ABC, Brazil
  Adam Pease, Articulate Software, USA
  James Pustejovsky, Brandeis University, USA
  Terry Ruas, University of Michigan-Dearborn, USA
  Victoria Rubin, University of Western Ontario, Canada
  Shin'ichi Satoh, National Institute of Informatics, Japan
  Amit Sheth, Wright State University, USA
  Peter Stanchev, Kettering University, USA
  Joe Tekli, Lebanese American University, Lebanon


-- 
William Grosky
Professor
Department of Computer and Information Science
University of Michigan-Dearborn
4901 Evergreen Road
Dearborn, MI 48128
Email: wgrosky at umich.edu
Web: http://umdearborn.edu/users/wgrosky
Office Phone: +1.313.583.6424
Department Phone: +1.313.436.9145


