[AISWorld] Newly published papers of JCSE (Jun. 2022)

JCSE office at kiise.org
Tue Aug 16 02:23:20 EDT 2022


Dear Colleague:

 

We are pleased to announce the release of a new issue of Journal of
Computing Science and Engineering (JCSE), published by the Korean Institute
of Information Scientists and Engineers (KIISE). KIISE is the largest
organization for computer scientists in Korea with over 4,000 active
members. 

 

Journal of Computing Science and Engineering (JCSE) is a peer-reviewed
quarterly journal that publishes high-quality papers on all aspects of
computing science and engineering. JCSE aims to foster communication
between academia and industry within the rapidly evolving field of
computing science and engineering. The journal is intended to promote
problem-oriented research that fuses academic and industrial expertise. The
journal focuses on emerging computer and information technologies
including, but not limited to, embedded computing, ubiquitous computing,
convergence computing, green computing, smart and intelligent computing,
and human computing. JCSE publishes original research contributions,
surveys, and experimental studies that report scientific advances.

 

Please take a look at our new issue posted at http://jcse.kiise.org.
All the papers can be downloaded from the Web page.

 

The contents of the latest issue of Journal of Computing Science and
Engineering (JCSE)

Official Publication of the Korean Institute of Information Scientists and
Engineers

Volume 16, Number 2, June 2022

 

pISSN: 1976-4677

eISSN: 2093-8020

 

* JCSE web page: http://jcse.kiise.org

* e-submission: http://mc.manuscriptcentral.com/jcse

 

Editor in Chief: Insup Lee (University of Pennsylvania)

Il-Yeol Song (Drexel University) 

Jong C. Park (KAIST)

Taewhan Kim (Seoul National University)

 

 

JCSE, vol. 16, no. 2, June 2022

 

[Paper One]

- Title: Semantic Vector Learning and Visualization with Semantic Cluster
Using Transformers in Natural Language Understanding

- Authors: Sangkeun Jung

- Keyword: Semantic vector; Semantic vector learning; Natural language
understanding; Transformer; Cluster-aware; Visualization

 

- Abstract

Natural language understanding (NLU) is a fundamental technology for
implementing natural interfaces. Recent work on sentence embeddings and on
the correspondence between text and its extracted semantic knowledge,
called the semantic frame, has shown that a semantic vector representation
is key to implementing or supporting robust NLU systems. Herein, we propose
an extension of cluster-aware modeling with various types of pre-trained
transformers to account for the many-to-one relationships between text,
semantic frames, and semantic clusters. To this end, we define the semantic
cluster and design the relationships between cluster members to learn
semantically meaningful vector representations. In addition, we introduce
novel ensemble methods to improve semantic vector applications in NLU,
i.e., similarity-based intent classification and semantic search.
Furthermore, novel semantic vector and corpus visualization techniques are
presented. Using the proposed framework, we demonstrate that the proposed
model can learn meaningful semantic vector representations on the ATIS,
SNIPS, SimM, and Weather datasets.
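
As a rough, self-contained illustration of similarity-based intent
classification over semantic vectors (not the paper's model: the vectors
and intent names below are made up, whereas the paper learns them with
cluster-aware transformer encoders), nearest-centroid assignment by cosine
similarity looks like this:

```python
import math

def cosine(u, v):
    # cosine similarity between two vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical semantic cluster centroids, one per intent.
centroids = {
    "book_flight": [0.9, 0.1, 0.0],
    "get_weather": [0.0, 0.2, 0.9],
}

def classify(vec):
    # Assign the query vector to the most similar cluster centroid.
    return max(centroids, key=lambda intent: cosine(vec, centroids[intent]))

query = [0.8, 0.3, 0.1]   # made-up embedding of an unseen utterance
print(classify(query))    # prints "book_flight"
```

The same nearest-centroid lookup, run over a corpus instead of intents,
gives a bare-bones semantic search.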

To obtain a copy of the entire article, click on the link below.
JCSE, vol. 16, no. 2, pp.63-78
<http://jcse.kiise.org/PublishedPaper/year_abstract.asp?idx=403&page_url=Current_Issues> 

 

[Paper Two]

- Title: Improving Speed of MUX-FSM-based Stochastic Computing for
On-device Neural Networks

- Authors: Jongsung Kang and Taewhan Kim

- Keyword: Neural processing unit; Hardware; Stochastic computing; Embedded
systems

 

- Abstract

We propose an acceleration technique for processing multiplication
operations using stochastic computing (SC) in on-device neural networks.
Recently, multiplexer-driven finite state machine (MUX-FSM)-based SC, which
employs a MUX controlled by an FSM to generate a (repeated but short) bit
sequence of a binary number to count up for a multiplication operation, has
considerably reduced the processing time of MAC operations over traditional
stochastic number generator (SNG)-based SC. Nevertheless, existing
MUX-FSM-based SC still does not meet the multiplication processing time
required for the wide adoption of on-device neural networks in practice,
even though it offers a very economical hardware implementation. In this
respect, this work proposes a solution that speeds up conventional
MUX-FSM-based SC. Precisely, we analyze the bit counting pattern produced
by the MUX-FSM and replace the counting redundancy with a shift operation,
significantly shortening the required bit sequence, and we analytically
formulate the number of computation cycles. Experiments show that the
enhanced SC technique reduces the processing time by 44.1% on average over
conventional MUX-FSM-based SC.
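
The counting-redundancy idea can be sketched in software (a toy model, not
the paper's hardware design): if bit i of an operand is effectively counted
over 2**i cycles, the repeated counting collapses into a single shift-and-add
per bit, which is the kind of redundancy removal that shortens the cycle
count.

```python
def count_cycles(bits):
    # bits[i] is the i-th bit, LSB first; naive cycle-by-cycle accumulation
    total = 0
    for i, b in enumerate(bits):
        for _ in range(2 ** i):   # bit i is selected for 2**i cycles
            total += b
    return total

def shift_once(bits):
    # same value obtained with one shift-and-add per bit
    return sum(b << i for i, b in enumerate(bits))

bits = [1, 0, 1, 1]               # value 0b1101 = 13, LSB first
assert count_cycles(bits) == shift_once(bits) == 13
```

The first function needs 2**n - 1 additions for an n-bit operand; the
second needs n, mirroring (in spirit only) the cycle savings the paper
formulates analytically.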

To obtain a copy of the entire article, click on the link below.
JCSE, vol. 16, no. 2, pp.79-87
<http://jcse.kiise.org/PublishedPaper/year_abstract.asp?idx=404&page_url=Current_Issues> 

 

[Paper Three]

- Title: Application of Speech Recognition Interaction and Internet of
Things in Data Mining

- Authors: Kan Wang

- Keyword: Speech recognition interaction; Internet of Things technology;
Data mining; Speech recognition

 

- Abstract

Current data mining technology cannot support voice-based database
retrieval, and the data mining process carries a high risk of interference.
Therefore, the application of speech recognition interaction and Internet
of Things (IoT) technology in data mining has been investigated. Using a
speech recognition engine to recognize a user's intention, a database
retrieval model based on speech recognition interaction has been
constructed. To enhance the security of data mining, the IoT data were
classified by differential privacy clustering, and false IoT data features
were detected efficiently. Finally, data mining was completed by combining
data fusion with a Bayesian classifier. Experimental results demonstrated
that the accuracy of the proposed method is over 90%, the data fusion and
data mining times are shorter, the precision is higher, and the false alarm
rate is below 5%. 
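
A minimal Bernoulli naive Bayes sketch gives a feel for the classifier
stage (a generic stand-in, not the paper's exact method; the binary IoT
feature data below are made up for illustration):

```python
import math

# Training data: rows of binary features, labels 1 = genuine, 0 = false data
train_x = [[1, 0], [1, 1], [0, 1], [0, 0]]
train_y = [1, 1, 0, 0]

def fit(xs, ys, alpha=1.0):
    # per-class priors and per-feature probabilities with Laplace smoothing
    model = {}
    for c in set(ys):
        rows = [x for x, y in zip(xs, ys) if y == c]
        probs = [(sum(r[j] for r in rows) + alpha) / (len(rows) + 2 * alpha)
                 for j in range(len(xs[0]))]
        model[c] = (len(rows) / len(ys), probs)
    return model

def predict(model, x):
    def logp(c):
        prior, probs = model[c]
        return math.log(prior) + sum(
            math.log(p if v else 1 - p) for p, v in zip(probs, x))
    return max(model, key=logp)

model = fit(train_x, train_y)
assert predict(model, [1, 0]) == 1
```

In the paper's pipeline this step would run after differential privacy
clustering and data fusion rather than on raw records.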

To obtain a copy of the entire article, click on the link below.
JCSE, vol. 16, no. 2, pp.88-96
<http://jcse.kiise.org/PublishedPaper/year_abstract.asp?idx=405&page_url=Current_Issues> 

 

[Paper Four]

- Title: Design of Intelligent Information Monitoring System for
Distribution Network and Adjustment of Alarm Threshold

- Authors: Xiang Ma, Jianye Cui, Zhongming Xiang, Haoliang Du, and Jianfeng
Huang

- Keyword: Machine learning; Distribution network; Intelligent monitoring;
Information system; Alarm threshold

 

- Abstract

Current distribution network information monitoring systems produce a great
deal of false alarm information, which introduces redundant interference at
the fault alarm threshold and makes it difficult to ensure the alarm
accuracy of the monitoring system. We design an intelligent information
monitoring system for the distribution network together with an alarm
threshold adjustment method based on machine learning. The physical layer
of the system collects the operation status information of each line and
piece of equipment in the distribution network through various sensors and
transfers it to the data layer. The data layer extracts, processes, and
classifies the received information, stores it in the database, identifies
abnormal information in the information base, and adjusts the alarm
threshold using the fuzzy clustering method in machine learning, realizing
intelligent monitoring of the distribution network. Test results show that
abnormal information is detected well: abnormal information in the data can
be obtained accurately, the target categories of abnormal information can
be clustered according to their eigenvalues, and the threshold adapts well,
balancing human, machine, and power grid operation states during
distribution network monitoring and ensuring real-time and reliable
monitoring and alarm results. 
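
To make the threshold-adaptation idea concrete, here is a toy sketch that
places an alarm threshold between two clusters of 1-D sensor readings
(plain two-center clustering as a simple stand-in for the paper's fuzzy
clustering; the readings are made up):

```python
def adaptive_threshold(values, iters=20):
    # two-cluster 1-D clustering: lo = "normal" center, hi = "abnormal" center
    lo, hi = min(values), max(values)
    for _ in range(iters):
        a = [v for v in values if abs(v - lo) <= abs(v - hi)]
        b = [v for v in values if abs(v - lo) > abs(v - hi)]
        if a:
            lo = sum(a) / len(a)
        if b:
            hi = sum(b) / len(b)
    return (lo + hi) / 2          # alarm threshold between the clusters

readings = [0.9, 1.1, 1.0, 5.2, 4.8, 1.05]   # hypothetical sensor values
t = adaptive_threshold(readings)
assert all(v > t for v in (5.2, 4.8)) and all(v < t for v in (0.9, 1.1))
```

Recomputing the threshold as new readings arrive is what lets it track
changing operating conditions instead of staying fixed.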

To obtain a copy of the entire article, click on the link below.
JCSE, vol. 16, no. 2, pp.97-104
<http://jcse.kiise.org/PublishedPaper/year_abstract.asp?idx=406&page_url=Current_Issues> 

 

[Paper Five]

- Title: Regularized Convolutional Neural Network for Highly Effective
Parallel Processing

- Authors: Sang-Soo Park and Ki-Seok Chung

- Keyword: Heterogeneous system; GPGPU; Parallel processing; OCR; Diverse
branch

 

- Abstract

Convolutional neural networks (CNNs) have been adopted in various areas.
Using a graphics processing unit (GPU), CNN execution can be accelerated,
and many studies have proposed such acceleration methods. However,
parallelizing a CNN on a GPU is not straightforward because generating
output feature maps in typical CNN models involves irregular
characteristics. In this paper, we propose a method that maximizes GPU
utilization by modifying the convolution combinations of a well-known CNN,
LeNet-5. Our regularized implementation on a heterogeneous system has
achieved an improvement of up to 37.26 times in the convolution and
sub-sampling layers. Further, an energy consumption reduction of up to
26.40 times is achieved. 

To obtain a copy of the entire article, click on the link below.
JCSE, vol. 16, no. 2, pp.105-112
<http://jcse.kiise.org/PublishedPaper/year_abstract.asp?idx=407&page_url=Current_Issues> 

 

[Paper Six]

- Title: Review of Optimal Convolutional Neural Network Accelerator
Platforms for Mobile Devices

- Authors: Hyun Kim

- Keyword: Convolutional neural networks; Mobile device; Network
compression; Hardware accelerator; Low-power

 

- Abstract

In recent years, convolutional neural networks (CNNs) have achieved
remarkable performance enhancement, and researchers have endeavored to use
CNN applications on power-constrained mobile devices. Accordingly, low-
power and high-performance CNN accelerators for mobile devices are
receiving significant attention. This paper presents the overall process of
designing optimal CNN accelerator platforms for mobile devices based on
algorithm, architecture, and memory system co-design while introducing
various existing studies related to specific research fields. 

To obtain a copy of the entire article, click on the link below.
JCSE, vol. 16, no. 2, pp.113-119
<http://jcse.kiise.org/PublishedPaper/year_abstract.asp?idx=408&page_url=Current_Issues> 

 

[Call For Papers]

Journal of Computing Science and Engineering (JCSE), published by the
Korean Institute of Information Scientists and Engineers (KIISE), is
devoted to the timely dissemination of novel results and discussions on all
aspects of computing science and engineering, divided into Foundations,
Software & Applications, and Systems & Architecture. Papers are solicited
in all areas of computing science and engineering. See the JCSE home page
at http://jcse.kiise.org for the subareas.

The journal publishes regularly submitted papers, invited papers, selected
best papers from reputable conferences and workshops, and thematic issues
that address hot research topics. Potential authors are invited to submit
their manuscripts electronically, prepared as PDF files, through
http://mc.manuscriptcentral.com/jcse, where ScholarOne is used for online
submission and review. Authors are especially encouraged to submit papers
of around 10 but not more than 30 double-spaced pages in 12-point type. The
corresponding author's full postal and e-mail addresses, telephone and fax
numbers, and current affiliation information must be given on the
manuscript. Further inquiries are welcome at the JCSE Editorial Office,
office at kiise.org (phone: +82-2-588-9240; fax: +82-2-521-1352).

 



More information about the AISWorld mailing list