Industrial Robot Journal
CALL FOR PAPERS
for Special Issue on
Cross-Modal Perception for Industrial Robots

Cross-modal perception means combining various sources and types of sensory data to gain a better understanding of the robot’s environment than could be achieved from individual sensors. The key to cross-modal perception is to establish a unified representation across multi-modal data sources. Cross-modal learning has been extensively utilized in web search and human-computer interaction, but has so far seen little application in industrial robots. Modern industrial robots carry many sensors, yet multi-modal integration and cross-modal transfer remain difficult problems. The first challenge stems from the heterogeneous gap: there are significant feature-space differences between modalities, which make direct comparison or matching between them difficult. The second challenge stems from weak pairing: it is difficult to establish sample-level correspondence between modalities. These challenges give rise to many unsolved problems worthy of research.

This special issue focuses on cross-modal learning technology for modern industrial robots, addressing both original theoretical developments and engineering applications of cross-modal learning and transfer. We are soliciting original contributions from active researchers and practitioners in academia as well as industry.

Guest Editors:  
Dr. Huaping Liu (hpliu@tsinghua.edu.cn), Tsinghua University, P.R. China
Dr. Yuan Yuan, Brunel University.

When you submit your paper, please include a Letter to the Editor stating that it is intended for this Special Issue.

Submission deadline: 1 June 2018

IEEE Transactions on Automation Science and Engineering
Special Issue on
Active Perception for Industrial Intelligence

Information technologies are permeating all aspects of manufacturing systems as well as other fields, expediting the generation of industrial big data. Traditionally, devices collected sensor data from various sources and information fusion was performed afterwards, incurring substantial time and storage costs. Recently, more and more intelligent devices are being deployed in industrial environments, providing new opportunities for better data collection and processing for industrial intelligence. Active perception technology, which applies control strategies to the data acquisition process itself, enables devices to seamlessly integrate perception and action so as to reach high-level goals rather than merely execute low-level commands. It helps to select more useful information and may extend sensor lifetime. However, many challenging problems remain unsolved, since the feedback operates on complex processed sensory data, i.e., various extracted features. In addition, the performance evaluation of active perception is far from settled, since benchmark datasets are difficult to establish.

This special issue focuses on active perception technology for industrial intelligence, addressing both original algorithmic developments and new applications of active perception. We are soliciting original contributions from leading researchers and practitioners in academia as well as industry that address a wide range of theoretical and application issues in active perception for industrial intelligence. Topics for this special issue include, but are not limited to:

Architecture for industrial active perception
Active perception algorithms
Collaborative active perception
Multi-modal active perception
Self-configuration technology for industrial intelligence
Deep learning for active perception
Reinforcement learning for active perception
Robotic active perception for industrial intelligence
Benchmarks for active perception
Industrial applications of active perception
Deadlines:
September 1, 2018: paper submission deadline.
December 1, 2018: completion of the first round review.
April 1, 2019: completion of the second round review.
July 1, 2019: final manuscripts due.
October 2019: tentative publication date.
Guest Editors:
Huaping Liu, Tsinghua University, hpliu@tsinghua.edu.cn
Nathan Lepora, University of Bristol, n.lepora@bristol.ac.uk
Andrea Cherubini, Université de Montpellier, andrea.cherubini@lirmm.fr