Figure 1. Schematic showing multiple sensing modalities contributing to perception and cognition, illustrating the pursuit of next-generation e-skin systems.

machine learning algorithms for device-level multimodal perception. Moreover, integrating sensing and computing parts in a planar configuration may reduce the space available for detecting the surrounding physical environment and thus disturb the signals. Novel three-dimensional stacking designs are needed for high communication bandwidth and low latency[35]. With a deepening understanding of neuroscience and rapid advances in algorithms and devices, endowing artificial skin with multimodal perception is becoming possible. A timely review of progress in this burgeoning field of e-skins is therefore warranted.
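As a toy illustration of what device-level multimodal perception might look like in software, the Python sketch below concatenates features from two sensing modalities before a single classifier, i.e., feature-level (early) fusion. It is an illustrative assumption rather than a scheme from the cited works: the feature dimensions, the random data, the labels, and the scikit-learn classifier are all hypothetical.

    # Minimal early-fusion sketch: features from two e-skin modalities (e.g.,
    # pressure and temperature maps) are concatenated and fed to one classifier.
    # Purely illustrative; real device-level perception would run on-sensor.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 200
    pressure_feats = rng.normal(size=(n, 8))   # hypothetical modality-1 features
    thermal_feats = rng.normal(size=(n, 4))    # hypothetical modality-2 features
    labels = (pressure_feats[:, 0] + thermal_feats[:, 0] > 0).astype(int)

    # Feature-level (early) fusion: concatenate modalities, then classify
    fused = np.concatenate([pressure_feats, thermal_feats], axis=1)
    clf = LogisticRegression().fit(fused, labels)
    print("training accuracy:", clf.score(fused, labels))

Early fusion of this kind is the simplest baseline; the three-dimensional stacking designs mentioned above are one route to making such fusion feasible on-device with high bandwidth and low latency.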


This perspective surveys the recent landscape of e-skins with multimodal sensory fusion and some intriguing future trends for their development. First, we briefly introduce the neurological mechanism of multisensory integration in cerebral cortical networks to provide a theoretical basis for fusing multimodal sensors in e-skins. Burgeoning multifunctional wearable e-skin systems are then summarized and categorized into three main subfields: (i) multimodal physical sensor systems; (ii) multimodal physical and electrophysiological sensor systems; and (iii) multimodal physical and chemical sensor systems. Self-decoupling materials and novel mechanisms that suppress signal interference between multiple sensing modalities are discussed (a software analogue of such decoupling is sketched below). We then discuss state-of-the-art research on e-skin systems that fuse multisensory information through bottom-up and top-down approaches. Finally, we explore future trends for e-skin systems with multimodal sensing and perceptual fusion.
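To make the interference-suppression idea concrete, the following minimal sketch shows how two coupled sensor channels can be decoupled in software with a calibration matrix. It assumes a purely linear cross-sensitivity model with made-up coefficients (the matrix S, the channel readings, and the decouple helper are all hypothetical); self-decoupling materials achieve the same separation physically rather than in post-processing.

    # Hypothetical decoupling sketch for a bimodal e-skin sensor whose raw
    # readout mixes pressure and temperature responses. Assumes a linear
    # cross-sensitivity model calibrated offline.
    import numpy as np

    # Assumed calibration matrix S (from offline characterization):
    # [raw_1]   [s11 s12] [pressure]
    # [raw_2] = [s21 s22] [temperature]
    S = np.array([[1.00, 0.15],   # channel 1: mostly pressure, slight thermal drift
                  [0.08, 1.00]])  # channel 2: mostly temperature, slight strain pickup

    S_inv = np.linalg.inv(S)      # decoupling matrix, valid while S is well-conditioned

    def decouple(raw: np.ndarray) -> np.ndarray:
        """Recover (pressure, temperature) stimuli from coupled raw readings."""
        return S_inv @ raw

    raw_reading = np.array([0.62, 0.31])        # coupled channel outputs (a.u.)
    pressure, temperature = decouple(raw_reading)
    print(f"pressure={pressure:.3f} a.u., temperature={temperature:.3f} a.u.")

When cross-sensitivities are nonlinear or drift over time, a fixed calibration matrix breaks down, which is one motivation for the materials-level self-decoupling strategies discussed in this review.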

               NEUROLOGICAL BASIS OF MULTISENSORY INTEGRATION
Receptors distributed throughout the body detect and encode multimodal signals for somatosensation (thermoreceptors, touch receptors, and nociceptors), vision (retina), audition (cochlea), olfaction (odorant receptors), and gustation (taste buds)[1,3,37,38]. Through afferent pathways, the encoded spike trains from multiple modalities are transmitted to the central nervous system, where the integration of multimodal information takes place[39,40]. For multisensory perception fusion in the cerebral cortices, bottom-up and top-down multisensory processing are two commonly discussed mechanisms. The bottom-up processing of multisensory stimuli can be further described as three main procedures: