<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcterms="http://purl.org/dc/terms/" xmlns:cc="http://web.resource.org/cc/" xmlns:prism="http://prismstandard.org/namespaces/basic/2.0/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:admin="http://webns.net/mvcb/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel rdf:about="https://www.oaepress.com/ir">
    <title>Intelligence &amp; Robotics</title>
    <description>Latest open access articles published in Intelligence &amp; Robotics at https://www.oaepress.com/ir</description>
    <link>https://www.oaepress.com/ir</link>
    <admin:generatorAgent rdf:resource="https://www.oaepress.com/ir"/>
    <admin:errorReportsTo rdf:resource="mailto:editorial@intellrobot.com"/>
    <dc:publisher>OAE Publishing Inc.</dc:publisher>
    <dc:language>en</dc:language>
    <dc:rights>Creative Commons Attribution (CC-BY)</dc:rights>
    <prism:copyright>OAE Publishing Inc.</prism:copyright>
    <prism:rightsAgent>editorial@intellrobot.com</prism:rightsAgent>
    <image rdf:resource="https://i.oaes.cc/upload/journal_logo/ir.png"/>
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://www.oaepublish.com/articles/ir.2026.09"/>
        <rdf:li rdf:resource="https://www.oaepublish.com/articles/ir.2026.07"/>
        <rdf:li rdf:resource="https://www.oaepublish.com/articles/ir.2026.08"/>
        <rdf:li rdf:resource="https://www.oaepublish.com/articles/ir.2026.06"/>
        <rdf:li rdf:resource="https://www.oaepublish.com/articles/ir.2026.05"/>
        <rdf:li rdf:resource="https://www.oaepublish.com/articles/ir.2026.04"/>
        <rdf:li rdf:resource="https://www.oaepublish.com/articles/ir.2026.03"/>
        <rdf:li rdf:resource="https://www.oaepublish.com/articles/ir.2026.02"/>
        <rdf:li rdf:resource="https://www.oaepublish.com/articles/ir.2026.01"/>
      </rdf:Seq>
    </items>
    <cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
  </channel>
  <item rdf:about="https://www.oaepublish.com/articles/ir.2026.09">
    <title>Suspension parameter identification method for rail transit vehicles using an AO-GRBF surrogate model and non-dominated sorting genetic algorithm</title>
    <link>https://www.oaepublish.com/articles/ir.2026.09</link>
    <description>&lt;p&gt;The suspension systems of rail transit vehicles are crucial components that connect the vehicle body to the wheelsets, designed to reduce vibrations and shocks induced by track irregularities. During extended service periods, suspension parameters such as stiffness and damping coefficients are inevitably altered due to material aging and temperature fluctuations, rendering vehicle control strategies based on original design values ineffective. This leads to increased vibrations, hunting instability, and potential safety hazards during operation. Therefore, a suspension parameter identification method is proposed that combines an adaptively optimized Gaussian radial basis function (AO-GRBF) surrogate model with the non-dominated sorting genetic algorithm II (NSGA-II) to address these challenges. First, a mechanism- and data-driven AO-GRBF model is constructed to approximate the nonlinear relationship between suspension parameters and vehicle vibration responses, thereby overcoming the high computational cost associated with conventional multibody dynamics models. Then, the NSGA-II algorithm is employed to identify optimal suspension parameters by minimizing the deviation between AO-GRBF surrogate predictions and field-measured responses. Validation using field measurements indicates that the proposed method outperforms existing approaches, such as the radial basis function-high-dimensional model representation (RBF-HDMR) method and the long short-term memory (LSTM) method, in terms of correlation and error metrics related to lateral and vertical vibration accelerations.&lt;/p&gt;</description>
    <pubDate>Mon, 30 Mar 2026 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[<p><b>Suspension parameter identification method for rail transit vehicles using an AO-GRBF surrogate model and non-dominated sorting genetic algorithm</b></p><p>Intelligence & Robotics <a href="https://www.oaepublish.com/articles/ir.2026.09">doi: 10.20517/ir.2026.09</a></p><p>Authors: Shiyi Jiang, Jianhua Liu, Simon X. Yang</p><p>The suspension systems of rail transit vehicles are crucial components that connect the vehicle body to the wheelsets, designed to reduce vibrations and shocks induced by track irregularities. During extended service periods, suspension parameters such as stiffness and damping coefficients are inevitably altered due to material aging and temperature fluctuations, rendering vehicle control strategies based on original design values ineffective. This leads to increased vibrations, hunting instability, and potential safety hazards during operation. Therefore, a suspension parameter identification method is proposed that combines an adaptively optimized Gaussian radial basis function (AO-GRBF) surrogate model with the non-dominated sorting genetic algorithm II (NSGA-II) to address these challenges. First, a mechanism- and data-driven AO-GRBF model is constructed to approximate the nonlinear relationship between suspension parameters and vehicle vibration responses, thereby overcoming the high computational cost associated with conventional multibody dynamics models. Then, the NSGA-II algorithm is employed to identify optimal suspension parameters by minimizing the deviation between AO-GRBF surrogate predictions and field-measured responses. Validation using field measurements indicates that the proposed method outperforms existing approaches, such as the radial basis function-high-dimensional model representation (RBF-HDMR) method and the long short-term memory (LSTM) method, in terms of correlation and error metrics related to lateral and vertical vibration accelerations.</p>]]></content:encoded>
    <dc:title>Suspension parameter identification method for rail transit vehicles using an AO-GRBF surrogate model and non-dominated sorting genetic algorithm</dc:title>
    <dc:creator>Shiyi Jiang</dc:creator>
    <dc:creator>Jianhua Liu</dc:creator>
    <dc:creator>Simon X. Yang</dc:creator>
    <dc:identifier>doi: 10.20517/ir.2026.09</dc:identifier>
    <dc:source/>
    <dc:date>2026-03-30</dc:date>
    <prism:publicationName/>
    <prism:publicationDate>2026-03-30</prism:publicationDate>
    <prism:volume>6</prism:volume>
    <prism:number>1</prism:number>
    <prism:section>Research Article</prism:section>
    <prism:startingPage>163</prism:startingPage>
    <prism:doi>10.20517/ir.2026.09</prism:doi>
    <prism:url>https://www.oaepublish.com/articles/ir.2026.09</prism:url>
    <cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
  </item>
  <item rdf:about="https://www.oaepublish.com/articles/ir.2026.07">
    <title>Optimising human-robot collaborative teleoperation using adaptive fuzzy logic control and real-time motion intention estimation</title>
    <link>https://www.oaepublish.com/articles/ir.2026.07</link>
    <description>&lt;p&gt;Human-robot collaborative (HRC) teleoperation requires seamless integration of intention understanding and adaptive control to achieve natural, efficient, and reliable remote manipulation. Existing teleoperation models (TOMs) suffer from limited intention-prediction capabilities, static control parameters, and inadequate adaptation to dynamic operational conditions, resulting in reduced task performance and increased cognitive burden for the operator. A TOM is proposed that combines real-time motion intention estimation using long short-term memory (LSTM) with adaptive fuzzy logic control to enhance human-robot collaboration. The proposed TOM leverages multimodal bio-signals, including electromyography, inertial measurement units, and joint kinematics, to decode operator intentions via temporal feature extraction and sequential classification. The LSTM-based classifier processes normalised feature vectors to predict discrete motion intentions with 91.4% accuracy across varying task complexities. Experimental validation using a 6-degree-of-freedom collaborative manipulator and 12 human participants demonstrates significant performance improvements over traditional TOMs. The integrated system achieved a 93.5% task-completion success rate, 89% faster execution times, a 60% improvement in placement accuracy, and a 47% reduction in operator mental workload across low, moderate, and high-complexity manipulation tasks. Statistical analysis confirms highly significant improvements (&lt;i&gt;P&lt;/i&gt; &lt; 0.001) with large effect sizes across all performance metrics. The proposed model addresses fundamental limitations in HRC teleoperation by providing temporally-aware intention recognition and context-sensitive adaptive control, enabling more natural and efficient collaborative manipulation in remote and hazardous environments.&lt;/p&gt;</description>
    <pubDate>Fri, 27 Mar 2026 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[<p><b>Optimising human-robot collaborative teleoperation using adaptive fuzzy logic control and real-time motion intention estimation</b></p><p>Intelligence & Robotics <a href="https://www.oaepublish.com/articles/ir.2026.07">doi: 10.20517/ir.2026.07</a></p><p>Authors: Nabeel S. Alsharafa, Karthik Elangovan, L. Arulmozhiselvan, Rajendra Kumar Ganiya, Aseel Smerat, Firas Tayseer Ayasrah, M Mary Victoria Florence, Sudhakar Sengan</p><p>Human-robot collaborative (HRC) teleoperation requires seamless integration of intention understanding and adaptive control to achieve natural, efficient, and reliable remote manipulation. Existing teleoperation models (TOMs) suffer from limited intention-prediction capabilities, static control parameters, and inadequate adaptation to dynamic operational conditions, resulting in reduced task performance and increased cognitive burden for the operator. A TOM is proposed that combines real-time motion intention estimation using long short-term memory (LSTM) with adaptive fuzzy logic control to enhance human-robot collaboration. The proposed TOM leverages multimodal bio-signals, including electromyography, inertial measurement units, and joint kinematics, to decode operator intentions via temporal feature extraction and sequential classification. The LSTM-based classifier processes normalised feature vectors to predict discrete motion intentions with 91.4% accuracy across varying task complexities. Experimental validation using a 6-degree-of-freedom collaborative manipulator and 12 human participants demonstrates significant performance improvements over traditional TOMs. The integrated system achieved a 93.5% task-completion success rate, 89% faster execution times, a 60% improvement in placement accuracy, and a 47% reduction in operator mental workload across low, moderate, and high-complexity manipulation tasks. Statistical analysis confirms highly significant improvements (<i>P</i> &lt; 0.001) with large effect sizes across all performance metrics. The proposed model addresses fundamental limitations in HRC teleoperation by providing temporally-aware intention recognition and context-sensitive adaptive control, enabling more natural and efficient collaborative manipulation in remote and hazardous environments.</p>]]></content:encoded>
    <dc:title>Optimising human-robot collaborative teleoperation using adaptive fuzzy logic control and real-time motion intention estimation</dc:title>
    <dc:creator>Nabeel S. Alsharafa</dc:creator>
    <dc:creator>Karthik Elangovan</dc:creator>
    <dc:creator>L. Arulmozhiselvan</dc:creator>
    <dc:creator>Rajendra Kumar Ganiya</dc:creator>
    <dc:creator>Aseel Smerat</dc:creator>
    <dc:creator>Firas Tayseer Ayasrah</dc:creator>
    <dc:creator>M Mary Victoria Florence</dc:creator>
    <dc:creator>Sudhakar Sengan</dc:creator>
    <dc:identifier>doi: 10.20517/ir.2026.07</dc:identifier>
    <dc:source/>
    <dc:date>2026-03-27</dc:date>
    <prism:publicationName/>
    <prism:publicationDate>2026-03-27</prism:publicationDate>
    <prism:volume>6</prism:volume>
    <prism:number>1</prism:number>
    <prism:section>Research Article</prism:section>
    <prism:startingPage>120</prism:startingPage>
    <prism:doi>10.20517/ir.2026.07</prism:doi>
    <prism:url>https://www.oaepublish.com/articles/ir.2026.07</prism:url>
    <cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
  </item>
  <item rdf:about="https://www.oaepublish.com/articles/ir.2026.08">
    <title>Neurodynamics- and observer-based distributed robust formation control for mobile robots</title>
    <link>https://www.oaepublish.com/articles/ir.2026.08</link>
    <description>&lt;p&gt;The formation control of multiple robot systems presents significant challenges due to practical constraints such as disturbances, speed discontinuities, velocity constraints, and incomplete state information. This paper introduces a novel biologically inspired control scheme that effectively addresses these challenges. First, utilizing the cascade design technique, a distributed estimator is presented to provide smooth estimates of the leader’s state without requiring derivative information. Subsequently, a nonlinear state estimator is proposed to provide accurate estimates of both states and disturbances. After that, a biologically inspired kinematic controller is developed that effectively resolves the speed surge and velocity constraint issues. Following the kinematic control design, a robust dynamic controller is developed based on the observed state to enhance robustness against disturbances. Finally, extensive simulation studies validate the effectiveness of the proposed approach and verify the theoretical results.&lt;/p&gt;</description>
    <pubDate>Fri, 27 Mar 2026 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[<p><b>Neurodynamics- and observer-based distributed robust formation control for mobile robots</b></p><p>Intelligence & Robotics <a href="https://www.oaepublish.com/articles/ir.2026.08">doi: 10.20517/ir.2026.08</a></p><p>Authors: Zhe Xu, Zhihui Xia, Weigang Li, Simon X. Yang</p><p>The formation control of multiple robot systems presents significant challenges due to practical constraints such as disturbances, speed discontinuities, velocity constraints, and incomplete state information. This paper introduces a novel biologically inspired control scheme that effectively addresses these challenges. First, utilizing the cascade design technique, a distributed estimator is presented to provide smooth estimates of the leader’s state without requiring derivative information. Subsequently, a nonlinear state estimator is proposed to provide accurate estimates of both states and disturbances. After that, a biologically inspired kinematic controller is developed that effectively resolves the speed surge and velocity constraint issues. Following the kinematic control design, a robust dynamic controller is developed based on the observed state to enhance robustness against disturbances. Finally, extensive simulation studies validate the effectiveness of the proposed approach and verify the theoretical results.</p>]]></content:encoded>
    <dc:title>Neurodynamics- and observer-based distributed robust formation control for mobile robots</dc:title>
    <dc:creator>Zhe Xu</dc:creator>
    <dc:creator>Zhihui Xia</dc:creator>
    <dc:creator>Weigang Li</dc:creator>
    <dc:creator>Simon X. Yang</dc:creator>
    <dc:identifier>doi: 10.20517/ir.2026.08</dc:identifier>
    <dc:source/>
    <dc:date>2026-03-27</dc:date>
    <prism:publicationName/>
    <prism:publicationDate>2026-03-27</prism:publicationDate>
    <prism:volume>6</prism:volume>
    <prism:number>1</prism:number>
    <prism:section>Research Article</prism:section>
    <prism:startingPage>148</prism:startingPage>
    <prism:doi>10.20517/ir.2026.08</prism:doi>
    <prism:url>https://www.oaepublish.com/articles/ir.2026.08</prism:url>
    <cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
  </item>
  <item rdf:about="https://www.oaepublish.com/articles/ir.2026.06">
    <title>Model predictive tracking control based on adaptive sliding mode constraints for unmanned underwater vehicles</title>
    <link>https://www.oaepublish.com/articles/ir.2026.06</link>
    <description>&lt;p&gt;This study presents an improved model predictive control (MPC) approach for unmanned underwater vehicle trajectory tracking, specifically in an environment with ocean current disturbance. The proposed control strategy consists mainly of two MPC frameworks. Each MPC framework additionally attaches a nonlinear constraint used to further optimize the results. The constraint of the first part uses the Lyapunov direct method, while the constraint in the second part is based on the adaptive sliding mode controller, which has a decisive impact on the performance of the whole controller. These constraints give the system the ability to optimize the force output, increase the robustness, and reduce the tracking error. To evaluate the performance of the proposed controller, simulation experiments are conducted, comparing it with commonly used controllers. The results show the characteristics of the proposed method, including stability in the presence of undetectable disturbances and the advantage of effectively mitigating thrust saturation and oscillation caused by motion coupling.&lt;/p&gt;</description>
    <pubDate>Sat, 28 Feb 2026 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[<p><b>Model predictive tracking control based on adaptive sliding mode constraints for unmanned underwater vehicles</b></p><p>Intelligence & Robotics <a href="https://www.oaepublish.com/articles/ir.2026.06">doi: 10.20517/ir.2026.06</a></p><p>Authors: Yifeng Zhang, Daqi Zhu, Mingzhi Chen, Simon X. Yang</p><p>This study presents an improved model predictive control (MPC) approach for unmanned underwater vehicle trajectory tracking, specifically in an environment with ocean current disturbance. The proposed control strategy consists mainly of two MPC frameworks. Each MPC framework additionally attaches a nonlinear constraint used to further optimize the results. The constraint of the first part uses the Lyapunov direct method, while the constraint in the second part is based on the adaptive sliding mode controller, which has a decisive impact on the performance of the whole controller. These constraints give the system the ability to optimize the force output, increase the robustness, and reduce the tracking error. To evaluate the performance of the proposed controller, simulation experiments are conducted, comparing it with commonly used controllers. The results show the characteristics of the proposed method, including stability in the presence of undetectable disturbances and the advantage of effectively mitigating thrust saturation and oscillation caused by motion coupling.</p>]]></content:encoded>
    <dc:title>Model predictive tracking control based on adaptive sliding mode constraints for unmanned underwater vehicles</dc:title>
    <dc:creator>Yifeng Zhang</dc:creator>
    <dc:creator>Daqi Zhu</dc:creator>
    <dc:creator>Mingzhi Chen</dc:creator>
    <dc:creator>Simon X. Yang</dc:creator>
    <dc:identifier>doi: 10.20517/ir.2026.06</dc:identifier>
    <dc:source/>
    <dc:date>2026-02-28</dc:date>
    <prism:publicationName/>
    <prism:publicationDate>2026-02-28</prism:publicationDate>
    <prism:volume>6</prism:volume>
    <prism:number>1</prism:number>
    <prism:section>Research Article</prism:section>
    <prism:startingPage>101</prism:startingPage>
    <prism:doi>10.20517/ir.2026.06</prism:doi>
    <prism:url>https://www.oaepublish.com/articles/ir.2026.06</prism:url>
    <cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
  </item>
  <item rdf:about="https://www.oaepublish.com/articles/ir.2026.05">
    <title>Embodied artificial intelligence as a paradigm shift for human–robot collaboration</title>
    <link>https://www.oaepublish.com/articles/ir.2026.05</link>
    <description>&lt;p&gt;Human–robot collaboration (HRC) has traditionally relied on instruction-driven paradigms in which humans specify goals and robots execute predefined tasks. Recent advances in embodied artificial intelligence (Embodied AI) challenge this model by grounding intelligence in physical embodiment and continuous interaction with the environment. This editorial positions Embodied AI as a paradigm shift in HRC, redefining collaboration as a physically interactive and mutually adaptive process. It examines the key challenges introduced by this shift and outlines emerging directions for future embodied HRC.&lt;/p&gt;</description>
    <pubDate>Thu, 26 Feb 2026 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[<p><b>Embodied artificial intelligence as a paradigm shift for human–robot collaboration</b></p><p>Intelligence & Robotics <a href="https://www.oaepublish.com/articles/ir.2026.05">doi: 10.20517/ir.2026.05</a></p><p>Authors: Junfei Li, Simon X. Yang</p><p>Human–robot collaboration (HRC) has traditionally relied on instruction-driven paradigms in which humans specify goals and robots execute predefined tasks. Recent advances in embodied artificial intelligence (Embodied AI) challenge this model by grounding intelligence in physical embodiment and continuous interaction with the environment. This editorial positions Embodied AI as a paradigm shift in HRC, redefining collaboration as a physically interactive and mutually adaptive process. It examines the key challenges introduced by this shift and outlines emerging directions for future embodied HRC.</p>]]></content:encoded>
    <dc:title>Embodied artificial intelligence as a paradigm shift for human–robot collaboration</dc:title>
    <dc:creator>Junfei Li</dc:creator>
    <dc:creator>Simon X. Yang</dc:creator>
    <dc:identifier>doi: 10.20517/ir.2026.05</dc:identifier>
    <dc:source/>
    <dc:date>2026-02-26</dc:date>
    <prism:publicationName/>
    <prism:publicationDate>2026-02-26</prism:publicationDate>
    <prism:volume>6</prism:volume>
    <prism:number>1</prism:number>
    <prism:section>Editorial</prism:section>
    <prism:startingPage>97</prism:startingPage>
    <prism:doi>10.20517/ir.2026.05</prism:doi>
    <prism:url>https://www.oaepublish.com/articles/ir.2026.05</prism:url>
    <cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
  </item>
  <item rdf:about="https://www.oaepublish.com/articles/ir.2026.04">
    <title>Modeling and prediction of the ionosphere with deep learning: a review</title>
    <link>https://www.oaepublish.com/articles/ir.2026.04</link>
    <description>&lt;p&gt;The ionosphere plays a crucial role in the transmission and propagation of space signals. As a component of the upper atmosphere, it exhibits distinct spatio-temporal variations and is influenced by solar and geomagnetic activities. Accurately modeling and predicting the ionosphere remains a significant challenge. Recent advancements in deep learning techniques have provided valuable insights into these challenges, offering new approaches for spatio-temporal ionospheric modeling and prediction. By integrating multiple observations from both space-borne and ground-based stations, high-resolution digital models of the ionosphere can be constructed using convolutional and recurrent neural networks. This paper reviews the recent progress in ionospheric modeling and prediction using deep learning networks, discusses the advantages of deep learning models over traditional empirical models, and outlines future directions to address the remaining challenges in this field.&lt;/p&gt;</description>
    <pubDate>Fri, 13 Feb 2026 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[<p><b>Modeling and prediction of the ionosphere with deep learning: a review</b></p><p>Intelligence & Robotics <a href="https://www.oaepublish.com/articles/ir.2026.04">doi: 10.20517/ir.2026.04</a></p><p>Authors: Yang Liu, Kunlin Yang, Lingfeng Sun, Jinling Wang, Artem Smirnov, Chao Xiong</p><p>The ionosphere plays a crucial role in the transmission and propagation of space signals. As a component of the upper atmosphere, it exhibits distinct spatio-temporal variations and is influenced by solar and geomagnetic activities. Accurately modeling and predicting the ionosphere remains a significant challenge. Recent advancements in deep learning techniques have provided valuable insights into these challenges, offering new approaches for spatio-temporal ionospheric modeling and prediction. By integrating multiple observations from both space-borne and ground-based stations, high-resolution digital models of the ionosphere can be constructed using convolutional and recurrent neural networks. This paper reviews the recent progress in ionospheric modeling and prediction using deep learning networks, discusses the advantages of deep learning models over traditional empirical models, and outlines future directions to address the remaining challenges in this field.</p>]]></content:encoded>
    <dc:title>Modeling and prediction of the ionosphere with deep learning: a review</dc:title>
    <dc:creator>Yang Liu</dc:creator>
    <dc:creator>Kunlin Yang</dc:creator>
    <dc:creator>Lingfeng Sun</dc:creator>
    <dc:creator>Jinling Wang</dc:creator>
    <dc:creator>Artem Smirnov</dc:creator>
    <dc:creator>Chao Xiong</dc:creator>
    <dc:identifier>doi: 10.20517/ir.2026.04</dc:identifier>
    <dc:source/>
    <dc:date>2026-02-13</dc:date>
    <prism:publicationName/>
    <prism:publicationDate>2026-02-13</prism:publicationDate>
    <prism:volume>6</prism:volume>
    <prism:number>1</prism:number>
    <prism:section>Review</prism:section>
    <prism:startingPage/>
    <prism:doi>10.20517/ir.2026.04</prism:doi>
    <prism:url>https://www.oaepublish.com/articles/ir.2026.04</prism:url>
    <cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
  </item>
  <item rdf:about="https://www.oaepublish.com/articles/ir.2026.03">
    <title>Key technologies of bionic inchworm robots: a survey</title>
    <link>https://www.oaepublish.com/articles/ir.2026.03</link>
    <description>&lt;p&gt;The bionic inchworm robot is known for its flexible and adaptable locomotion and has attracted growing interest in agriculture, forestry, and infrastructure inspection. This paper reviews global research on such robots, focusing on actuation mechanisms, attachment strategies, kinematic modeling, control methods and locomotion performance. By systematically comparing existing studies, it summarizes key technologies, identifies current challenges and outlines future research directions. The goal is to provide a clear perspective that supports further advances in inchworm-inspired robotic systems.&lt;/p&gt;</description>
    <pubDate>Thu, 12 Feb 2026 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[<p><b>Key technologies of bionic inchworm robots: a survey</b></p><p>Intelligence & Robotics <a href="https://www.oaepublish.com/articles/ir.2026.03">doi: 10.20517/ir.2026.03</a></p><p>Authors: Zhiwei Yu, Shuoyan Ma, Qian Zhang, Zhiyuan Liu, Yixing Shi, Muyuan Li, Zhengxin Yu</p><p>The bionic inchworm robot is known for its flexible and adaptable locomotion and has attracted growing interest in agriculture, forestry, and infrastructure inspection. This paper reviews global research on such robots, focusing on actuation mechanisms, attachment strategies, kinematic modeling, control methods and locomotion performance. By systematically comparing existing studies, it summarizes key technologies, identifies current challenges and outlines future research directions. The goal is to provide a clear perspective that supports further advances in inchworm-inspired robotic systems.</p>]]></content:encoded>
    <dc:title>Key technologies of bionic inchworm robots: a survey</dc:title>
    <dc:creator>Zhiwei Yu</dc:creator>
    <dc:creator>Shuoyan Ma</dc:creator>
    <dc:creator>Qian Zhang</dc:creator>
    <dc:creator>Zhiyuan Liu</dc:creator>
    <dc:creator>Yixing Shi</dc:creator>
    <dc:creator>Muyuan Li</dc:creator>
    <dc:creator>Zhengxin Yu</dc:creator>
    <dc:identifier>doi: 10.20517/ir.2026.03</dc:identifier>
    <dc:source/>
    <dc:date>2026-02-12</dc:date>
    <prism:publicationName/>
    <prism:publicationDate>2026-02-12</prism:publicationDate>
    <prism:volume>6</prism:volume>
    <prism:number>1</prism:number>
    <prism:section>Review</prism:section>
    <prism:startingPage/>
    <prism:doi>10.20517/ir.2026.03</prism:doi>
    <prism:url>https://www.oaepublish.com/articles/ir.2026.03</prism:url>
    <cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
  </item>
  <item rdf:about="https://www.oaepublish.com/articles/ir.2026.02">
    <title>Subterranean roadway deformation detection based on LiDAR scanning and fusion filtering</title>
    <link>https://www.oaepublish.com/articles/ir.2026.02</link>
    <description>&lt;p&gt;Underground engineering is becoming increasingly important in modern urban construction and mine development. However, the shape of underground roadways may deform elastically or plastically due to geological conditions and accident loads, a phenomenon that cannot be ignored. Therefore, this paper proposes a roadway deformation detection method based on laser scanning. First, the working principle of the point cloud denoising and downsampling method is explained. To overcome the limitations of this method, the paper presents a point cloud denoising approach that combines statistical and median filtering. Additionally, it introduces a voxelised grid-downsampling technique based on density constraints and the centre of gravity. Next, the bidirectional projection method is used to determine the roadway’s central axis. Then, CloudCompare point cloud processing software is used to segment the point cloud, extract the roadway section, and fit a contour curve. Finally, the methods for extracting roadway deformation from processed point cloud data and for detecting and analysing it are introduced. Experiments on roadway deformation detection are conducted on an inspection robot experimental platform to verify the feasibility of the overall scheme. Experimental results indicate that the measurement error of light detection and ranging scanning for tunnel contour is less than 2 mm.&lt;/p&gt;</description>
    <pubDate>Thu, 22 Jan 2026 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[<p><b>Subterranean roadway deformation detection based on LiDAR scanning and fusion filtering</b></p><p>Intelligence & Robotics <a href="https://www.oaepublish.com/articles/ir.2026.02">doi: 10.20517/ir.2026.02</a></p><p>Authors: Yuming Cui, Guozheng Yang, Yuanyuan Dai, Kewen Yuan, Xiaohui Liu</p><p>Underground engineering is becoming increasingly important in modern urban construction and mine development. However, the shape of underground roadways may deform elastically or plastically due to geological conditions and accident loads, a phenomenon that cannot be ignored. Therefore, this paper proposes a roadway deformation detection method based on laser scanning. First, the working principle of the point cloud denoising and downsampling method is explained. To overcome the limitations of this method, the paper presents a point cloud denoising approach that combines statistical and median filtering. Additionally, it introduces a voxelised grid-downsampling technique based on density constraints and the centre of gravity. Next, the bidirectional projection method is used to determine the roadway’s central axis. Then, CloudCompare point cloud processing software is used to segment the point cloud, extract the roadway section, and fit a contour curve. Finally, the methods for extracting roadway deformation from processed point cloud data and for detecting and analysing it are introduced. Experiments on roadway deformation detection are conducted on an inspection robot experimental platform to verify the feasibility of the overall scheme. Experimental results indicate that the measurement error of light detection and ranging scanning for tunnel contour is less than 2 mm.</p>]]></content:encoded>
    <dc:title>Subterranean roadway deformation detection based on LiDAR scanning and fusion filtering</dc:title>
    <dc:creator>Yuming Cui</dc:creator>
    <dc:creator>Guozheng Yang</dc:creator>
    <dc:creator>Yuanyuan Dai</dc:creator>
    <dc:creator>Kewen Yuan</dc:creator>
    <dc:creator>Xiaohui Liu</dc:creator>
    <dc:identifier>doi:10.20517/ir.2026.02</dc:identifier>
    <dc:source/>
    <dc:date>2026-01-22</dc:date>
    <prism:publicationName/>
    <prism:publicationDate>2026-01-22</prism:publicationDate>
    <prism:volume>6</prism:volume>
    <prism:number>1</prism:number>
    <prism:section>Research Article</prism:section>
    <prism:startingPage>19</prism:startingPage>
    <prism:doi>10.20517/ir.2026.02</prism:doi>
    <prism:url>https://www.oaepublish.com/articles/ir.2026.02</prism:url>
    <cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
  </item>
  <item rdf:about="https://www.oaepublish.com/articles/ir.2026.01">
    <title>AI-empowered intelligence in industrial robotics: technologies, challenges, and emerging trends</title>
    <link>https://www.oaepublish.com/articles/ir.2026.01</link>
    <description>&lt;p&gt;Artificial intelligence (AI) is profoundly reshaping the technological framework of industrial robotics, driving its transition from pre-programmed automation to autonomous, adaptive agents. This paper systematically reviews the key advancements of AI across three core dimensions of intelligence: perception, decision-making, and execution. Analysis indicates that AI is propelling industrial robots from tools executing predefined tasks towards intelligent partners capable of adapting to unstructured environments, autonomously planning amid dynamic changes, and engaging in nuanced interactions with the physical world. This evolution reveals a shift from optimizing specific skills towards developing rudimentary task-level cognitive reasoning capabilities. Nevertheless, fundamental challenges persist for industrial-scale deployment, including model generalization capabilities, long-term robustness, and human-machine trust. Collectively, these advancements are shaping a new generation of intelligent industrial robotic systems that are more adaptable and capable of deeper collaboration with humans.&lt;/p&gt;</description>
    <pubDate>Wed, 21 Jan 2026 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[<p><b>AI-empowered intelligence in industrial robotics: technologies, challenges, and emerging trends</b></p><p>Intelligence & Robotics <a href="https://www.oaepublish.com/articles/ir.2026.01">doi: 10.20517/ir.2026.01</a></p><p>Authors: Yifan Chen, Tao Ren, Yujia Li, Gang Jiang, Qingyou Liu, Yonghua Chen, Simon X. Yang</p><p>Artificial intelligence (AI) is profoundly reshaping the technological framework of industrial robotics, driving its transition from pre-programmed automation to autonomous, adaptive agents. This paper systematically reviews the key advancements of AI across three core dimensions of intelligence: perception, decision-making, and execution. Analysis indicates that AI is propelling industrial robots from tools executing predefined tasks towards intelligent partners capable of adapting to unstructured environments, autonomously planning amid dynamic changes, and engaging in nuanced interactions with the physical world. This evolution reveals a shift from optimizing specific skills towards developing rudimentary task-level cognitive reasoning capabilities. Nevertheless, fundamental challenges persist for industrial-scale deployment, including model generalization capabilities, long-term robustness, and human-machine trust. Collectively, these advancements are shaping a new generation of intelligent industrial robotic systems that are more adaptable and capable of deeper collaboration with humans.</p>]]></content:encoded>
    <dc:title>AI-empowered intelligence in industrial robotics: technologies, challenges, and emerging trends</dc:title>
    <dc:creator>Yifan Chen</dc:creator>
    <dc:creator>Tao Ren</dc:creator>
    <dc:creator>Yujia Li</dc:creator>
    <dc:creator>Gang Jiang</dc:creator>
    <dc:creator>Qingyou Liu</dc:creator>
    <dc:creator>Yonghua Chen</dc:creator>
    <dc:creator>Simon X. Yang</dc:creator>
    <dc:identifier>doi:10.20517/ir.2026.01</dc:identifier>
    <dc:source/>
    <dc:date>2026-01-21</dc:date>
    <prism:publicationName/>
    <prism:publicationDate>2026-01-21</prism:publicationDate>
    <prism:volume>6</prism:volume>
    <prism:number>1</prism:number>
    <prism:section>Review</prism:section>
    <prism:startingPage>1</prism:startingPage>
    <prism:doi>10.20517/ir.2026.01</prism:doi>
    <prism:url>https://www.oaepublish.com/articles/ir.2026.01</prism:url>
    <cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
  </item>
  <cc:License rdf:about="https://creativecommons.org/licenses/by/4.0/">
    <cc:permits rdf:resource="https://creativecommons.org/ns#Reproduction"/>
    <cc:permits rdf:resource="https://creativecommons.org/ns#Distribution"/>
    <cc:permits rdf:resource="https://creativecommons.org/ns#DerivativeWorks"/>
  </cc:License>
</rdf:RDF>
