<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcterms="http://purl.org/dc/terms/" xmlns:cc="http://web.resource.org/cc/" xmlns:prism="http://prismstandard.org/namespaces/basic/2.0/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:admin="http://webns.net/mvcb/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel rdf:about="https://www.oaepublish.com/ais">
    <title>Artificial Intelligence Surgery</title>
    <description>Latest open access articles published in Artificial Intelligence Surgery at https://www.oaepublish.com/ais</description>
    <link>https://www.oaepublish.com/ais</link>
    <admin:generatorAgent rdf:resource="https://www.oaepublish.com/ais"/>
    <admin:errorReportsTo rdf:resource="mailto:editorialoffice@aisjournal.net"/>
    <dc:publisher>OAE Publishing Inc.</dc:publisher>
    <dc:language>en</dc:language>
    <dc:rights>Creative Commons Attribution (CC-BY)</dc:rights>
    <prism:copyright>OAE Publishing Inc.</prism:copyright>
    <prism:rightsAgent>editorialoffice@aisjournal.net</prism:rightsAgent>
    <image rdf:resource="https://i.oaes.cc/upload/journal_logo/ais.png"/>
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://www.oaepublish.com/articles/ais.2025.120"/>
        <rdf:li rdf:resource="https://www.oaepublish.com/articles/ais.2026.12"/>
        <rdf:li rdf:resource="https://www.oaepublish.com/articles/ais.2025.70"/>
        <rdf:li rdf:resource="https://www.oaepublish.com/articles/ais.2025.76"/>
        <rdf:li rdf:resource="https://www.oaepublish.com/articles/ais.2025.93"/>
        <rdf:li rdf:resource="https://www.oaepublish.com/articles/ais.2025.113"/>
        <rdf:li rdf:resource="https://www.oaepublish.com/articles/ais.2025.26"/>
        <rdf:li rdf:resource="https://www.oaepublish.com/articles/ais.2025.67"/>
        <rdf:li rdf:resource="https://www.oaepublish.com/articles/ais.2025.68"/>
      </rdf:Seq>
    </items>
    <cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
  </channel>
  <item rdf:about="https://www.oaepublish.com/articles/ais.2025.120">
    <title>A scoping review of artificial intelligence in living donor liver transplantation - current status and untapped potential</title>
    <link>https://www.oaepublish.com/articles/ais.2025.120</link>
    <description>&lt;p&gt;While the surgical technicalities of living donor liver transplantation (LDLT) have matured since its development several decades ago, clinical challenges remain in pre-transplantation and post-transplantation management. The ability of artificial intelligence (AI) to perform sophisticated analyses of complex non-linear relationships holds potential to aid clinical decision-making. This is particularly relevant in LDLT, where grafts are a precious resource within a dynamic setting of donor, recipient, and procedural factors that must be considered. Clinical issues of graft and patient survival, patient selection and stratification, survival predictors for expanded transplantation criteria, and post-transplantation outcomes remain relevant challenges that benefit from analysis with sophisticated AI models. This scoping review summarised 16 AI studies in pre- and post-transplantation assessment and transplant oncology, providing an overview of the current landscape and future directions for development.&lt;/p&gt;</description>
    <pubDate>Tue, 31 Mar 2026 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[<p><b>A scoping review of artificial intelligence in living donor liver transplantation - current status and untapped potential</b></p><p>Cancers <a href="https://www.oaepublish.com/articles/ais.2025.120">doi: 10.20517/ais.2025.120</a></p><p>Authors: Karin K. Y. Ho,Albert C. Y. Chan</p><p><p>While the surgical technicalities of living donor liver transplantation (LDLT) have matured since its development several decades ago, clinical challenges remain in pre-transplantation and post-transplantation management. The ability of artificial intelligence (AI) to perform sophisticated analyses of complex non-linear relationships holds potential to aid clinical decision-making. This is particularly relevant in LDLT, where grafts are a precious resource within a dynamic setting of donor, recipient, and procedural factors that must be considered. Clinical issues of graft and patient survival, patient selection and stratification, survival predictors for expanded transplantation criteria, and post-transplantation outcomes remain relevant challenges that benefit from analysis with sophisticated AI models. This scoping review summarised 16 AI studies in pre- and post-transplantation assessment and transplant oncology, providing an overview of the current landscape and future directions for development.</p></p>]]></content:encoded>
    <dc:title>A scoping review of artificial intelligence in living donor liver transplantation - current status and untapped potential</dc:title>
    <dc:creator>Karin K. Y. Ho</dc:creator>
    <dc:creator>Albert C. Y. Chan</dc:creator>
    <dc:identifier>doi: 10.20517/ais.2025.120</dc:identifier>
    <dc:source>Artificial Intelligence Surgery</dc:source>
    <dc:date>2026-03-31</dc:date>
    <prism:publicationName>Artificial Intelligence Surgery</prism:publicationName>
    <prism:publicationDate>2026-03-31</prism:publicationDate>
    <prism:volume>6</prism:volume>
    <prism:number>1</prism:number>
    <prism:section>Review</prism:section>
    <prism:startingPage>192</prism:startingPage>
    <prism:doi>10.20517/ais.2025.120</prism:doi>
    <prism:url>https://www.oaepublish.com/articles/ais.2025.120</prism:url>
    <cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
  </item>
  <item rdf:about="https://www.oaepublish.com/articles/ais.2026.12">
    <title>Correction: AIONS Consensus Conference on Definitions of Artificial Intelligence Surgery, Surgomics and Robotics (&lt;i&gt;Art Int Surg&lt;/i&gt;. 2026;6:98-113. DOI:10.20517/ais.2025.113)</title>
    <link>https://www.oaepublish.com/articles/ais.2026.12</link>
    <description/>
    <pubDate>Fri, 27 Mar 2026 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[<p><b>Correction: AIONS Consensus Conference on Definitions of Artificial Intelligence Surgery, Surgomics and Robotics (<i>Art Int Surg</i>. 2026;6:98-113. DOI:10.20517/ais.2025.113)</b></p><p>Cancers <a href="https://www.oaepublish.com/articles/ais.2026.12">doi: 10.20517/ais.2026.12</a></p><p>Authors: Andrew Gumbs,Michele Diana,Karol Rawicz-Pruszyński,Gaya Spolverato,Isabella Frigerio,Mohammad Abu Hilal,Elisa Bannone,Roland Croner,Francesca Dal Mas,Belinda De Simone,Michael Friebe,Francesco Giovinazzo,S. Vincent Grasso,Takeaki Ishizawa,Konrad Karcz,Zain Khalpey,Luca Milone,Nouredin Messaoudi,M. Mahir Ozmen,Peter G. Passias,Niki Rashidian,Sharona Ross,Thomas Schnelldorfer,Amir Szold,Zbigniew Nawrat,Ibrahim Dagher, ,Mohammad Abu Hilal,Fabio Ausania,Elisa Bannone,Elena Bignami,Elie Chouillard,Maria Conticchio,Roland Croner,Ibrahim Dagher,Francesca Dal Mas,Belinda De Simone,Michele Diana,Marcello Di Martino,Mathieu D’Hondt,Gianfranco Donatelli,Ahmed EL Minawi,Michael Friebe,Isabella Frigerio,Michel Gagner,Vonetta George,Suzanne Gisbertz,Francesco Giovinazzo,Luca Gordini,Mustansar Ghanzafar,S. Vincent Grasso,Andrew Gumbs,Takeaki Ishizawa,Konrad Karcz,Stephen Kavic,Zain Khalpey,Michael Kreisel,Luca Milone,Nouredin Messaoudi,Leila Mureebe,Zbigniew Nawrat,Derek O’Reilly,M. Mahir Ozmen,Peter G. Passias,Silvana Perretta,Niki Rashidian,Gianluca Rompianesi,Sharona Ross,Thomas Schnelldorfer,Vivian Strong,Gaya Spolverato,Amir Szold,Martin Teraa,Gratia Tsai,Jordi Vidal-Jove,Karol Rawicz-Pruszyński,Brandon Valencia Coronel,Teodoros Veronesi,Taiga Wakabayashi,Heather Yeo</p><p></p>]]></content:encoded>
    <dc:title>Correction: AIONS Consensus Conference on Definitions of Artificial Intelligence Surgery, Surgomics and Robotics (&lt;i&gt;Art Int Surg&lt;/i&gt;. 2026;6:98-113. DOI:10.20517/ais.2025.113)</dc:title>
    <dc:creator>Andrew Gumbs</dc:creator>
    <dc:creator>Michele Diana</dc:creator>
    <dc:creator>Karol Rawicz-Pruszyński</dc:creator>
    <dc:creator>Gaya Spolverato</dc:creator>
    <dc:creator>Isabella Frigerio</dc:creator>
    <dc:creator>Mohammad Abu Hilal</dc:creator>
    <dc:creator>Elisa Bannone</dc:creator>
    <dc:creator>Roland Croner</dc:creator>
    <dc:creator>Francesca Dal Mas</dc:creator>
    <dc:creator>Belinda De Simone</dc:creator>
    <dc:creator>Michael Friebe</dc:creator>
    <dc:creator>Francesco Giovinazzo</dc:creator>
    <dc:creator>S. Vincent Grasso</dc:creator>
    <dc:creator>Takeaki Ishizawa</dc:creator>
    <dc:creator>Konrad Karcz</dc:creator>
    <dc:creator>Zain Khalpey</dc:creator>
    <dc:creator>Luca Milone</dc:creator>
    <dc:creator>Nouredin Messaoudi</dc:creator>
    <dc:creator>M. Mahir Ozmen</dc:creator>
    <dc:creator>Peter G. Passias</dc:creator>
    <dc:creator>Niki Rashidian</dc:creator>
    <dc:creator>Sharona Ross</dc:creator>
    <dc:creator>Thomas Schnelldorfer</dc:creator>
    <dc:creator>Amir Szold</dc:creator>
    <dc:creator>Zbigniew Nawrat</dc:creator>
    <dc:creator>Ibrahim Dagher</dc:creator>
    <dc:creator>Mohammad Abu Hilal</dc:creator>
    <dc:creator>Fabio Ausania</dc:creator>
    <dc:creator>Elisa Bannone</dc:creator>
    <dc:creator>Elena Bignami</dc:creator>
    <dc:creator>Elie Chouillard</dc:creator>
    <dc:creator>Maria Conticchio</dc:creator>
    <dc:creator>Roland Croner</dc:creator>
    <dc:creator>Ibrahim Dagher</dc:creator>
    <dc:creator>Francesca Dal Mas</dc:creator>
    <dc:creator>Belinda De Simone</dc:creator>
    <dc:creator>Michele Diana</dc:creator>
    <dc:creator>Marcello Di Martino</dc:creator>
    <dc:creator>Mathieu D’Hondt</dc:creator>
    <dc:creator>Gianfranco Donatelli</dc:creator>
    <dc:creator>Ahmed EL Minawi</dc:creator>
    <dc:creator>Michael Friebe</dc:creator>
    <dc:creator>Isabella Frigerio</dc:creator>
    <dc:creator>Michel Gagner</dc:creator>
    <dc:creator>Vonetta George</dc:creator>
    <dc:creator>Suzanne Gisbertz</dc:creator>
    <dc:creator>Francesco Giovinazzo</dc:creator>
    <dc:creator>Luca Gordini</dc:creator>
    <dc:creator>Mustansar Ghanzafar</dc:creator>
    <dc:creator>S. Vincent Grasso</dc:creator>
    <dc:creator>Andrew Gumbs</dc:creator>
    <dc:creator>Takeaki Ishizawa</dc:creator>
    <dc:creator>Konrad Karcz</dc:creator>
    <dc:creator>Stephen Kavic</dc:creator>
    <dc:creator>Zain Khalpey</dc:creator>
    <dc:creator>Michael Kreisel</dc:creator>
    <dc:creator>Luca Milone</dc:creator>
    <dc:creator>Nouredin Messaoudi</dc:creator>
    <dc:creator>Leila Mureebe</dc:creator>
    <dc:creator>Zbigniew Nawrat</dc:creator>
    <dc:creator>Derek O’Reilly</dc:creator>
    <dc:creator>M. Mahir Ozmen</dc:creator>
    <dc:creator>Peter G. Passias</dc:creator>
    <dc:creator>Silvana Perretta</dc:creator>
    <dc:creator>Niki Rashidian</dc:creator>
    <dc:creator>Gianluca Rompianesi</dc:creator>
    <dc:creator>Sharona Ross</dc:creator>
    <dc:creator>Thomas Schnelldorfer</dc:creator>
    <dc:creator>Vivian Strong</dc:creator>
    <dc:creator>Gaya Spolverato</dc:creator>
    <dc:creator>Amir Szold</dc:creator>
    <dc:creator>Martin Teraa</dc:creator>
    <dc:creator>Gratia Tsai</dc:creator>
    <dc:creator>Jordi Vidal-Jove</dc:creator>
    <dc:creator>Karol Rawicz-Pruszyński</dc:creator>
    <dc:creator>Brandon Valencia Coronel</dc:creator>
    <dc:creator>Teodoros Veronesi</dc:creator>
    <dc:creator>Taiga Wakabayashi</dc:creator>
    <dc:creator>Heather Yeo</dc:creator>
    <dc:identifier>doi: 10.20517/ais.2026.12</dc:identifier>
    <dc:source>Artificial Intelligence Surgery</dc:source>
    <dc:date>2026-03-27</dc:date>
    <prism:publicationName>Artificial Intelligence Surgery</prism:publicationName>
    <prism:publicationDate>2026-03-27</prism:publicationDate>
    <prism:volume>6</prism:volume>
    <prism:number>1</prism:number>
    <prism:section>Correction</prism:section>
    <prism:startingPage>188</prism:startingPage>
    <prism:doi>10.20517/ais.2026.12</prism:doi>
    <prism:url>https://www.oaepublish.com/articles/ais.2026.12</prism:url>
    <cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
  </item>
  <item rdf:about="https://www.oaepublish.com/articles/ais.2025.70">
    <title>Applications of speech analysis in diseases’ assessment, prediction and diagnosis: a scoping review</title>
    <link>https://www.oaepublish.com/articles/ais.2025.70</link>
    <description>&lt;p&gt; &lt;b&gt;Background:&lt;/b&gt; Speech production is a coordinated physiological process and a vital digital biomarker for health assessment. Recent advances in artificial intelligence (AI), particularly in representation learning, have substantially expanded the application of speech analysis across diverse clinical domains.&lt;/p&gt;&lt;p&gt; &lt;b&gt;Methods:&lt;/b&gt; This review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR). Five major bibliographic databases were systematically searched for studies published between 2015 and 2025. Eligible studies applied AI-driven speech analysis for clinical diagnosis or monitoring, while those lacking quantitative evaluation or sufficient methodological detail were excluded.&lt;/p&gt;&lt;p&gt; &lt;b&gt;Results:&lt;/b&gt; A total of 124 studies were analyzed, covering neurological, psychiatric, and respiratory disorders. The field has transitioned from traditional machine learning with handcrafted features to deep learning and foundation models. Parkinson’s disease, Alzheimer’s disease, depression, and coronavirus disease 2019 (COVID-19) are the most frequently investigated conditions. The included studies were charted and synthesized to map disease coverage, methodological trends, and clinical application scenarios.&lt;/p&gt;&lt;p&gt; &lt;b&gt;Conclusion:&lt;/b&gt; Speech analysis offers a non-invasive approach for early disease detection and remote monitoring in telemedicine. To support clinical translation, future research should prioritize model robustness and interpretability across diverse clinical populations.&lt;/p&gt;</description>
    <pubDate>Sat, 28 Feb 2026 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[<p><b>Applications of speech analysis in diseases’ assessment, prediction and diagnosis: a scoping review</b></p><p>Cancers <a href="https://www.oaepublish.com/articles/ais.2025.70">doi: 10.20517/ais.2025.70</a></p><p>Authors: Xi Xu,Ying Zhang,Qiufei Niu,Nianjiao Long,Jianqiang Li,Linna Zhao,Jian Yin,Jijiang Yang</p><p><p> <b>Background:</b> Speech production is a coordinated physiological process and a vital digital biomarker for health assessment. Recent advances in artificial intelligence (AI), particularly in representation learning, have substantially expanded the application of speech analysis across diverse clinical domains.</p><p> <b>Methods:</b> This review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR). Five major bibliographic databases were systematically searched for studies published between 2015 and 2025. Eligible studies applied AI-driven speech analysis for clinical diagnosis or monitoring, while those lacking quantitative evaluation or sufficient methodological detail were excluded.</p><p> <b>Results:</b> A total of 124 studies were analyzed, covering neurological, psychiatric, and respiratory disorders. The field has transitioned from traditional machine learning with handcrafted features to deep learning and foundation models. Parkinson’s disease, Alzheimer’s disease, depression, and coronavirus disease 2019 (COVID-19) are the most frequently investigated conditions. The included studies were charted and synthesized to map disease coverage, methodological trends, and clinical application scenarios.</p><p> <b>Conclusion:</b> Speech analysis offers a non-invasive approach for early disease detection and remote monitoring in telemedicine. To support clinical translation, future research should prioritize model robustness and interpretability across diverse clinical populations.</p></p>]]></content:encoded>
    <dc:title>Applications of speech analysis in diseases’ assessment, prediction and diagnosis: a scoping review</dc:title>
    <dc:creator>Xi Xu</dc:creator>
    <dc:creator>Ying Zhang</dc:creator>
    <dc:creator>Qiufei Niu</dc:creator>
    <dc:creator>Nianjiao Long</dc:creator>
    <dc:creator>Jianqiang Li</dc:creator>
    <dc:creator>Linna Zhao</dc:creator>
    <dc:creator>Jian Yin</dc:creator>
    <dc:creator>Jijiang Yang</dc:creator>
    <dc:identifier>doi: 10.20517/ais.2025.70</dc:identifier>
    <dc:source>Artificial Intelligence Surgery</dc:source>
    <dc:date>2026-02-28</dc:date>
    <prism:publicationName>Artificial Intelligence Surgery</prism:publicationName>
    <prism:publicationDate>2026-02-28</prism:publicationDate>
    <prism:volume>6</prism:volume>
    <prism:number>1</prism:number>
    <prism:section>Review</prism:section>
    <prism:startingPage/>
    <prism:doi>10.20517/ais.2025.70</prism:doi>
    <prism:url>https://www.oaepublish.com/articles/ais.2025.70</prism:url>
    <cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
  </item>
  <item rdf:about="https://www.oaepublish.com/articles/ais.2025.76">
    <title>Surgical computer vision for intraoperative decision-support: a scoping review on performance metrics and readiness for real-time deployment</title>
    <link>https://www.oaepublish.com/articles/ais.2025.76</link>
    <description>&lt;p&gt; &lt;b&gt;Background:&lt;/b&gt; Real-time computer vision-based artificial intelligence (CV-AI) systems for surgical video analysis are rapidly advancing. Current evaluation strategies and clinical-readiness reporting, however, remain inconsistent. This scoping review mapped contemporary CV-AI task domains, performance metrics, and evidence of readiness for real-time intraoperative deployment within general surgery.&lt;/p&gt;&lt;p&gt; &lt;b&gt;Methods:&lt;/b&gt; This study followed Joanna Briggs Institute methodology for scoping reviews, and was reported in accordance with PRISMA-ScR. Eligible studies were identified by systematic literature search of the MEDLINE, Embase, PubMed, and Scopus databases. All studies published between 1 June 2015 and 1 June 2025 were eligible.&lt;/p&gt;&lt;p&gt; &lt;b&gt;Results:&lt;/b&gt; A total of 490 articles were screened, with 113 studies meeting the inclusion criteria after full-text review. Retrospective feasibility analyses predominated, with only 13 studies (12%) evaluating real-time intraoperative integration. Five task domains were identified (phase recognition, anatomy identification, action-event recognition, instrument tracking, and skill-assessment). Forty-one unique performance metrics were reported, with predominant use of discrimination-style summary measures (e.g., accuracy, recall, F1 score), and comparatively sparse reporting of class imbalance, boundary-aware (e.g., Hausdorff distance) or real-time workflow factors (e.g., latency/stability, interface design, surgeon feedback). External validation was described in 13 (12%) studies. Nine studies (8%) referenced artificial intelligence-specific reporting frameworks.&lt;/p&gt;&lt;p&gt; &lt;b&gt;Conclusion:&lt;/b&gt; Surgical CV-AI is advancing technically, but remains predominantly at an early feasibility stage. Variability in current metric application and limited real-time clinical evaluation limit potential for comparability, applicability and widespread adoption. Standardised metrics, evaluation frameworks, prospective clinical trials, and collaborative end-user engagement are critical to translate conceptual promise to reliable real-time decision-support tools that support surgeon judgement and integrate seamlessly into routine operative workflows.&lt;/p&gt;</description>
    <pubDate>Sat, 28 Feb 2026 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[<p><b>Surgical computer vision for intraoperative decision-support: a scoping review on performance metrics and readiness for real-time deployment</b></p><p>Cancers <a href="https://www.oaepublish.com/articles/ais.2025.76">doi: 10.20517/ais.2025.76</a></p><p>Authors: Jayvee Buchanan,Saxon Connor,John Pearson,Bruce Carey-Smith,Tim Eglinton</p><p><p> <b>Background:</b> Real-time computer vision-based artificial intelligence (CV-AI) systems for surgical video analysis are rapidly advancing. Current evaluation strategies and clinical-readiness reporting, however, remain inconsistent. This scoping review mapped contemporary CV-AI task domains, performance metrics, and evidence of readiness for real-time intraoperative deployment within general surgery.</p><p> <b>Methods:</b> This study followed Joanna Briggs Institute methodology for scoping reviews, and was reported in accordance with PRISMA-ScR. Eligible studies were identified by systematic literature search of the MEDLINE, Embase, PubMed, and Scopus databases. All studies published between 1 June 2015 and 1 June 2025 were eligible.</p><p> <b>Results:</b> A total of 490 articles were screened, with 113 studies meeting the inclusion criteria after full-text review. Retrospective feasibility analyses predominated, with only 13 studies (12%) evaluating real-time intraoperative integration. Five task domains were identified (phase recognition, anatomy identification, action-event recognition, instrument tracking, and skill-assessment). Forty-one unique performance metrics were reported, with predominant use of discrimination-style summary measures (e.g., accuracy, recall, F1 score), and comparatively sparse reporting of class imbalance, boundary-aware (e.g., Hausdorff distance) or real-time workflow factors (e.g., latency/stability, interface design, surgeon feedback). External validation was described in 13 (12%) studies. Nine studies (8%) referenced artificial intelligence-specific reporting frameworks.</p><p> <b>Conclusion:</b> Surgical CV-AI is advancing technically, but remains predominantly at an early feasibility stage. Variability in current metric application and limited real-time clinical evaluation limit potential for comparability, applicability and widespread adoption. Standardised metrics, evaluation frameworks, prospective clinical trials, and collaborative end-user engagement are critical to translate conceptual promise to reliable real-time decision-support tools that support surgeon judgement and integrate seamlessly into routine operative workflows.</p></p>]]></content:encoded>
    <dc:title>Surgical computer vision for intraoperative decision-support: a scoping review on performance metrics and readiness for real-time deployment</dc:title>
    <dc:creator>Jayvee Buchanan</dc:creator>
    <dc:creator>Saxon Connor</dc:creator>
    <dc:creator>John Pearson</dc:creator>
    <dc:creator>Bruce Carey-Smith</dc:creator>
    <dc:creator>Tim Eglinton</dc:creator>
    <dc:identifier>doi: 10.20517/ais.2025.76</dc:identifier>
    <dc:source>Artificial Intelligence Surgery</dc:source>
    <dc:date>2026-02-28</dc:date>
    <prism:publicationName>Artificial Intelligence Surgery</prism:publicationName>
    <prism:publicationDate>2026-02-28</prism:publicationDate>
    <prism:volume>6</prism:volume>
    <prism:number>1</prism:number>
    <prism:section>Review</prism:section>
    <prism:startingPage>150</prism:startingPage>
    <prism:doi>10.20517/ais.2025.76</prism:doi>
    <prism:url>https://www.oaepublish.com/articles/ais.2025.76</prism:url>
    <cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
  </item>
  <item rdf:about="https://www.oaepublish.com/articles/ais.2025.93">
    <title>Beyond generalist LLMs: building and validating domain-specific models with the SpAMCQA benchmark</title>
    <link>https://www.oaepublish.com/articles/ais.2025.93</link>
    <description>&lt;p&gt;&lt;b&gt;Aim:&lt;/b&gt; General-purpose Large Language Models (LLMs) exhibit significant limitations in high-stakes clinical domains such as spondyloarthritis (SpA) diagnosis, yet the absence of specialized evaluation tools precludes the quantification of these failures. This study aims to break this critical evaluation impasse and rigorously test the hypothesis that domain specialization is a necessity for achieving expert-level performance in complex medical diagnostics.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Methods:&lt;/b&gt; We employed a two-pronged experimental approach. First, we introduced the Spondyloarthritis Multiple-Choice Question Answering Benchmark (SpAMCQA), a comprehensive, expert-validated benchmark engineered to probe the nuanced diagnostic reasoning required for SpA. Second, to validate the domain specialization hypothesis, we developed the Spondyloarthritis Diagnosis Large Language Model (SpAD-LLM) by fine-tuning a foundation model on a curated corpus of SpA-specific clinical data. The efficacy of SpAD-LLM was then evaluated against leading generalist models, including Generative Pre-trained Transformer 4 (GPT-4), on the SpAMCQA testbed.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Results: &lt;/b&gt;On the SpAMCQA benchmark, our specialized SpAD-LLM achieved a state-of-the-art accuracy of 92.36%, decisively outperforming the 86.05% accuracy of the leading generalist model, GPT-4. This result provides the first empirical evidence on a purpose-built benchmark that generalist scaling alone is insufficient for mastering the specific inferential knowledge required for SpA diagnosis.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Conclusion:&lt;/b&gt; Our findings demonstrate that in high-stakes domains, domain specialization is not merely an incremental improvement but a categorical necessity. We release the SpAMCQA benchmark and full inference logs to the public, providing the community with a foundational evaluation toolkit, while positioning the SpAD-LLM series as a validated baseline to catalyze the development of truly expert-level medical artificial intelligence.&lt;/p&gt;</description>
    <pubDate>Thu, 12 Feb 2026 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[<p><b>Beyond generalist LLMs: building and validating domain-specific models with the SpAMCQA benchmark</b></p><p>Cancers <a href="https://www.oaepublish.com/articles/ais.2025.93">doi: 10.20517/ais.2025.93</a></p><p>Authors: Xiaojian Ji,Nianzhe Sun,Anan Wang,Jing Dong,Jiawen Hu,Jian Zhu,Feng Huang,Zhengbo Zhang,Kunpeng Li,Da Teng,Tao Li</p><p><p><b>Aim:</b> General-purpose Large Language Models (LLMs) exhibit significant limitations in high-stakes clinical domains such as spondyloarthritis (SpA) diagnosis, yet the absence of specialized evaluation tools precludes the quantification of these failures. This study aims to break this critical evaluation impasse and rigorously test the hypothesis that domain specialization is a necessity for achieving expert-level performance in complex medical diagnostics.</p><p><b>Methods:</b> We employed a two-pronged experimental approach. First, we introduced the Spondyloarthritis Multiple-Choice Question Answering Benchmark (SpAMCQA), a comprehensive, expert-validated benchmark engineered to probe the nuanced diagnostic reasoning required for SpA. Second, to validate the domain specialization hypothesis, we developed the Spondyloarthritis Diagnosis Large Language Model (SpAD-LLM) by fine-tuning a foundation model on a curated corpus of SpA-specific clinical data. The efficacy of SpAD-LLM was then evaluated against leading generalist models, including Generative Pre-trained Transformer 4 (GPT-4), on the SpAMCQA testbed.</p><p><b>Results: </b>On the SpAMCQA benchmark, our specialized SpAD-LLM achieved a state-of-the-art accuracy of 92.36%, decisively outperforming the 86.05% accuracy of the leading generalist model, GPT-4. This result provides the first empirical evidence on a purpose-built benchmark that generalist scaling alone is insufficient for mastering the specific inferential knowledge required for SpA diagnosis.</p><p><b>Conclusion:</b> Our findings demonstrate that in high-stakes domains, domain specialization is not merely an incremental improvement but a categorical necessity. We release the SpAMCQA benchmark and full inference logs to the public, providing the community with a foundational evaluation toolkit, while positioning the SpAD-LLM series as a validated baseline to catalyze the development of truly expert-level medical artificial intelligence.</p></p>]]></content:encoded>
    <dc:title>Beyond generalist LLMs: building and validating domain-specific models with the SpAMCQA benchmark</dc:title>
    <dc:creator>Xiaojian Ji</dc:creator>
    <dc:creator>Nianzhe Sun</dc:creator>
    <dc:creator>Anan Wang</dc:creator>
    <dc:creator>Jing Dong</dc:creator>
    <dc:creator>Jiawen Hu</dc:creator>
    <dc:creator>Jian Zhu</dc:creator>
    <dc:creator>Feng Huang</dc:creator>
    <dc:creator>Zhengbo Zhang</dc:creator>
    <dc:creator>Kunpeng Li</dc:creator>
    <dc:creator>Da Teng</dc:creator>
    <dc:creator>Tao Li</dc:creator>
    <dc:identifier>doi: 10.20517/ais.2025.93</dc:identifier>
    <dc:source>Artificial Intelligence Surgery</dc:source>
    <dc:date>2026-02-12</dc:date>
    <prism:publicationName>Artificial Intelligence Surgery</prism:publicationName>
    <prism:publicationDate>2026-02-12</prism:publicationDate>
    <prism:volume>6</prism:volume>
    <prism:number>1</prism:number>
    <prism:section>Original Article</prism:section>
    <prism:startingPage>80</prism:startingPage>
    <prism:doi>10.20517/ais.2025.93</prism:doi>
    <prism:url>https://www.oaepublish.com/articles/ais.2025.93</prism:url>
    <cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
  </item>
  <item rdf:about="https://www.oaepublish.com/articles/ais.2025.113">
    <title>AIONS Consensus Conference on Definitions of Artificial Intelligence Surgery, Surgomics and Robotics</title>
    <link>https://www.oaepublish.com/articles/ais.2025.113</link>
    <description>&lt;p&gt;This Consensus Statement was jointly developed by the Editorial Board Members of &lt;i&gt;Artificial Intelligence Surgery&lt;/i&gt; and the Artificial Intelligence Organization for the Next Generation of Surgeons (AIONS). The initiative began in February 2025 and proceeded through iterative drafting of definitions, online meetings, expert subgroup revisions, an online validation survey, an in-person Consensus Conference, and a final online meeting to confirm revisions and review the conference manuscript. Votes greater than or equal to 80 percent were considered validating. Definitions were sought for: (1) Surgery, (2) Endoluminal Surgery, (3) Percutaneous Surgery, (4) Robot, (5) Surgical Robot, (6) Robot-Assisted Surgery, (7) Telemanipulator Surgery, (8) Remote Surgery, (9) Collaborative Robotic (Cobotic) Surgery, (10) Robotic Surgery, (11) Artificial Intelligence Surgery, (12) Surgomics, (13) Surgical Multiomics, (14) Non-Invasive Surgery, (15) Digital Surgery, (16) Computer-Assisted Surgery, and (17) Cybersurgery. All candidate definitions achieved at least 80 percent approval in an online vote prior to the Consensus Conference. The in-person meeting occurred on 26 September 2025 at the Orto Botanico, University of Padova, Italy, where 11 definitions were ratified. The definition of Surgery was deemed premature and invalidated. Surgomics and Surgical Multiomics were determined to be distinct entities and were therefore revoted online after the meeting. Collaborative Robotics was clarified as requiring co-local presence of the surgeon and robot. Definitions for Percutaneous Surgery and Robot were amended and validated during a follow-up online vote on 11 November. Ultimately, all 17 definitions were validated. This Consensus provides terminology, rationale, and strategic direction for the surgical field as artificial intelligence, robotics, and data science reshape surgical practice. Future Consensus Conferences are planned to update definitions as the field evolves.&lt;/p&gt;</description>
    <pubDate>Thu, 12 Feb 2026 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[<p><b>AIONS Consensus Conference on Definitions of Artificial Intelligence Surgery, Surgomics and Robotics</b></p><p>Cancers <a href="https://www.oaepublish.com/articles/ais.2025.113">doi: 10.20517/ais.2025.113</a></p><p>Authors: Andrew Gumbs,Michele Diana,Karol Rawicz-Pruszyński,Gaya Spolverato,Isabella Frigerio,Mohammad Abu Hilal,Elisa Bannone,Roland Croner,Francesca Dal Mas,Belinda De Simone,Michael Friebe,Francesco Giovinazzo,S. Vincent Grasso,Takeaki Ishizawa,Konrad Karcz,Zain Khalpey,Luca Milone,Nouredin Messaoudi,M. Mahir Ozmen,Peter G. Passias,Niki Rashidian,Sharona Ross,Thomas Schnelldorfer,Amir Szold,Zbigniew Nawrat,Ibrahim Dagher, ,Mohammad Abu Hilal,Fabio Ausania,Elisa Bannone,Elena Bignami,Elie Chouillard,Maria Conticchio,Roland Croner,Ibrahim Dagher,Francesca Dal Mas,Belinda De Simone,Michele Diana,Marcello Di Martino,Mathieu D’Hondt,Gianfranco Donatelli,Ahmed EL Minawi,Michael Friebe,Isabella Frigerio,Michel Gagner,Vonetta George,Suzanne Gisbertz,Francesco Giovinazzo,Luca Gordini,Mustansar Ghanzafar,S. Vincent Grasso,Andrew Gumbs,Takeaki Ishizawa,Konrad Karcz,Stephen Kavic,Zain Khalpey,Michael Kreisel,Luca Milone,Nouredin Messaoudi,Leila Mureebe,Zbigniew Nawrat,Derek O’Reilly,M. Mahir Ozmen,Peter G. Passias,Silvana Perretta,Niki Rashidian,Gianluca Rompianesi,Sharona Ross,Thomas Schnelldorfer,Vivian Strong,Gaya Spolverato,Amir Szold,Martin Teraa,Gratia Tsai,Jordi Vidal-Jove,Karol Rawicz-Pruszyński,Brandon Valencia Coronel,Teodoros Veronesi,Taiga Wakabayashi,Heather Yeo</p><p><p>This Consensus Statement was jointly developed by the Editorial Board Members of <i>Artificial Intelligence Surgery</i> and the Artificial Intelligence Organization for the Next Generation of Surgeons (AIONS). The initiative began in February 2025 and proceeded through iterative drafting of definitions, online meetings, expert subgroup revisions, an online validation survey, an in-person Consensus Conference, and a final online meeting to confirm revisions and review the conference manuscript. Votes greater than or equal to 80 percent were considered validating. Definitions were sought for: (1) Surgery, (2) Endoluminal Surgery, (3) Percutaneous Surgery, (4) Robot, (5) Surgical Robot, (6) Robot-Assisted Surgery, (7) Telemanipulator Surgery, (8) Remote Surgery, (9) Collaborative Robotic (Cobotic) Surgery, (10) Robotic Surgery, (11) Artificial Intelligence Surgery, (12) Surgomics, (13) Surgical Multiomics, (14) Non-Invasive Surgery, (15) Digital Surgery, (16) Computer-Assisted Surgery, and (17) Cybersurgery. All candidate definitions achieved at least 80 percent approval in an online vote prior to the Consensus Conference. The in-person meeting occurred on 26 September 2025 at the Orto Botanico, University of Padova, Italy, where 11 definitions were ratified. The definition of Surgery was deemed premature and invalidated. Surgomics and Surgical Multiomics were determined to be distinct entities and were therefore revoted online after the meeting. Collaborative Robotics was clarified as requiring co-local presence of the surgeon and robot. Definitions for Percutaneous Surgery and Robot were amended and validated during a follow-up online vote on 11 November. Ultimately, all 17 definitions were validated. This Consensus provides terminology, rationale, and strategic direction for the surgical field as artificial intelligence, robotics, and data science reshape surgical practice. 
Future Consensus Conferences are planned to update definitions as the field evolves.</p></p>]]></content:encoded>
    <dc:title>AIONS Consensus Conference on Definitions of Artificial Intelligence Surgery, Surgomics and Robotics</dc:title>
    <dc:creator>Andrew Gumbs</dc:creator>
    <dc:creator>Michele Diana</dc:creator>
    <dc:creator>Karol Rawicz-Pruszyński</dc:creator>
    <dc:creator>Gaya Spolverato</dc:creator>
    <dc:creator>Isabella Frigerio</dc:creator>
    <dc:creator>Mohammad Abu Hilal</dc:creator>
    <dc:creator>Elisa Bannone</dc:creator>
    <dc:creator>Roland Croner</dc:creator>
    <dc:creator>Francesca Dal Mas</dc:creator>
    <dc:creator>Belinda De Simone</dc:creator>
    <dc:creator>Michael Friebe</dc:creator>
    <dc:creator>Francesco Giovinazzo</dc:creator>
    <dc:creator>S. Vincent Grasso</dc:creator>
    <dc:creator>Takeaki Ishizawa</dc:creator>
    <dc:creator>Konrad Karcz</dc:creator>
    <dc:creator>Zain Khalpey</dc:creator>
    <dc:creator>Luca Milone</dc:creator>
    <dc:creator>Nouredin Messaoudi</dc:creator>
    <dc:creator>M. Mahir Ozmen</dc:creator>
    <dc:creator>Peter G. Passias</dc:creator>
    <dc:creator>Niki Rashidian</dc:creator>
    <dc:creator>Sharona Ross</dc:creator>
    <dc:creator>Thomas Schnelldorfer</dc:creator>
    <dc:creator>Amir Szold</dc:creator>
    <dc:creator>Zbigniew Nawrat</dc:creator>
    <dc:creator>Ibrahim Dagher</dc:creator>
    <dc:creator>Mohammad Abu Hilal</dc:creator>
    <dc:creator>Fabio Ausania</dc:creator>
    <dc:creator>Elisa Bannone</dc:creator>
    <dc:creator>Elena Bignami</dc:creator>
    <dc:creator>Elie Chouillard</dc:creator>
    <dc:creator>Maria Conticchio</dc:creator>
    <dc:creator>Roland Croner</dc:creator>
    <dc:creator>Ibrahim Dagher</dc:creator>
    <dc:creator>Francesca Dal Mas</dc:creator>
    <dc:creator>Belinda De Simone</dc:creator>
    <dc:creator>Michele Diana</dc:creator>
    <dc:creator>Marcello Di Martino</dc:creator>
    <dc:creator>Mathieu D’Hondt</dc:creator>
    <dc:creator>Gianfranco Donatelli</dc:creator>
    <dc:creator>Ahmed EL Minawi</dc:creator>
    <dc:creator>Michael Friebe</dc:creator>
    <dc:creator>Isabella Frigerio</dc:creator>
    <dc:creator>Michel Gagner</dc:creator>
    <dc:creator>Vonetta George</dc:creator>
    <dc:creator>Suzanne Gisbertz</dc:creator>
    <dc:creator>Francesco Giovinazzo</dc:creator>
    <dc:creator>Luca Gordini</dc:creator>
    <dc:creator>Mustansar Ghanzafar</dc:creator>
    <dc:creator>S. Vincent Grasso</dc:creator>
    <dc:creator>Andrew Gumbs</dc:creator>
    <dc:creator>Takeaki Ishizawa</dc:creator>
    <dc:creator>Konrad Karcz</dc:creator>
    <dc:creator>Stephen Kavic</dc:creator>
    <dc:creator>Zain Khalpey</dc:creator>
    <dc:creator>Michael Kreisel</dc:creator>
    <dc:creator>Luca Milone</dc:creator>
    <dc:creator>Nouredin Messaoudi</dc:creator>
    <dc:creator>Leila Mureebe</dc:creator>
    <dc:creator>Zbigniew Nawrat</dc:creator>
    <dc:creator>Derek O’Reilly</dc:creator>
    <dc:creator>M. Mahir Ozmen</dc:creator>
    <dc:creator>Peter G. Passias</dc:creator>
    <dc:creator>Silvana Perretta</dc:creator>
    <dc:creator>Niki Rashidian</dc:creator>
    <dc:creator>Gianluca Rompianesi</dc:creator>
    <dc:creator>Sharona Ross</dc:creator>
    <dc:creator>Thomas Schnelldorfer</dc:creator>
    <dc:creator>Vivian Strong</dc:creator>
    <dc:creator>Gaya Spolverato</dc:creator>
    <dc:creator>Amir Szold</dc:creator>
    <dc:creator>Martin Teraa</dc:creator>
    <dc:creator>Gratia Tsai</dc:creator>
    <dc:creator>Jordi Vidal-Jove</dc:creator>
    <dc:creator>Karol Rawicz-Pruszyński</dc:creator>
    <dc:creator>Brandon Valencia Coronel</dc:creator>
    <dc:creator>Teodoros Veronesi</dc:creator>
    <dc:creator>Taiga Wakabayashi</dc:creator>
    <dc:creator>Heather Yeo</dc:creator>
    <dc:identifier>doi: 10.20517/ais.2025.113</dc:identifier>
    <dc:source>Artificial Intelligence Surgery</dc:source>
    <dc:date>2026-02-12</dc:date>
    <prism:publicationName>Artificial Intelligence Surgery</prism:publicationName>
    <prism:publicationDate>2026-02-12</prism:publicationDate>
    <prism:volume>6</prism:volume>
    <prism:number>1</prism:number>
    <prism:section>Guideline</prism:section>
    <prism:startingPage>98</prism:startingPage>
    <prism:doi>10.20517/ais.2025.113</prism:doi>
    <prism:url>https://www.oaepublish.com/articles/ais.2025.113</prism:url>
    <cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
  </item>
  <item rdf:about="https://www.oaepublish.com/articles/ais.2025.26">
    <title>Current innovations in blood flow assessment: the role of 4D-flow MRI and computational fluid dynamics in hepatobiliopancreatic surgery: a systematic review</title>
    <link>https://www.oaepublish.com/articles/ais.2025.26</link>
    <description>&lt;p&gt; &lt;b&gt;Aim:&lt;/b&gt; Insufficient assessment of post-surgical organ perfusion in hepatobiliopancreatic surgery can lead to serious complications. Consequently, various technological solutions have been developed to achieve non-invasive and accurate blood flow assessment. This article aims to evaluate the current state of four-dimensional flow magnetic resonance imaging (4D-flow MRI) and computational fluid dynamics (CFD) technologies in assessing vascular blood flow within this surgical field.&lt;/p&gt;&lt;p&gt; &lt;b&gt;Methods:&lt;/b&gt; A comprehensive literature search using &lt;a href="https://clinicaltrials.gov/"&gt;ClinicalTrials.gov&lt;/a&gt; and PubMed/MEDLINE was performed; articles published between 2015 and 2025 were included. Broad search terms, including “blood flow measurement”, “4D-flow MRI”, or “computational fluid dynamics” and “abdomen” or “liver”, were utilized.&lt;/p&gt;&lt;p&gt; &lt;b&gt;Results:&lt;/b&gt; Twenty-two studies were analyzed in detail. Nineteen focused on vascular conditions surrounding the liver, with 15 assessing venous flow and five evaluating the hepatic artery. Additional hemodynamic features analyzed included blood velocity, pressure, and particle distribution. The clinical applications investigated were: portal vein embolization (1), venous anastomosis (3), liver resection (2), portal hypertension (2), transarterial radioembolization (2), transjugular intrahepatic portosystemic shunt (4), and liver fibrosis (1). Notably, only CFD facilitated the simulation of prospective hemodynamic conditions (2).&lt;/p&gt;&lt;p&gt; &lt;b&gt;Conclusion:&lt;/b&gt; Both 4D-flow MRI and CFD technologies facilitate the accurate study of blood flow dynamics within the supramesocolic compartment. Furthermore, CFD enables the simulation of prospective vascular conditions, establishing its potential as a preoperative planning tool. However, further research is required to fully validate the clinical utility of CFD in this surgical context.&lt;/p&gt;</description>
    <pubDate>Tue, 10 Feb 2026 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[<p><b>Current innovations in blood flow assessment: the role of 4D-flow MRI and computational fluid dynamics in hepatobiliopancreatic surgery: a systematic review</b></p><p>Cancers <a href="https://www.oaepublish.com/articles/ais.2025.26">doi: 10.20517/ais.2025.26</a></p><p>Authors: Carolina González-Abós,Roberto Molina,Sofía Almirante,Mariano Vázquez,Fabio Ausania</p><p><p> <b>Aim:</b> Insufficient assessment of post-surgical organ perfusion in hepatobiliopancreatic surgery can lead to serious complications. Consequently, various technological solutions have been developed to achieve non-invasive and accurate blood flow assessment. This article aims to evaluate the current state of four-dimensional flow magnetic resonance imaging (4D-flow MRI) and computational fluid dynamics (CFD) technologies in assessing vascular blood flow within this surgical field.</p><p> <b>Methods:</b> A comprehensive literature search using <a href="https://clinicaltrials.gov/">ClinicalTrials.gov</a> and PubMed/MEDLINE was performed; articles published between 2015 and 2025 were included. Broad search terms, including “blood flow measurement”, “4D-flow MRI”, or “computational fluid dynamics” and “abdomen” or “liver”, were utilized.</p><p> <b>Results:</b> Twenty-two studies were analyzed in detail. Nineteen focused on vascular conditions surrounding the liver, with 15 assessing venous flow and five evaluating the hepatic artery. Additional hemodynamic features analyzed included blood velocity, pressure, and particle distribution. The clinical applications investigated were: portal vein embolization (1), venous anastomosis (3), liver resection (2), portal hypertension (2), transarterial radioembolization (2), transjugular intrahepatic portosystemic shunt (4), and liver fibrosis (1). Notably, only CFD facilitated the simulation of prospective hemodynamic conditions (2).</p><p> <b>Conclusion:</b> Both 4D-flow MRI and CFD technologies facilitate the accurate study of blood flow dynamics within the supramesocolic compartment. Furthermore, CFD enables the simulation of prospective vascular conditions, establishing its potential as a preoperative planning tool. However, further research is required to fully validate the clinical utility of CFD in this surgical context.</p></p>]]></content:encoded>
    <dc:title>Current innovations in blood flow assessment: the role of 4D-flow MRI and computational fluid dynamics in hepatobiliopancreatic surgery: a systematic review</dc:title>
    <dc:creator>Carolina González-Abós</dc:creator>
    <dc:creator>Roberto Molina</dc:creator>
    <dc:creator>Sofía Almirante</dc:creator>
    <dc:creator>Mariano Vázquez</dc:creator>
    <dc:creator>Fabio Ausania</dc:creator>
    <dc:identifier>doi: 10.20517/ais.2025.26</dc:identifier>
    <dc:source>Artificial Intelligence Surgery</dc:source>
    <dc:date>2026-02-10</dc:date>
    <prism:publicationName>Artificial Intelligence Surgery</prism:publicationName>
    <prism:publicationDate>2026-02-10</prism:publicationDate>
    <prism:volume>6</prism:volume>
    <prism:number>1</prism:number>
    <prism:section>Systematic Review</prism:section>
    <prism:startingPage>61</prism:startingPage>
    <prism:doi>10.20517/ais.2025.26</prism:doi>
    <prism:url>https://www.oaepublish.com/articles/ais.2025.26</prism:url>
    <cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
  </item>
  <item rdf:about="https://www.oaepublish.com/articles/ais.2025.67">
    <title>EnrichGT: a comprehensive R-based tool for functional genomics enrichment analysis based on large language models</title>
    <link>https://www.oaepublish.com/articles/ais.2025.67</link>
    <description>&lt;p&gt; &lt;b&gt;Aim:&lt;/b&gt; We aimed to develop EnrichGT, an open-source and clinician-friendly R package for functional genomics enrichment analysis leveraging large language models (LLMs). The tool addresses major limitations of existing approaches, including semantic redundancy, limited interpretability, and static reporting frameworks, thereby facilitating clinical interpretation and supporting data-driven decision-making.&lt;/p&gt;&lt;p&gt; &lt;b&gt;Methods:&lt;/b&gt; EnrichGT implemented both over-representation analysis and preranked gene set enrichment analysis using multiple knowledge bases. To minimize redundancy, enriched pathways were clustered based on shared genes, emphasizing coherent biological themes. Biological interpretability is further improved by inferring transcription factor activity through CollecTRI (Collection of Transcription Regulation Interactions, &lt;a href="https://github.com/saezlab/CollecTRI"&gt;https://github.com/saezlab/CollecTRI&lt;/a&gt;) and pathway activity via PROGENy (Pathway RespOnsive GENes for activity inference, &lt;a href="https://saezlab.github.io/progeny/"&gt;https://saezlab.github.io/progeny/&lt;/a&gt;). Additionally, context-aware annotations were generated through LLM integration, and results were compiled into dynamic, interactive reports using Quarto.&lt;/p&gt;&lt;p&gt; &lt;b&gt;Results:&lt;/b&gt; EnrichGT streamlines functional genomics enrichment analysis by clustering pathways based on gene co-occurrence, significantly reducing redundancy and enhancing interpretability. When applied to lung adenocarcinoma data from The Cancer Genome Atlas (TCGA), 873 enriched Gene Ontology terms were consolidated into 15 biologically coherent modules, revealing key processes such as myeloid cell activation and tumor-associated angiogenesis. Downstream analysis identified major tumor-associated regulators [CREB1 (cAMP responsive element binding protein 1), RELA/NF-κB p65 (RELA = RELA proto-oncogene, NF-κB = nuclear factor kappa-light-chain-enhancer of activated B cells signaling), HIF1A (hypoxia inducible factor 1 subunit alpha), PPARG (peroxisome proliferator activated receptor gamma), ETS1 (ETS proto-oncogene 1)] and critical signaling axes [TNFα (tumor necrosis factor alpha signaling), NF-κB, hypoxia (oxygen deprivation-related signaling)]. Automated LLM-based annotations and multi-database integration provided complementary pathway insights. Furthermore, EnrichGT’s comparative multi-condition framework revealed conserved and condition-specific biological patterns across datasets, including single-cell ear-canal development and TCGA tumor-stage progression. Its dynamic reporting interface ensured transparent, reproducible, and iterative exploration of enrichment results.&lt;/p&gt;&lt;p&gt; &lt;b&gt;Conclusion:&lt;/b&gt; EnrichGT offered a robust, clinician-friendly solution for functional genomics enrichment analysis, enhancing clinical interpretation and data-driven decision-making.&lt;/p&gt;</description>
    <pubDate>Tue, 06 Jan 2026 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[<p><b>EnrichGT: a comprehensive R-based tool for functional genomics enrichment analysis based on large language models</b></p><p>Cancers <a href="https://www.oaepublish.com/articles/ais.2025.67">doi: 10.20517/ais.2025.67</a></p><p>Authors: Runchen Wang,Zhiming Ye,Qixia Wang,Bo Liang,Nanfei Fu,Wenxi Wang,Huimin Deng,Taimin Zhu,Shangxi Zeng,Yudong Zhang,Shunjun Jiang,Ying Huang,Wenhua Liang,Hengrui Liang,Jianxing He,Xusen Zou</p><p><p> <b>Aim:</b> We aimed to develop EnrichGT, an open-source and clinician-friendly R package for functional genomics enrichment analysis leveraging large language models (LLMs). The tool addresses major limitations of existing approaches, including semantic redundancy, limited interpretability, and static reporting frameworks, thereby facilitating clinical interpretation and supporting data-driven decision-making.</p><p> <b>Methods:</b> EnrichGT implemented both over-representation analysis and preranked gene set enrichment analysis using multiple knowledge bases. To minimize redundancy, enriched pathways were clustered based on shared genes, emphasizing coherent biological themes. Biological interpretability is further improved by inferring transcription factor activity through CollecTRI (Collection of Transcription Regulation Interactions, <a href="https://github.com/saezlab/CollecTRI">https://github.com/saezlab/CollecTRI</a>) and pathway activity via PROGENy (Pathway RespOnsive GENes for activity inference, <a href="https://saezlab.github.io/progeny/">https://saezlab.github.io/progeny/</a>). Additionally, context-aware annotations were generated through LLM integration, and results were compiled into dynamic, interactive reports using Quarto.</p><p> <b>Results:</b> EnrichGT streamlines functional genomics enrichment analysis by clustering pathways based on gene co-occurrence, significantly reducing redundancy and enhancing interpretability. When applied to lung adenocarcinoma data from The Cancer Genome Atlas (TCGA), 873 enriched Gene Ontology terms were consolidated into 15 biologically coherent modules, revealing key processes such as myeloid cell activation and tumor-associated angiogenesis. Downstream analysis identified major tumor-associated regulators [CREB1 (cAMP responsive element binding protein 1), RELA/NF-κB p65 (RELA = RELA proto-oncogene, NF-κB = nuclear factor kappa-light-chain-enhancer of activated B cells signaling), HIF1A (hypoxia inducible factor 1 subunit alpha), PPARG (peroxisome proliferator activated receptor gamma), ETS1 (ETS proto-oncogene 1)] and critical signaling axes [TNFα (tumor necrosis factor alpha signaling), NF-κB, hypoxia (oxygen deprivation-related signaling)]. Automated LLM-based annotations and multi-database integration provided complementary pathway insights. Furthermore, EnrichGT’s comparative multi-condition framework revealed conserved and condition-specific biological patterns across datasets, including single-cell ear-canal development and TCGA tumor-stage progression. Its dynamic reporting interface ensured transparent, reproducible, and iterative exploration of enrichment results.</p><p> <b>Conclusion:</b> EnrichGT offered a robust, clinician-friendly solution for functional genomics enrichment analysis, enhancing clinical interpretation and data-driven decision-making.</p></p>]]></content:encoded>
    <dc:title>EnrichGT: a comprehensive R-based tool for functional genomics enrichment analysis based on large language models</dc:title>
    <dc:creator>Runchen Wang</dc:creator>
    <dc:creator>Zhiming Ye</dc:creator>
    <dc:creator>Qixia Wang</dc:creator>
    <dc:creator>Bo Liang</dc:creator>
    <dc:creator>Nanfei Fu</dc:creator>
    <dc:creator>Wenxi Wang</dc:creator>
    <dc:creator>Huimin Deng</dc:creator>
    <dc:creator>Taimin Zhu</dc:creator>
    <dc:creator>Shangxi Zeng</dc:creator>
    <dc:creator>Yudong Zhang</dc:creator>
    <dc:creator>Shunjun Jiang</dc:creator>
    <dc:creator>Ying Huang</dc:creator>
    <dc:creator>Wenhua Liang</dc:creator>
    <dc:creator>Hengrui Liang</dc:creator>
    <dc:creator>Jianxing He</dc:creator>
    <dc:creator>Xusen Zou</dc:creator>
    <dc:identifier>doi: 10.20517/ais.2025.67</dc:identifier>
    <dc:source>Artificial Intelligence Surgery</dc:source>
    <dc:date>2026-01-06</dc:date>
    <prism:publicationName>Artificial Intelligence Surgery</prism:publicationName>
    <prism:publicationDate>2026-01-06</prism:publicationDate>
    <prism:volume>6</prism:volume>
    <prism:number>1</prism:number>
    <prism:section>Original Article</prism:section>
    <prism:startingPage>18</prism:startingPage>
    <prism:doi>10.20517/ais.2025.67</prism:doi>
    <prism:url>https://www.oaepublish.com/articles/ais.2025.67</prism:url>
    <cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
  </item>
  <item rdf:about="https://www.oaepublish.com/articles/ais.2025.68">
    <title>Artificial intelligence and EEG during anesthesia: ideal match or fleeting bond?</title>
    <link>https://www.oaepublish.com/articles/ais.2025.68</link>
    <description>&lt;p&gt;Artificial intelligence (AI) has shown considerable potential in perioperative monitoring, particularly in its application to electroencephalogram (EEG) analysis for assessing the depth of anesthesia. AI methods may enable the dynamic recognition of complex time-frequency EEG patterns and the adaptation of monitoring strategies to patient-specific brain responses. Convolutional neural networks, artificial neural networks, and hybrid deep learning models have reported encouraging results in detecting anesthetic states, estimating bispectral index values, and identifying relevant EEG features - such as alpha-delta shifts or burst suppression - without relying on manual feature engineering. Parallel efforts using virtual and augmented reality platforms suggest possible benefits for anesthesiologist training in EEG interpretation and pharmacologic titration. Despite these advances, important limitations constrain clinical translation. A major challenge is the absence of standardized EEG pattern definitions across anesthetic agents and patient groups, limiting model generalizability. Restricted interoperability between EEG monitors and electronic health records, coupled with proprietary data formats, reduces access to raw EEG signals and hampers large-scale development. Privacy and governance requirements add further barriers to data integration. Methodologically, many studies are affected by insufficient internal validation, suboptimal reporting, and testing in experimental rather than real-world conditions, reducing their translational value. While AI could eventually improve anesthetic precision and safety through EEG-guided approaches, realizing this potential will require transparent algorithms, multicenter and heterogeneous datasets, and robust interoperability and data-sharing standards. Only through such coordinated efforts can these tools evolve from promising research applications into reliable components of routine anesthetic care.&lt;/p&gt;</description>
    <pubDate>Mon, 05 Jan 2026 00:00:00 GMT</pubDate>
    <content:encoded><![CDATA[<p><b>Artificial intelligence and EEG during anesthesia: ideal match or fleeting bond?</b></p><p>Cancers <a href="https://www.oaepublish.com/articles/ais.2025.68">doi: 10.20517/ais.2025.68</a></p><p>Authors: Michele Introna,John George Karippacheril,Sara Pilla,Davide Trimarchi,Marco Gemma,Donato Martino,Carla Carozzi</p><p><p>Artificial intelligence (AI) has shown considerable potential in perioperative monitoring, particularly in its application to electroencephalogram (EEG) analysis for assessing the depth of anesthesia. AI methods may enable the dynamic recognition of complex time-frequency EEG patterns and the adaptation of monitoring strategies to patient-specific brain responses. Convolutional neural networks, artificial neural networks, and hybrid deep learning models have reported encouraging results in detecting anesthetic states, estimating bispectral index values, and identifying relevant EEG features - such as alpha-delta shifts or burst suppression - without relying on manual feature engineering. Parallel efforts using virtual and augmented reality platforms suggest possible benefits for anesthesiologist training in EEG interpretation and pharmacologic titration. Despite these advances, important limitations constrain clinical translation. A major challenge is the absence of standardized EEG pattern definitions across anesthetic agents and patient groups, limiting model generalizability. Restricted interoperability between EEG monitors and electronic health records, coupled with proprietary data formats, reduces access to raw EEG signals and hampers large-scale development. Privacy and governance requirements add further barriers to data integration. Methodologically, many studies are affected by insufficient internal validation, suboptimal reporting, and testing in experimental rather than real-world conditions, reducing their translational value. While AI could eventually improve anesthetic precision and safety through EEG-guided approaches, realizing this potential will require transparent algorithms, multicenter and heterogeneous datasets, and robust interoperability and data-sharing standards. Only through such coordinated efforts can these tools evolve from promising research applications into reliable components of routine anesthetic care.</p></p>]]></content:encoded>
    <dc:title>Artificial intelligence and EEG during anesthesia: ideal match or fleeting bond?</dc:title>
    <dc:creator>Michele Introna</dc:creator>
    <dc:creator>John George Karippacheril</dc:creator>
    <dc:creator>Sara Pilla</dc:creator>
    <dc:creator>Davide Trimarchi</dc:creator>
    <dc:creator>Marco Gemma</dc:creator>
    <dc:creator>Donato Martino</dc:creator>
    <dc:creator>Carla Carozzi</dc:creator>
    <dc:identifier>doi: 10.20517/ais.2025.68</dc:identifier>
    <dc:source>Artificial Intelligence Surgery</dc:source>
    <dc:date>2026-01-05</dc:date>
    <prism:publicationName>Artificial Intelligence Surgery</prism:publicationName>
    <prism:publicationDate>2026-01-05</prism:publicationDate>
    <prism:volume>6</prism:volume>
    <prism:number>1</prism:number>
    <prism:section>Review</prism:section>
    <prism:startingPage>1</prism:startingPage>
    <prism:doi>10.20517/ais.2025.68</prism:doi>
    <prism:url>https://www.oaepublish.com/articles/ais.2025.68</prism:url>
    <cc:license rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
  </item>
  <cc:License rdf:about="https://creativecommons.org/licenses/by/4.0/">
    <cc:permits rdf:resource="https://creativecommons.org/ns#Reproduction"/>
    <cc:permits rdf:resource="https://creativecommons.org/ns#Distribution"/>
    <cc:permits rdf:resource="https://creativecommons.org/ns#DerivativeWorks"/>
  </cc:License>
</rdf:RDF>
