
MacLennan et al. Mini-invasive Surg 2023;7:23. https://dx.doi.org/10.20517/2574-1225.2023.66

In research, an outcome is broadly defined as a measurement or observation used to capture and assess the effect of a treatment or intervention, such as the assessment of side effects (risks) or effectiveness (benefits)[1]. When designing studies of treatment risks and benefits, such as randomised trials, observational studies, or a platform for big data analysis, research teams need to know, and be able to succinctly communicate, what is known so far about these outcomes. Likewise, when clinical practice guideline panels provide treatment/care management recommendations, they ought to do so based on a balanced consideration of the risks and benefits of treatments/interventions, alongside patient preferences[2].

In both of these circumstances, designing new research and making treatment recommendations, knowing the totality of the existing evidence base requires effort from various experts and stakeholders, such as clinicians, researchers, and patient advocates. Systematic reviews and meta-analyses are key research methods to address this. Typically, systematic reviews utilise deductive reasoning and set out to answer an a priori research question with strict inclusion and exclusion criteria; then, data are extracted from the included studies on baseline characteristics and outcomes, aiming to minimise bias and random error[3,4]. If there is sufficient similarity in the populations, measurements, and definition of the outcome across studies, then the outcome may be amenable to meta-analysis, a statistical technique whereby estimates from more than one study are combined, often giving more power and precision than the estimate from any one study alone[4].
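To illustrate the pooling idea in concrete terms, the sketch below shows a fixed-effect, inverse-variance meta-analysis, one of the simplest pooling methods. This is an illustration only, not a method from the article: the three study effect sizes and standard errors are made-up example values, and the function name is hypothetical.

```python
import math

def fixed_effect_meta(effects, std_errors):
    """Pool per-study effect estimates, weighting each by 1/SE^2
    (more precise studies contribute more to the pooled estimate)."""
    weights = [1.0 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))  # SE of the pooled estimate
    return pooled, pooled_se

# Three hypothetical studies reporting the same outcome on the same scale
effects = [0.30, 0.10, 0.25]      # e.g., log hazard ratios (invented values)
std_errors = [0.15, 0.20, 0.10]

pooled, pooled_se = fixed_effect_meta(effects, std_errors)
print(f"pooled effect = {pooled:.3f}, SE = {pooled_se:.3f}")
```

Note that the pooled standard error is smaller than that of any single study, which is the gain in precision the text refers to. Pooling is only defensible when the studies measure and define the outcome comparably, which is exactly what outcome reporting heterogeneity undermines.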

However, outcome reporting heterogeneity is a frequent problem when systematically reviewing and meta-analysing an evidence base. Outcome reporting heterogeneity refers to the interrelated problems of inconsistency (different outcomes reported in different studies) and variability (the same outcomes reported across studies but defined and/or measured differently)[1,5]. This heterogeneity may be further exacerbated by selective outcome reporting, whereby the choice of outcomes to report is based on their statistical significance or some other post-hoc decision[6].
Outcome reporting heterogeneity exists within the renal cancer treatment effectiveness evidence, as exemplified in comprehensive systematic reviews of outcomes comparing various treatments for localised renal cancer and reporting on oncological outcomes[7], perioperative outcomes[8], and quality of life[8,9]. In the limitation sections of each of these reviews, the authors noted that they found it difficult to compare results across studies due to outcome reporting inconsistency and/or variability, and meta-analyses were either not possible or limited in scope. Instead, they mostly used narrative synthesis[10] to describe the data. This resulted in unwieldy data tables and inefficient textual summaries, which were burdensome to prepare and difficult to communicate. This, in turn, hampers the guideline-making process when expert panels try to make sense of the evidence and offer actionable recommendations.

A recent systematic review focusing on outcome reporting heterogeneity in renal cancer describes this phenomenon for overall survival, adverse events, and quality of life[11]. The use of different measurement start and end times for calculating both overall survival and cancer-specific survival makes it difficult to combine the data and provide a critical and concise summary of them. Adverse event reporting used three different approaches: the standardised Clavien-Dindo system[12] (focusing on the consequence of the event, e.g., requiring further medical treatment), simple lists of events, and "trifecta" or "pentafecta" outcomes (each representing a composite outcome that is itself prone to heterogeneity in how its criteria are met). This variability means it is not possible to directly compare adverse events across studies, nor is it possible to meta-analyse the data. Even the available reverse coding lists for Clavien-Dindo do not solve this problem, since they represent an unreasonable additional workload and the heterogeneity of the source data still remains. Quality of life represents a particularly critical field: only three of 143 studies reported quality of