Geraghty (2016) outlines a range of controversies surrounding the publication of results from the PACE trial and discusses a freedom of information case brought by a patient who was refused access to data from the trial. The PACE authors offer a response, writing ‘Dr Geraghty’s views are based on misunderstandings and misrepresentations of the PACE trial’. This article draws on expert commentaries to further detail the critical methodological failures and biases identified in the PACE trial, which undermine the reliability and credibility of the major findings to emerge from this trial.

Trial management

Edwards (2017) notes that PACE is an unblinded trial (for participants and perhaps researchers), that each treatment lacked a comparable placebo/control and that there were clear biases in how treatments were administered: for example, occupational therapists (OTs) provided an adapted pacing therapy (APT) that is not a formal treatment used by OTs, but a model of pacing crafted by the PACE authors, while cognitive behavioural therapy (CBT) therapists provided a familiar therapy, CBT. Goudsmit et al. (2017) affirm that the version of ‘pacing’ administered in PACE does not reflect the type of pacing patients with myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) undertake. This is a critical point. Participants in CBT, graded exercise therapy (GET), standard medical care (SMC) and APT were given starkly different treatments, not the same treatment compared with a blind ‘sugar pill’ placebo. Offering different treatments, and using different types of therapists, induces so much ‘variability’ that it breaks a fundamental tenet of a randomised controlled trial (RCT): to standardise procedures and observe variance in outcomes between cases and controls. Contamination occurred in this trial because therapists from all arms (APT, CBT, GET) were allowed to communicate with each other about how patients were doing in each group; in fact, the trial manual encouraged it (PACE Trial APT Manual, Queen Mary University of London (QMUL), 2016). Moreover, the lead authors issued material to participants and therapists mid-trial hinting that the CBT and GET groups were doing better. White et al. (2017) suggest Geraghty (2016) did not specify the trial procedures that were neglected or bypassed in PACE.
For clarity, these are: (1) altering outcome measures mid-trial with poor justification; (2) sending newsletters to participants mid-trial reporting the positive progress of CBT and GET participants (contaminating the trial); (3) not altering the inclusion criteria for entry into the trial after the main outcome measures were lowered, meaning 13 per cent of participants met some of the criteria needed to be deemed recovered at the trial entry point; and (4) not informing participants of certain conflicts of interest held by the lead authors (detailed below). The PACE authors point out that their trial had oversight from an ethics committee, an independent trial steering committee and a data monitoring ethics committee, and that all publications from the trial were peer reviewed. This raises the question of how such procedural anomalies were accepted by these oversight bodies. The trial team have not supplied details of communications with each oversight body, so some uncertainties remain about how and when changes were made and approved.

Data access case and patient community response

White et al. (2017) write, ‘We reject the accusation that our “actions have arguably caused distress to patients,” for which Dr Geraghty offers no evidence’. However, patients have expressed anger concerning the actions of the PACE trial team in relation to the trial and to a freedom of information case brought by one patient (Mr Matthees) who was denied access to data from PACE. QMUL, host institution of lead investigator Peter White, assembled a legal team at a cost of over £220,000 to challenge Mr Matthees’ right to access data from the trial. This action caused consternation among the patient community. The European ME Alliance called for release of data from PACE (ME Action, 2016), over 12,000 people signed a petition (ME Action, 2017) and a letter with over 120 signatures from scientists and patient organisations called on a journal to retract a PACE recovery paper (Sharpe et al., 2015), while a similar letter called on the Lancet to independently verify the PACE trial’s evidence (Tuller, 2017). The PACE authors assert that patients want CBT and GET, citing a patient survey (Action for ME, 2011) in which 65 per cent of respondents wanted CBT and 45 per cent wanted GET available in the National Health Service (NHS). What the PACE authors do not quote from the same survey is that 93 per cent of respondents said they wanted fatigue or condition management, 94 per cent wanted medication for sleep and pain and 90 per cent wanted pacing treatments. Pacing, the approach the PACE authors suggest is an inferior treatment, actually has a much higher approval rating than CBT or GET. Kirke (2017) highlights a mass of patient survey evidence the PACE authors fail to reference. In a large survey conducted by the ME Association (MEA, 2015), 84 per cent of respondents rated pacing as appropriate to their needs, compared with 44 per cent for CBT and just 22 per cent for GET.
In the same survey, 91 per cent of participants who received CBT felt their ME/CFS symptoms were unaffected or made worse, and 74 per cent of patients reported that GET made their symptoms worse. Kindlon (2017) notes that outside the confines of highly controlled clinical trials, patients continually report significant negative outcomes after undertaking GET. Laws (2017) points out that evidence from clinical trials is given more credence than patient surveys, even when patients report negative outcomes, with harms inadequately studied both in clinical trials and in clinical practice.

Conflicts of interest

Geraghty (2016) argues that in large clinical trials such as the PACE trial, in which millions of pounds of taxpayers’ money are spent on testing the efficacy of treatments that could shape health policy and clinical practice, funders should look to involve the most independent-minded assessors possible. The PACE authors write ‘We reject the suggestion that the fact that we use these therapies for our patients and have tested them in previous trials is “a major source of investigator bias”’ (White et al., 2017). Tuller (2017) and Edwards (2017) suggest that the PACE trial team held a wide range of conflicts of interest that were not fully disclosed to trial participants. For instance, trial lead Peter White was an advisor to the Department for Work and Pensions (DWP) at the time the PACE authors applied to the DWP for trial funding (Faulkner, 2016). Both White and Sharpe have done paid consultancy work for reinsurance companies with an interest in ME/CFS claims exposure. Sharpe offers expert opinion in one insurance document describing the need to promote CBT in health care (UnUm, 2002). In addition, trial authors White and Chalder were registered as directors of a private company called ‘One Health’ (Companies House reg. 04364122) during the PACE trial; this company reportedly promoted the use of a biopsychosocial model with associated CBT and GET treatments. PACE leads White and Chalder have also published popular books promoting the benefits of CBT and GET. A null result in the PACE trial (that CBT or GET might not be effective treatments for CFS) would refute many of the claims the lead authors made so strongly in their academic and private sector work. We may only speculate how such clear treatment allegiances and investigator biases impacted the PACE trial (Lubet, 2017).

Recovery measurement

PACE fails to demonstrate sizable improvements across objective tests of physical functioning (Kindlon, 2017; Shepherd, 2017; Tuller, 2017; Vink, 2017; Wilshire, 2017). What, then, is ‘recovery’ if patients remain substantially functionally impaired? The PACE authors use an ‘operational definition of recovery’ involving complex four-part criteria: participants have to (1) score 60 or above on an SF-36 function subscale, (2) score 18 or below on a fatigue scale, (3) report improvement in overall symptoms and (4) no longer meet the Oxford criteria for CFS (clinically assessed by PACE team members). Wilshire (2017) and Vink (2017) detail how such ‘composite measures’ may appear comprehensive but largely rest on subjective accounts: ticking boxes on a survey instrument or Likert scale with limited choices (feeling better vs feeling very much better). In PACE, the modest improvements observed in the CBT and GET groups (contested by reanalysis) are not mirrored by substantive changes in objective measures of walking ability on a 6-minute walking test or step test (McPhee, 2017). Adding CBT to SMC did not substantially improve function from baseline (McPhee, 2017). In addition, the PACE authors dropped plans to assess patients’ physical activity using electronic monitors (actometers) on the grounds that they were too burdensome. Other measurements of physical function were not considered, such as how many hours per day a participant spends upright, in bed or lying down (pre- and post-treatment). There was also almost no change in secondary measures (employment or health care use) in the CBT or GET groups (McCrone et al., 2012). Such data suggest recovery in PACE is more a design artefact than a clinical reality. Stouten (2017) details how a reliance on subjective measures results in a confirmation bias in PACE: ‘the more objective the measure, the worse results are for CBT and GET’.
Confirmation biases spill over into reporting biases. An editorial by Bleijenberg and Knoop (2011) that accompanied the PACE trial’s Lancet publication stated, ‘… the recovery rate of cognitive behaviour therapy and graded exercise therapy was about 30%’. In fact, the PACE authors reported a 22 per cent recovery rate 2 years later (White et al., 2013). Media outlets picked up the PACE trial following press briefings by the PACE authors, with headlines that ‘CFS sufferers can overcome symptoms of ME with positive thinking and exercise’ (Knapton, 2015). It is arguable that the PACE authors’ use of the term ‘recovery’ contributed to a perception that CBT and GET are curative treatments (Goudsmit, 2017), yet the majority of participants within the 22 per cent recovery rate reported by PACE did not reach an SF-36 physical function threshold above 85 (the level of a healthy individual). Recovery in PACE rested on subjective self-report, in a study that sought to get patients to think ‘more positively’, with little improvement in objective measures or secondary outcomes.

Lessons versus moving on

Petrie and Weinmann (2017) claim that the PACE authors have suffered unnecessary harassment and that a continual focus on the PACE trial is unfair: ‘it is time to move beyond PACE’. However, an Information Tribunal found little evidence of harassment, and PACE lead author Trudie Chalder confirmed this. Petrie and Weinmann (2017) should be aware that science progresses through the recognition of errors and failures. There are many valuable lessons to be learned from a review of the PACE trial. The PACE authors’ refusal to share data with requesters exemplifies a clear need for data-sharing rules. Only after a Tribunal ordered the data to be released were other researchers able to assess the PACE authors’ improvement and recovery claims (e.g. Wilshire et al., 2017). It is important that clinicians and health authorities are made aware of the biased methods, outcome switching, conflicts of interest, and the fall in recovery and improvement rates following reanalysis, particularly if the PACE trial is to form part of the evidence base that guides ME/CFS treatment recommendations (NICE or NHS).

Criticism versus validation

The PACE authors claim that they have adequately responded to criticisms of the PACE trial. It is important to remember that the last stop on a scientific paper’s journey to ‘acceptance’ is the public and the wider scientific community; here the PACE authors have failed to be convincing. The majority of invited expert commentaries in the Journal of Health Psychology echo the concerns raised in Geraghty (2016). The PACE authors suggest that the National Institute for Health and Care Excellence (NICE), NHS Choices and the Lancet all agree that PACE offers the ‘best evidence’ that CBT and GET are safe and effective treatments for CFS (White et al., 2017). This claim demonstrates the importance of this trial, and consequently the importance of reanalyses of trial outcomes and recent critiques of the trial. Most health authorities do not have time to scrutinise the methods and conduct of every clinical trial; this is a role the scientific community (and increasingly stakeholders) take on. The PACE authors suggest their findings are ‘good news for patients’. However, recent reanalyses of the findings, the special commentaries and this article arguably offer patients more accurate information about the limited benefits of CBT and GET as treatments for ME/CFS.

Data transparency

There is a contemporary movement for transparency in science, particularly in clinical trials. The PACE authors state that they support this principle, yet they withheld data from interested parties for many years. They write, ‘This is an ethical position, respecting patients’ rights …’ (White et al., 2017). However, did PACE trial participants really ask for scientific data not to be shared, or did they simply ask that no personally identifiable information (PII) be disclosed? The latter seems more plausible. The Information Tribunal ruled that the sharing of data from PACE is in the public interest (HMTS, 2016). However, data from PACE continue to be withheld from requesters. To build trust in science and to enhance the power of data, it is important that data from clinical trials are made more openly available. PACE was a publicly funded trial (almost £5 million); it would be unreasonable to require other researchers to replicate such a trial. Funding bodies must make data sharing a requirement of any research grant in future.

Conclusion

Patients and clinicians deserve reliable information regarding ‘best evidence’. PACE is a controversial trial that does not stand up well to close scrutiny. The majority of participants in the trial did not recover and the majority were not substantially functionally improved. Participants in PACE were drawn from milder cases, with more severe cases excluded. Reanalysis of part of the trial data suggests the benefits of CBT and GET were overstated, the result of changes to the trial protocol. Most of the modest benefit reported in PACE rests on subjective accounts of improvement. Findings from the trial have been terminally damaged by the way in which the trial was conducted, with a lack of care for treatment fidelity and with contamination between arms; so much so that doctors, commissioners and patients can have little faith in the outcomes reported. It may be that the best thing to emerge from the PACE trial will be an impetus to improve the way in which trials are funded, conducted, overseen and reported, with data being made available for reanalysis in future. Evidence from PACE suggests that CBT and GET are not curative treatments for CFS; recovery rates are low using these treatments. The PACE trial is a seminal contemporary example of how ‘evidence’ is a fluid construct, not an absolute. Trial outcomes are shaped by trial design choices; thus, it is imperative that ‘evidence’ is interpreted with appropriate caution and that data from trials are accessible. There is a clear need for more research in ME/CFS, particularly better understanding of illness aetiology, pathogenesis and pathophysiology.

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship and/or publication of this article.