Posted by Sam Fenny - Memes and headline comments by David Icke Posted on 1 June 2024

An Urgent Call for Transparency in Clinical Trials

Well-conducted clinical trials are essential for advancing medical care. Full disclosure of trial results will improve public confidence in new treatments, increase participation in clinical trials and benefit all of us – but only if the disclosures give a true picture of the results and what they actually mean.

In 2022, the Medicines & Healthcare products Regulatory Agency (MHRA) put out a consultation on the legislation around clinical trials. The original clinical trials regulations were introduced in 2004 primarily to improve patient safety and to ensure that data obtained from trials were both reliable and of good quality. However, they were rather burdensome, didn’t really differentiate between small-scale academic trials and large-scale Big Pharma trials and simply made things more complicated than they needed to be; a change was long overdue.

At the time of their introduction, I found it frustrating that regulations imposed by the EU were foisted upon the U.K., and we had no power to write our own legislation that may have been more appropriate to our situation, perhaps allowing some flexibility for the small, ground-breaking clinical studies, led by academics, that I was involved in.

An update to the regulations was written by the EU in 2014. This new regulation (Regulation 536/2014) was more pragmatic, simplifying some of the bureaucracy, improving transparency (by including a public database and layperson summaries) and making provisions to support academic and non-commercial trials. But, in a twist of irony, because of the delay in implementing these changes and the U.K.’s departure from the EU in the meantime, this regulation was never written into U.K. law. It did, however, take effect across the EU, giving the EU a distinct R&D advantage over the U.K. What should have been an advantage of Brexit turned into a disadvantage – I hear the Remainers’ jeers!

The MHRA’s consultation seemed to be, at least in part, in response to this – not the jeers of the Remainers but the U.K.’s lack of competitiveness in drug discovery. It asked many questions relating to the existing legislation and proposed legislative changes. The proposed changes are generally welcomed, although the delay in introducing them into U.K. law is not. A comment piece this month in Nature Medicine looks at one change, which was very much supported by the people who replied to the consultation, but which may end up being quite controversial and over which I have some concerns.

The change will introduce a requirement “to offer trial findings with participants in a suitable format or explain why this is not possible”. Part of the problem, as the authors of the comment piece point out, is that it is not clear how best to implement this policy. The authors set out to propose a framework for what information should be summarised for participants, how results should be summarised and when. For example, they suggest that the information should answer these four questions: what question did the trial set out to answer? what did the trial find? what effect have the trial results had, if any, in changing treatment or prevention? and where can the participants find out more?

In the article, the authors make reference to the six-page summary provided to participants involved in the RECOVERY trial. For those not familiar with the trial, it was a large ‘Randomised Evaluation of COVID-19 Therapy’ carried out in the U.K. to try to find effective treatments for COVID-19. However, we need to be careful when we summarise the findings of such studies because the information omitted is just as telling as the information provided, and unless all the information is freely accessible for scrutiny, the information provided can be more misleading than providing no information at all.

If we take the case of hydroxychloroquine used in the RECOVERY trial, the summary says quite clearly that there was “No clinical benefit from use of hydroxychloroquine in hospitalised patients with COVID-19.” The evidence from the study seems to support this, but, of course, the summary fails to mention that hydroxychloroquine was never expected to be effective in patients already hospitalised with severe COVID-19; it was thought that it would most likely be useful in the early stages of COVID-19. If it had been administered earlier, in an outpatient setting, at a reasonable dose, the results may have been quite different.

An Italian study investigated early intervention in around 400 patients in the early stages of COVID-19 between November 2020 and March 2021. Doctors were able to select from a range of drugs and to choose those most appropriate for individual patients. This included the use of hydroxychloroquine (200mg twice a day for seven days), usually in cases where ivermectin was not available and where hydroxychloroquine was not contraindicated (approximately 30% of patients in the study). Patients in the early stages of the illness who took part in the study seldom (8% in phase 2a) or very rarely (4.6% and 1.6% in phases 1 and 0 respectively) required hospital admission. The main result of the study concerned overall mortality: only one patient died from COVID-19 (a lethality of 0.2%, compared with national statistics indicating a lethality of around 3% for similar patients). So the early intervention appeared to be effective, but the complex nature of the study made statistical evaluation difficult.

What the summary also doesn’t say is that the dose of hydroxychloroquine used in the RECOVERY trial was much higher than standard treatment. For the treatment of malaria, a standard regimen might be 800mg initially, followed by a further 400mg six hours later, another 400mg 24 hours after the initial dose and a final 400mg 48 hours after the initial dose (a total of 2000mg over three days). In the RECOVERY trial, patients received 800mg initially, then 800mg six hours later, a further 400mg six hours after that, and then 400mg every 12 hours for the next nine days. So participants received a whopping 2400mg within the first 24 hours of treatment. This is clearly going to affect the adverse event profile and impact patient outcomes. Contrast this with the Italian study, where 200mg was given twice a day for seven days in the early stages of the disease.
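The dosing comparison above can be checked with a short sketch. The figures are taken from the regimens described in the text; the code itself is purely illustrative and not from any trial protocol:

```python
# Cumulative first-24-hour hydroxychloroquine dose under each regimen
# described above (illustrative arithmetic only, not a dosing tool).

def cumulative_dose_24h(doses):
    """Sum doses (mg) administered at or before hour 24.

    `doses` is a list of (hour, mg) pairs.
    """
    return sum(mg for hour, mg in doses if hour <= 24)

# RECOVERY trial: 800mg at 0h, 800mg at 6h, 400mg at 12h,
# then 400mg every 12 hours (so a further 400mg falls at 24h).
recovery = [(0, 800), (6, 800), (12, 400), (24, 400)]

# Standard malaria regimen: 800mg, then 400mg at 6h, 24h and 48h.
malaria = [(0, 800), (6, 400), (24, 400), (48, 400)]

# Italian early-treatment study: 200mg twice a day.
italian = [(0, 200), (12, 200)]

print(cumulative_dose_24h(recovery))  # 2400
print(cumulative_dose_24h(malaria))   # 1600
print(cumulative_dose_24h(italian))   # 400
```

The first-day load in RECOVERY (2400mg) is thus six times the daily dose used in the Italian early-treatment study and half again more than the malaria regimen’s entire first day.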

On reading the summary of the RECOVERY trial relating to hydroxychloroquine, you would be forgiven for thinking that hydroxychloroquine had been shown to be of no use in the treatment of COVID-19. In fact, the UKRI headline summary simply states “Hydroxychloroquine has no clinical benefits.” The situation around hydroxychloroquine (and similarly ivermectin) was so politically charged that fair reporting of studies was practically impossible. The summary presented was quite misleading in this case, and I can imagine many similar scenarios where a summary would fail to tell the whole story.

However, this is just one problem. Another relates to how the data are presented, since it is easy to show one thing when the actual data show something different. To really understand these problems, I’d suggest reading Prof. Norman Fenton’s paper on Simpson’s Paradox. In this paper, he clearly shows how results can be presented, very plausibly, in such a way as to show that a drug is more effective than a placebo, even when it isn’t. Similarly, if you want to see his detailed analysis of the illusion of the efficacy of the COVID-19 vaccine and other similar data, then I suggest looking at Prof. Fenton’s website.
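Simpson’s Paradox is easy to demonstrate with a small sketch. The figures below are the classic kidney-stone treatment data from the statistics literature (Charig et al., 1986), not from any COVID-19 study; they show how a treatment can be better in every subgroup yet look worse in the aggregate:

```python
# Simpson's Paradox with the classic kidney-stone data (illustrative only).
# Each entry is (successes, patients) for that subgroup.
treatment_a = {"small stones": (81, 87), "large stones": (192, 263)}
treatment_b = {"small stones": (234, 270), "large stones": (55, 80)}

def rate(successes, total):
    """Success rate as a fraction."""
    return successes / total

# Within every subgroup, Treatment A has the higher success rate...
for group in treatment_a:
    a = rate(*treatment_a[group])
    b = rate(*treatment_b[group])
    print(f"{group}: A={a:.0%}  B={b:.0%}")

# ...yet aggregating reverses the comparison, because A was given
# disproportionately to the harder (large-stone) cases.
overall_a = rate(sum(s for s, _ in treatment_a.values()),
                 sum(n for _, n in treatment_a.values()))
overall_b = rate(sum(s for s, _ in treatment_b.values()),
                 sum(n for _, n in treatment_b.values()))
print(f"overall: A={overall_a:.0%}  B={overall_b:.0%}")  # A=78%  B=83%
```

A headline summary reporting only the overall rates would tell readers that B is the better treatment, even though A outperforms B in every subgroup – exactly the kind of distortion a trial summary can introduce if the stratified data are withheld.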

A further problem touched on in the paper relates to when the results should be made available. Some might say “as early as possible”, but I see problems with this. Firstly, it gives no opportunity to fully investigate the long-term safety and efficacy of a new treatment. Secondly, really understanding the full impact of a study may take many years. For example, if you are looking at the outcome of a certain cancer treatment, you may need to wait 5-10 years (or more) to really see whether one drug was more effective than another (time to relapse, for example, will hopefully be many years away, or relapse may never occur at all if the drug is really good). So presenting results early (which happens all the time in the published literature, for obvious reasons) may give an untrue picture of how good the drug really is.
