The Clinical Trial Risk Tool has been featured in a guest column in Clinical Leader titled “A Tool To Tackle The Risk Of Uninformative Trials”, written in cooperation with Abby Proch, Executive Editor at Clinical Leader.
In the article, Thomas Wood of Fast Data Science highlights the problem of “uninformative” clinical trials – those that fail to provide meaningful results, regardless of whether the drug being tested is effective or ineffective. He distinguishes these from simply “failed” trials and emphasises the ethical and financial waste they represent. Wood explains that while “uninformativeness” lacks a formal definition, it can be understood through the five conditions of an “informative” trial outlined by Zarin, Goodman, and Kimmelman (2019): an important research question, meaningful design, feasibility, scientific validity, and timely, accurate reporting. Trials excluded from meta-analyses due to bias are often considered uninformative.
Wood describes how the Clinical Trial Risk Tool tackles this problem by assessing trial protocols against these criteria. He suggests expanding the tool to include a template clinical trial budget derived from real-world cost data (e.g., Sunshine Act disclosures). Further enhancements could include identifying endpoints and inclusion/exclusion criteria, then searching clinical trial registries (like ClinicalTrials.gov) for similar past trials to help users evaluate their planned trial’s design choices.
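To make the registry-search idea concrete, here is a minimal sketch of how such a lookup might work against the public ClinicalTrials.gov v2 API. This is an illustration, not part of the tool itself: the function name and the choice of returned fields are our own, and the JSON field names reflect the v2 API schema as we understand it.

import requests

def find_similar_trials(condition: str, intervention: str, max_results: int = 10):
    """Query the ClinicalTrials.gov v2 API for past trials matching a
    condition and intervention, returning basic design information."""
    response = requests.get(
        "https://clinicaltrials.gov/api/v2/studies",
        params={
            "query.cond": condition,
            "query.intr": intervention,
            "pageSize": max_results,
        },
        timeout=30,
    )
    response.raise_for_status()
    results = []
    for study in response.json().get("studies", []):
        protocol = study.get("protocolSection", {})
        ident = protocol.get("identificationModule", {})
        design = protocol.get("designModule", {})
        results.append({
            "nct_id": ident.get("nctId"),
            "title": ident.get("briefTitle"),
            "phases": design.get("phases"),
            "enrollment": design.get("enrollmentInfo", {}).get("count"),
        })
    return results

# Example: surface comparable past trials so a user can sanity-check
# their own endpoints and enrolment targets against them.
if __name__ == "__main__":
    for trial in find_similar_trials("HIV", "pre-exposure prophylaxis"):
        print(trial)

A lookup like this could feed directly into the design-evaluation step Wood describes: once endpoints and eligibility criteria are extracted from a protocol, comparable registered trials give the user a reference class to judge their own choices against.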
Wood also suggests tailoring the tool for different user profiles (patient advocates, financial planners, medical professionals) by providing personalised feedback and recommended actions for protocol improvement. The goal is not to replace human review, but to help users identify design gaps and high-risk indicators early in the process.
Fast Data Science is a leading data science consultancy providing bespoke machine learning solutions to businesses of all sizes across the globe, with a particular focus on the pharmaceutical and healthcare industries.
Guest post by Safeer Khan, Lecturer at the Department of Pharmaceutical Sciences, Government College University, Lahore, Pakistan

Introduction

The success of a clinical trial depends strongly on the structure and coordination of the teams managing it. Given the high stakes and the significant impact of every decision made during the trial, each team member must collaborate efficiently to meet strict deadlines, comply with regulations, and ensure reliable results.
Guest post by Youssef Soliman, medical student at Assiut University and biostatistician

Clinical trial protocols are detailed master plans of a study – often 100–200 pages long – outlining objectives, design, procedures, eligibility, and analysis. Reading them cover to cover can be daunting and time-consuming, yet careful review is essential: protocols are the “backbone” of good research, ensuring trials are safe for participants and scientifically valid [1]. Fortunately, there are systematic strategies to speed up review and keep it objective.
Introduction

People often ask us how the Clinical Trial Risk Tool was trained. Does it simply throw documents into ChatGPT? Or, conversely, is it an expert system in which we have painstakingly crafted keyword-matching rules to find important snippets of information in unstructured documents? In fact, most of the tool is built using machine learning techniques: we either hand-annotated training data or took training data from public sources.

How We Trained the Models inside the Clinical Trial Risk Tool

The different models inside the Clinical Trial Risk Tool have been trained on real data, mostly taken from clinical trial repositories such as ClinicalTrials.gov.
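This excerpt does not spell out the model architectures, but as an illustration of the general approach – supervised models trained on annotated protocol text – here is a minimal sketch using scikit-learn. The CSV file, column names, and labels are hypothetical placeholders, not the tool's actual training data.

# Illustrative sketch only: a supervised text classifier of the kind that
# could be trained on hand-annotated protocol snippets. File and column
# names below are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Each row pairs a snippet of protocol text with an annotated label,
# e.g. which trial phase or condition the snippet indicates.
data = pd.read_csv("annotated_protocol_snippets.csv")  # columns: text, label

X_train, X_test, y_train, y_test = train_test_split(
    data["text"], data["label"], test_size=0.2, random_state=42
)

# TF-IDF features feeding a Naive Bayes classifier: a simple, fast
# baseline for classifying unstructured document text.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=2)),
    ("clf", MultinomialNB()),
])
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))

The appeal of a pipeline like this is that it is cheap to retrain whenever new annotated protocols become available, and its predictions can be traced back to the weighted terms that drove them, which matters when the output informs a risk assessment.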