I am pleased to announce the Clinical Trial Risk Tool, which is now freely available for public use.
The tool is available at https://clinicaltrialrisk.org/tool.
Screenshot of the tool
The tool consists of a web interface where a user can upload a protocol in PDF or Word format; a number of features are then extracted from the document, such as the number of subjects, the statistical analysis plan, the effect size, and the number of countries.
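To make the idea of feature extraction concrete, here is a minimal, illustrative sketch (not the tool's actual pipeline) of how a protocol PDF could be converted to text and a single feature pulled out with a regular expression. It assumes the pdfplumber library is available, and the file name and pattern are hypothetical.

```python
# Illustrative sketch only -- not the Clinical Trial Risk Tool's real code.
# Assumes pdfplumber is installed; the regex and file name are placeholders.
import re
from typing import Optional

import pdfplumber


def extract_text(pdf_path: str) -> str:
    """Concatenate the text of every page in the protocol PDF."""
    with pdfplumber.open(pdf_path) as pdf:
        return "\n".join(page.extract_text() or "" for page in pdf.pages)


def extract_num_subjects(text: str) -> Optional[int]:
    """Rough example: find phrases like 'enroll 250 participants'."""
    match = re.search(
        r"(?:enrol?l|recruit)\s+(\d{2,6})\s+(?:subjects|participants|patients)",
        text,
        flags=re.IGNORECASE,
    )
    return int(match.group(1)) if match else None


if __name__ == "__main__":
    text = extract_text("protocol.pdf")  # hypothetical input file
    print("Estimated number of subjects:", extract_num_subjects(text))
```

In practice a single regex would be far too brittle for real protocols; the point of the sketch is only to show the shape of the step that turns an uploaded document into structured features.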
The tool can estimate the risk of HIV and TB trials ending uninformatively, and it will soon be extended to cover other metrics such as trial complexity and cost.
The NLP model was developed as an ensemble of components that extract different aspects of information from the text, combining rule-based (hand-coded) and neural network designs.
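The sketch below shows the ensemble idea in miniature: one rule-based component and one statistical text classifier, each answering a different question about the protocol. The component names, the regex, and the choice of a TF-IDF + logistic regression model are all assumptions for illustration, not the tool's real architecture.

```python
# Hypothetical sketch of an ensemble mixing a rule-based component with a
# learned classifier. Names and models are placeholders for illustration.
import re

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


class SapRuleComponent:
    """Rule-based check: does the protocol mention a statistical analysis plan?"""

    def predict(self, text: str) -> int:
        return int(bool(re.search(r"statistical analysis plan", text, re.IGNORECASE)))


class ConditionClassifier:
    """Stand-in for a learned component: classifies the trial's condition.
    A linear model over TF-IDF features is used here purely for illustration."""

    def __init__(self):
        self.pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))

    def fit(self, texts, labels):
        self.pipeline.fit(texts, labels)
        return self

    def predict(self, text: str) -> str:
        return self.pipeline.predict([text])[0]


def run_ensemble(text: str, condition_clf: ConditionClassifier) -> dict:
    """Combine the components' outputs into a single feature dictionary."""
    return {
        "has_sap": SapRuleComponent().predict(text),
        "condition": condition_clf.predict(text),
    }
```

Each component can be developed and evaluated independently, which is the practical appeal of this kind of mixed rule-based and machine-learned design.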
The model's output features are then condensed via a clinical trial risk model, which ultimately produces a three-level traffic-light risk score. The full analysis can be exported as XLSX or PDF.
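To illustrate the scoring step, here is a minimal sketch assuming a simple weighted-sum risk model with a three-level cut-off, plus an XLSX export using pandas. The weights, thresholds, and feature names are invented for this example; the tool's actual risk model is configured differently.

```python
# Minimal sketch of a weighted-sum risk score with traffic-light levels.
# Weights, thresholds and feature names are invented for illustration only.
import pandas as pd

WEIGHTS = {
    "has_sap": 30,
    "num_subjects_adequate": 25,
    "effect_size_stated": 20,
    "multi_country": 25,
}


def traffic_light(features: dict) -> tuple:
    """Turn binary protocol features into a 0-100 score and a 3-level rating."""
    score = sum(WEIGHTS[name] for name, present in features.items() if present)
    if score >= 70:
        rating = "GREEN (low risk)"
    elif score >= 40:
        rating = "AMBER (medium risk)"
    else:
        rating = "RED (high risk)"
    return score, rating


def export_xlsx(features: dict, score: int, rating: str, path: str = "analysis.xlsx"):
    """Write the feature breakdown and overall rating to a spreadsheet."""
    rows = [{"item": name, "value": value} for name, value in features.items()]
    rows.append({"item": "total score", "value": score})
    rows.append({"item": "rating", "value": rating})
    pd.DataFrame(rows).to_excel(path, index=False)


features = {"has_sap": 1, "num_subjects_adequate": 1, "effect_size_stated": 0, "multi_country": 1}
score, rating = traffic_light(features)
export_xlsx(features, score, rating)
print(score, rating)
```

A transparent scoring scheme like this keeps the final traffic-light rating easy to audit, since each extracted feature contributes a visible amount to the total.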