We have developed a tool that allows researchers to analyse HIV and TB clinical trial protocols and identify risk factors using natural language processing. The user uploads a clinical trial protocol in PDF format; the tool converts the PDF to plain text, identifies features which indicate a high or low risk of uninformativeness, and generates a risk assessment of the trial. You can find example protocols by searching on ClinicalTrials.gov.
At present the tool supports the following features:
The features are then passed into a scoring formula which scores the protocol from 0 to 100; the protocol is then flagged as HIGH, MEDIUM or LOW risk.
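The scoring step can be sketched as a weighted sum of features mapped onto risk tiers. The feature names, weights and cut-offs below are illustrative assumptions, not the tool's actual calibration:

```python
# Hypothetical sketch: the feature names, weights and thresholds here are
# illustrative, not the tool's actual calibration.

def score_protocol(features, weights):
    """Combine protocol features into a 0-100 score (higher = lower risk)."""
    raw = sum(weights[name] * value for name, value in features.items())
    return max(0, min(100, raw))

def flag_risk(score, low_cutoff=40, high_cutoff=70):
    """Map a 0-100 score to a risk tier."""
    if score >= high_cutoff:
        return "LOW"
    if score >= low_cutoff:
        return "MEDIUM"
    return "HIGH"

# Example: a protocol with a statistical analysis plan and an adequate
# sample size, but no effect estimate.
weights = {"has_sap": 30, "has_effect_estimate": 30, "adequate_sample_size": 40}
features = {"has_sap": 1, "has_effect_estimate": 0, "adequate_sample_size": 1}
score = score_protocol(features, weights)
print(score, flag_risk(score))  # 70 LOW
```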
The Protocol Analysis Tool is written in Python and uses the packages Plotly Dash, scikit-learn, spaCy and NLTK. The tool runs as a web app in the user's browser. It is packaged as a Docker container and has been deployed to the cloud as a Microsoft Azure Web App.
PDFs are converted to text using Apache Tika.
All third-party components are open source and there are no closed source dependencies.
A list of the accuracy scores of the various components is provided here.
Download this repository from the GitHub link, as shown in the screenshot below, and unzip it on your computer.
Alternatively, if you are using Git on the command line, you can clone the repository.
Now you have the source code. You can edit it in your favourite IDE, or alternatively run it with Docker:
Navigate to the folder front_end and run the command: docker-compose up

Each parameter is identified in the document by a stand-alone component. The majority of these components use machine learning, but three (Phase, Number of Subjects and Countries) use a combined rule-based and machine learning ensemble approach. For example, identifying the phase was easier to achieve using a list of key words and phrases than with a machine learning approach.
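A rule-based phase extractor of the kind described above can be sketched with a prioritised list of keyword patterns. The patterns here are assumptions for illustration, not the tool's actual rules:

```python
import re

# Illustrative sketch of a rule-based phase extractor; these patterns are
# assumptions, not the tool's actual rules. Patterns are ordered so that
# "Phase III" is tested before "Phase II" and "Phase I".
PHASE_PATTERNS = [
    (re.compile(r"\bphase\s*(iii|3)\b", re.I), "Phase 3"),
    (re.compile(r"\bphase\s*(ii|2)\b", re.I), "Phase 2"),
    (re.compile(r"\bphase\s*(i|1)\b", re.I), "Phase 1"),
]

def extract_phase(text):
    """Return the first trial phase mentioned in the protocol text, if any."""
    for pattern, label in PHASE_PATTERNS:
        if pattern.search(text):
            return label
    return None
```

In the ensemble approach, a rule-based result like this could be combined with a machine learning classifier's prediction, falling back to the classifier when no keyword matches.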
The default sample size tertiles were derived from a sample of 21 trials in LMICs, but have been rounded and manually adjusted based on statistics from ClinicalTrials.gov data.
The tertiles were first calculated using the training dataset, but for a number of phase and pathology combinations the data was too sparse, so tertile values from ClinicalTrials.gov were used instead. The ClinicalTrials.gov data dump from 28 Feb 2022 was used.
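Computing tertile boundaries for one phase/pathology combination can be sketched with the standard library; the sample sizes below are made up for illustration:

```python
import statistics

# Sketch of computing sample-size tertile boundaries for one phase/pathology
# combination; the sample sizes below are made up for illustration.

def tertile_boundaries(sample_sizes):
    """Return the two cut points dividing the sample sizes into tertiles."""
    cuts = statistics.quantiles(sample_sizes, n=3)  # two cut points
    return cuts[0], cuts[1]

def tertile_of(n, boundaries):
    """Classify a trial's sample size into the small/medium/large tertile."""
    low, high = boundaries
    if n <= low:
        return "small"
    if n <= high:
        return "medium"
    return "large"

bounds = tertile_boundaries([30, 60, 90, 120, 150, 180])
print(tertile_of(100, bounds))  # medium
```

In practice, as noted above, the computed boundaries were then rounded and manually adjusted against ClinicalTrials.gov statistics.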
Future development work on this project could include:
We have identified the potential for natural language processing to extract data from protocols at BMGF. Both machine learning and rule-based methods have a huge potential for this problem, and machine learning models wrapped inside a user-friendly GUI make the power of AI evident and accessible to stakeholders throughout the organisation.
With the protocol analysis tool, it is possible to explore protocols and systematically identify risk factors very quickly.