On 8 October, Thomas Wood of Fast Data Science presented the Clinical Trial Risk Tool, along with the Harmony project, at the AI and Deep Learning for Enterprise (AI|DL) meetup sponsored by Daemon. You can now watch the recording of the live stream on AI|DL’s YouTube channel below:
The Clinical Trial Risk Tool leverages natural language processing to identify risk factors in clinical trial protocols. It is available online at https://clinicaltrialrisk.org/tool.
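To give a flavour of the kind of risk scoring such a tool performs, here is a minimal toy sketch in Python. The feature names, thresholds and weights below are illustrative assumptions only; the tool's real NLP models and scoring are described in its technical paper.

```python
# Toy illustration only: not the Clinical Trial Risk Tool's actual code.
# The features and weights here are assumptions for demonstration purposes.
def toy_risk_score(features: dict) -> str:
    """Combine a few protocol features into a crude risk rating."""
    score = 0
    if features.get("has_statistical_analysis_plan"):
        score += 30  # a statistical analysis plan is a strong positive signal
    if features.get("sample_size", 0) >= 100:
        score += 20  # larger trials score higher in this toy example
    if features.get("num_countries", 0) <= 2:
        score += 10  # fewer countries, less operational complexity
    return "lower risk" if score >= 50 else "needs review"


print(toy_risk_score({
    "has_statistical_analysis_plan": True,
    "sample_size": 250,
    "num_countries": 1,
}))  # -> lower risk
```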
Artificial Intelligence and Deep Learning for Enterprise (AI|DL) is a meetup group in London dedicated to talks from industry practitioners applying developments in AI to exciting real-world applications.
We initially developed the Clinical Trial Risk Tool to identify risk factors in HIV and TB protocols. Version 2 is coming soon: it will also make cost predictions (i.e. predict the cost of running a trial in dollars) and will cover further disease areas, including enteric and diarrheal diseases, influenza, motor neurone disease, multiple sclerosis, neglected tropical diseases, oncology, COVID, cystic fibrosis, malaria, and polio.
The project has been funded by the Bill and Melinda Gates Foundation and we have published a technical paper in the journal Gates Open Research:
The software is released under the MIT License, meaning it is open source and can be freely used for other purposes, both commercial and non-commercial, with no restrictions attached. The source code is on GitHub at https://github.com/fastdatascience/clinical_trial_risk.
[Fast Data Science](https://fastdatascience.com/) is a leading data science consultancy providing bespoke machine learning solutions to businesses of all sizes across the globe, with a focus on the pharmaceutical and healthcare industries.
Guest post by Safeer Khan, Lecturer at the Department of Pharmaceutical Sciences, Government College University, Lahore, Pakistan

Introduction

The success of a clinical trial depends strongly on the structure and coordination of the teams managing it. Given the high stakes and the significant impact of every decision made during the trial, each team member must collaborate efficiently to meet strict deadlines, comply with regulations, and ensure reliable results.
Guest post by Youssef Soliman, medical student at Assiut University and biostatistician

Clinical trial protocols are detailed master plans of a study (often 100–200 pages long) outlining objectives, design, procedures, eligibility and analysis. Reading them cover to cover can be daunting and time-consuming, yet careful review is essential: protocols are the “backbone” of good research, ensuring trials are safe for participants and scientifically valid [1]. Fortunately, there are systematic strategies to speed up review and keep it objective.
Introduction

People have asked us often: how was the Clinical Trial Risk Tool trained? Does it just throw documents into ChatGPT? Or, conversely, is it just an expert system, where we have painstakingly crafted keyword-matching rules to look for important snippets of information in unstructured documents? In fact, most of the tool is built using machine learning techniques: we either hand-annotated training data or took training data from public sources.

How We Trained the Models inside the Clinical Trial Risk Tool

The different models inside the Clinical Trial Risk Tool have been trained on real data, mostly taken from clinical trial repositories such as clinicaltrials.
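As a rough illustration of what hand-annotated training data can look like in practice, here is a minimal sketch of training a text classifier on labelled protocol snippets with scikit-learn. The annotation task, snippets and model choice are assumptions made for the example; the tool's actual models and training data differ.

```python
# Minimal sketch, not the tool's real training code: fit a text classifier
# on hand-annotated protocol snippets using TF-IDF features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotation task: does a snippet refer to a statistical
# analysis plan (SAP)? 1 = yes, 0 = no.
snippets = [
    "A detailed statistical analysis plan is provided in Appendix C.",
    "The SAP will be finalised before database lock.",
    "Participants will be followed up for 48 weeks after enrolment.",
    "Adverse events will be recorded at every study visit.",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(snippets, labels)

print(model.predict(["The statistical analysis plan is described in Section 9."]))
```

A real model would of course need far more annotated examples than the handful shown here.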