Finding The Limits: AI and the TMF
Ask any industry clinical research leader about the technologies most likely to digitally transform the future of clinical research, and they will undoubtedly talk about artificial intelligence (AI). Any mention of AI, though, inspires a spirited debate. Some fear that AI will take our jobs, or that AI is unsafe. Others say AI will drive a utopian future and give us freedom from monotonous work. Almost everyone has a strong opinion, and yet facts and plain-language analysis are in short supply. So what is AI? What are its limits? Is AI something we should strive to make part of our clinical trials, our TMFs, and maybe even our everyday lives?
What is AI?
While there is no universally agreed-upon definition of AI, a commonly cited definition is, “The theory and development of computer systems able to perform tasks that normally require human intelligence.” This definition leaves much to individual interpretation. Firstly, defining intelligence quickly spirals into deep philosophical discussion. Secondly, because of the great diversity of humanity, deciding what activities are indicative of human intelligence is fraught with challenges. Because intelligence and AI technology are constantly in flux, no characterization of AI is ever fully complete. However, even though the definition of AI is a moving target, it’s possible to arrive at a more intuitive definition by focusing on the practical outcome. At its most fundamental level, AI is a machine that does complex tasks that previously only humans could complete.
What Can AI Do?
AI is already a part of your eTMF. For example, the optical character recognition (OCR) software that converts scanned images into editable text meets most definitions of AI. Within our eTMF systems, we already have AI doing what AI does best: high-volume, high-repeatability tasks like sorting, classifying, and organizing automatically. AI is at work when your TMF system applies multiple document metadata attributes at once, when you sort documents alphabetically, when you search for text within a repository of documents, or when your signature request automatically goes to the correct distribution list. AI is most successful when working in predictable environments where speed and precision are the most important goals.
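To make this concrete, the automatic classification described above can be as simple as rules that match keywords and apply metadata. The sketch below is purely illustrative, not the API of any real eTMF product; the keywords, zone names, and artifact names are hypothetical examples.

```python
# Illustrative sketch: a minimal rule-based classifier of the kind that
# auto-applies document metadata in an eTMF. The keywords and the
# zone/artifact labels are hypothetical examples, not a real TMF index.
RULES = {
    "protocol": {"zone": "Trial Management", "artifact": "Protocol"},
    "informed consent": {"zone": "IRB/IEC", "artifact": "Informed Consent Form"},
    "signature": {"zone": "Site Management", "artifact": "Signature Sheet"},
}

def classify(document_text: str) -> dict:
    """Return metadata for the first matching keyword, or a fallback
    that flags the document for human review."""
    text = document_text.lower()
    for keyword, metadata in RULES.items():
        if keyword in text:
            return metadata
    return {"zone": "Unclassified", "artifact": "Needs human review"}
```

Note that the fallback routes anything unrecognized to a human, which reflects the division of labor this article describes: the machine handles the predictable bulk, and people handle the exceptions.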
What Can’t AI Do?
AI struggles in some key areas, including many areas in which humans excel. For example, have you ever briefly looked at a few key documents in a TMF and intuitively felt something was wrong? Humans are especially good at extrapolating from very limited information. AI, on the other hand, would struggle to draw any useful conclusion when faced with erroneous or incomplete data (this is known as the garbage-in, garbage-out problem). AI also doesn’t have lived experience like you do. This means AI lacks common sense. A human would likely investigate a data entry indicating that a subject was named Mickey Mouse. AI, however, would see the entry as complete as long as all field requirements are met. Finally, AI doesn’t have social or creative abilities. AI can’t understand the dynamics of a team and can’t communicate its reasoning in a narrative like humans can. Even with the most advanced AI technologies, it’s still up to humans to define a creative vision and build the interpersonal relationships necessary to achieve TMF success.
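The Mickey Mouse example is easy to see in code. An automated validator checks only the structural rules it was given, so an entry that is obviously wrong to a human still passes. This is a hedged sketch with made-up field rules, not any system’s actual validation logic.

```python
# Illustrative sketch: a field-level validator that enforces only
# structural requirements (non-empty, length limit, letters and spaces).
# The rules are hypothetical. There is no common-sense check, so an
# entry a human would immediately question still validates cleanly.
def validate_subject_name(name: str) -> bool:
    """Return True if the name meets the field's structural rules."""
    return (
        0 < len(name) <= 50
        and all(ch.isalpha() or ch.isspace() for ch in name)
    )
```

A name like “Mickey Mouse” satisfies every rule here, so the validator accepts it; only a human reviewer would flag it as implausible.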
What Shouldn’t AI Do?
Human intelligence is flawed. Since AI is modeled after human intelligence, any AI technology will be flawed too. There are plenty of ethical issues with AI[1]. For example, AI can eliminate jobs, drive inequality, and perpetuate its human creators’ biases and errors. Simply put, AI can’t define our values and therefore shouldn’t attempt to substitute for human judgement. This means that AI shouldn’t be making clinical trial safety decisions, resolving complex patient inclusion or exclusion decisions, or interpreting the level of risk in complex regulatory situations, to name just a few scenarios. It’s up to humans to ensure that no AI system, in the eTMF or elsewhere, places greater importance on its objectives than on the shared values we’ve established as a human community.
What Should AI Do?
If AI is to create a better TMF and a better world, AI and humans must find shared purpose and harmoniously coexist. One can imagine a world where AI augments human ability rather than supersedes it. An AI that augments human intelligence could reduce error through the precise and automatic handling of mundane tasks, or improve TMF leadership decision-making and oversight by aggregating data in ways that would otherwise be too laborious or time-consuming. All of this, of course, would increase productivity, and in an industry that seeks to promote human health, that would mean lives saved.
As we’ve discussed above, there are many other, less positive future scenarios for AI. As humans, we have the unique ability to introspectively appraise our own goals, values, and limits. While no one person fully understands the technical and philosophical implications of the future of AI, it’s our shared responsibility to intentionally and thoughtfully define these same goals, values, and limits for our AI (whether in the TMF or in the world at large). While there are no clear answers, it would be a mistake not to recognize that the process of defining these limits is already underway. It’s time we all join the conversation.
[1] https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/