🎧 The AI-powered Era of Scientific Discovery Is Here

Dr. Bradley Love is building a tool that can predict the future

Every illustration/'AI & I.'

TL;DR: Today we’re releasing a new episode of our podcast AI & I. I go in depth with Dr. Bradley Love, a professor of cognitive and decision sciences in experimental psychology at University College London and one of the builders of BrainGPT, an LLM focused on assisting neuroscientific research. We dive into how AI is changing the way we do science by making predictions about the future. Watch on X or YouTube, or listen on Spotify or Apple Podcasts.


Science is broken—and Dr. Bradley Love thinks AI might just fix it.

The problem with science is that it’s always been conducted in silos. Let’s break this down with an example from the science closest to us, the science of our bodies. 

Imagine you’re suffering from an unnatural hankering for high-calorie, sugary, greasy food. Your endocrinologist would peg it to chronically high cortisol levels. Your gastroenterologist would blame your gut microbiome. A neuroscientist might study your hypothalamus, and your therapist might link it to seasonal depression. 

Of course, we all know that reality is much more complicated than any one of these singular explanations. But experts are unable to see the forest for the trees. Understandably so. The human brain is unable to fathom—let alone process—the complex nature of reality.  

AI doesn’t have these limitations. It can process large amounts of data across different domains, find patterns in it—and predict what will come next.

Dr. Love is putting this idea into practice as one of the creators of BrainGPT, an LLM that helps people conduct neuroscientific research. While most LLMs are good at looking back on information we already have, excelling at tasks like summarizing, BrainGPT is laser-focused on the future. The LLM capitalizes on AI’s ability to process large chunks of information to find patterns and make predictions about new situations. 

This shift toward predictions could fundamentally change the way research is done, and I sat down with Dr. Love to talk about what this means for the next generation of scientists. It was a fascinating conversation about the interplay between philosophy, science, and AI.

Apart from building BrainGPT, Dr. Love is also a professor of cognitive and decision sciences in experimental psychology at University College London, and a fellow at The Alan Turing Institute for data science and AI, as well as the European Lab for Learning & Intelligent Systems. His lab focuses on understanding how people learn and make decisions, combining behavioral, computational, and neuroscience principles. 

This is a must-watch for anyone interested in the future of science, AI, and how we understand the human mind. Here’s a taste:

  • Creating a collective mind with AI. Dr. Love thinks of individual research papers as incomplete contributions to an expanding field of knowledge, and wants to use AI to engage with this collective mass of information. According to him, individual papers are “flawed, noisy, and incomplete,” and he hopes that LLMs, based on a “tapestry of thousands of papers,” can generate a “signal in the correct direction.”
  • Mimic the brain's computing efficiency. Dr. Love suggests looking to the human brain for a solution to the increasing energy demands of the AI revolution. “Modern GPUs are amazing…but these data centers are going up every day and it’s stressing the grid, there's carbon impacts…whereas our brains are doing a lot of computation, but I guess you just have to eat a sandwich or something,” he says.
  • Build synergy between humans and machines. While building BrainGPT, Dr. Love discovered the best way to get the LLM to accurately predict whether a research hypothesis was correct was to compute the “perplexity” of the model, a metric that quantifies “how surprising the text is to the model,” where a lower perplexity correlates with accuracy. He thinks this is significant for “human-machine teaming” because “you could get a better result than either one alone.” 
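The episode describes perplexity only qualitatively. As an illustration, here is a minimal sketch of how perplexity is computed from a model's per-token log-probabilities; the numbers below are invented for illustration and are not real BrainGPT outputs.

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the mean negative log-probability per token.
    Lower perplexity means the text is less surprising to the model."""
    avg_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_nll)

# Hypothetical log-probabilities a language model might assign to the
# tokens of two candidate hypothesis statements (illustrative values only).
plausible_hypothesis = [-0.5, -0.3, -0.4, -0.2]   # unsurprising to the model
surprising_hypothesis = [-3.0, -2.5, -2.8, -3.2]  # surprising to the model

print(perplexity(plausible_hypothesis))   # lower value
print(perplexity(surprising_hypothesis))  # higher value
```

In this framing, the candidate with the lower perplexity is the one the model finds more consistent with the literature it was trained on.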

How AI is reshaping scientific prediction and explanation

These are the ways in which Dr. Love believes the development of AI will challenge our conventional understanding of science: 

  • AI-guided experimentation. Dr. Love sees a future where we could use AI to answer complex questions like: “What experiment should I run next?” Taking it a step further, he thinks we could have models “generate different patterns of results,” and then compute the perplexity of each of these to surmise which is the most reasonable prediction.
  • Embracing the power of predictions. According to Dr. Love, using LLMs to make predictions in biological science might lead to accurate predictions without full comprehension of the underlying mechanisms, given the field's complexity. “I could see a world in which explanation and prediction unfortunately diverge…it's just because the world's really complex and our brains aren't built to make sense of this kind of stuff,” he adds.
  • Shifting cognitive paradigms. Even though he likes having crisp explanations for the world around him, Dr. Love thinks that as technology and science develop, what humans accept as an explanation will also evolve. “[H]ow we understand the world must be so different now than 500 years ago…so maybe it's just going to change again and we’re worrying about something that people in 100 years from now won't even think twice about,” he says. 

What Dr. Love would do if he could rebuild the world of science

Dr. Love cautions that our quest for simple, intuitive explanations in science may be limiting our ability to truly understand complex phenomena. He says there are “so many variables” in biology that even though “you could tell the clear, intuitive story,” it may not even be an “approximation of the real thing, or it’ll be so crude that it’ll obscure deeper truths.” If he had a free hand at shaping the next era of science, here’s what he would do:

  • Future-proofing for scientists. Dr. Love thinks that the next generation of scientists should have strong computational skills and be proficient at thinking philosophically. “I would do a combination of training people to be a little more philosophical about things and maybe do some reading and thinking about what explanations are and the limits and the study of it, but also more emphasis…on computational skills,” he says.
  • Bridging experiments and reality. According to Dr. Love, scientists should integrate controlled experiments with data gathered from the real world to avoid the risk of creating isolated, potentially irrelevant subfields of study. “I think if you're going to run lab studies, there has to be an interplay with something more naturalistic [in the] real world [like with] big data,” he concludes. 

You can check out the episode on X, Spotify, Apple Podcasts, or YouTube. Links and timestamps are below:

Timestamps:
  1. Introduction: 00:01:00
  2. The motivations behind building an LLM that can predict the future: 00:01:58
  3. How studying the brain can solve the AI revolution’s energy problem: 00:11:14
  4. Dr. Love and his team have developed a new way to prompt AI: 00:13:32
  5. Dan’s take on how AI is changing science: 00:18:27
  6. Why clean scientific explanations are a thing of the past: 00:22:54
  7. How our understanding of explanations will evolve: 00:29:49
  8. Why Dr. Love thinks the way we do scientific research is flawed: 00:37:31
  9. Why humans are drawn to simple explanations: 00:40:42
  10. How Dr. Love would rebuild the field of science: 00:45:03

What do you use ChatGPT for? Have you found any interesting or surprising use cases? We want to hear from you—and we might even interview you. Reply here to talk to me!

Miss an episode? Catch up on my recent conversations with LinkedIn cofounder Reid Hoffman, a16z Podcast host Steph Smith, economist Tyler Cowen, writer and entrepreneur David Perell, founder and newsletter operator Ben Tossell, and others, and learn how they use AI to think, create, and relate.

If you’re enjoying my work, here are a few things I recommend:

The transcript of this episode is for paying subscribers.


Thanks to Rhea Purohit for editorial support.

