Midjourney Prompt: "microscope sitting on a desk in a lab, close up, watercolor"

Against Explanations

AI can make progress where science has struggled



I think we should give up on explanations. Or, at least, give up on explanations in areas of science where good explanations have been hard to come by, like psychology.

We put a premium on explanations because historically we’ve felt that unless we know “why” something happens, we can’t reliably predict it or change it. Sure, human intuition has been able to predict things in psychology for millennia without any clear “whys”. But intuition isn’t inspectable, debuggable, or transferable. Only explanations—the domain of science—are.

That’s not true anymore. AI works like human intuition, but it has more of the properties of explanations. In other words, it is more inspectable, debuggable, and transferable. 

AI models like GPT-4 can encode intuition and can make progress in places where science hasn’t yet. We should use it to benefit humanity. 

We can find the explanations later.

.   .   .

If you look at the history of psychology, various schools of thought have come and gone purporting to explain the underlying issues that create psychological problems like anxiety and depression. 

Freudians (and their descendants) think it’s about the conflict between the unconscious and the conscious parts of the mind. They also put great emphasis on how past experiences shape our experience of the present. Cognitive behaviorists think anxious feelings come from distorted thoughts, and that if you change your distorted thoughts you’ll resolve your anxious feelings. ACT adherents believe that anxiety is a feedback loop reinforced by attention, and that if we focus less on our anxiety and more on our behaviors, the anxiety will become less of a problem on its own. Somatic experiencing adherents think anxiety is a trauma response that’s trapped in the body and needs to be released. The list goes on.


There’s probably truth to all of these theories. But when we look at the studies, we find that there’s no clear consensus that any of these work better than any others. (There are certain exceptions, like exposure therapy for OCD, which does appear to work better, but that is the exception rather than the rule.) Instead, the therapeutic alliance—literally the intuition-led relationship a therapist builds with their client—mediates most of the outcomes in psychotherapy, regardless of the specific methodology used.

It doesn’t get much better in psychiatry. We’ve been prescribing antidepressants for 40 years, but we still can’t explain why they work. We also don’t really know for sure whether they work, despite lots of economic and research incentives to figure it out. They probably do for certain people, and different drugs work better or worse for different people with different conditions at different times. But we can’t predict it in advance, and we have no real unifying theories for why.

These are just the unresolved questions in psychology and psychiatry! Similar situations abound in every other area of what we usually call the “soft” sciences. In these fields, despite the best efforts of science to explain things, the intuition of human beings still reigns supreme.

This isn’t what science promised us. It promised elegant, unifying theories that explain how anxiety arises, or why antidepressants work (or don’t). It’s the romantic ideal that we got from Newton: He watched an apple fall from a tree and used it to come up with a simple formula to explain the motions of the moon. 

But what if what worked for physics is wrong-headed for psychology? What if we admitted that the world is very complicated and that these fields involve elaborate and often non-linear interactions between essentially infinite variables? What if we admitted that this makes it difficult to develop clear, parsimonious, reproducible explanations for any particular observation?

Historically, we jettisoned human intuition and fell in love with explanations because they make progress possible. Explanations help us make predictions that have the following properties: 

  1. They are inspectable. We can understand what gave rise to the prediction, why it might have been wrong, and what we can change to correct it. 
  2. They are debuggable. Because we know what gave rise to the prediction, if it turns out to be wrong, we can tell that our explanations are wrong—and learn to revise them.
  3. They are transferable. Because they are explicit, they can be written down and communicated succinctly. This allows them to spread and be built upon. 

Scientific explanations ushered in the age of antibiotics and sent us to space. They built the atomic bomb and birthed the internet.

It would’ve been impossible to do any of these things with intuition alone because intuition is too personal and too vulnerable to superstition. Both intuition and explanation allow us to make predictions. Intuition says, “Trust me.” Explanations say, “See for yourself.” 

Expert human intuition—pattern recognition built from experience—works surprisingly well in problem domains that are resistant to scientific explanations like psychology. But human intuition lacks many of the best properties of scientific explanations: 

It is, to some degree, inspectable and debuggable, but the process is slow, prone to errors, and vulnerable to pet explanations.

It is somewhat transferable; this is the role of storytelling. But stories only pave the way to intuition; they don’t transfer it outright. This means that experts with well-developed intuition might get really great at, for example, providing therapy to patients even in the absence of scientific explanations for their effectiveness. But the intuitions developed by those clinicians are trapped in their heads, and only available in a watered-down form as books, or talks, or textbooks.

AI changes this equation.

AI: encoded intuition

Today’s large language models like GPT-4 are very good at predicting what comes next. Given a prompt, they’ll give you a response. They see pairs of prompts and completions over and over again, and thus learn what’s likely to come next. They can encode the billions of different possibilities at play in a situation and learn to make reasonable predictions without needing explanations.

These models have a type of pattern recognition that is akin to human intuition, but with a few key differences:

  1. They are inspectable. We can’t see the chain of reasoning that gave rise to a particular prediction, but we can look at the training data that was given to them, the architecture of the model, and the prompt to understand why a prediction was made.
  2. They are debuggable. We can create theories about why a prediction was off, and change some of the inputs to the model to change its behavior. We can measure how much better or worse this makes the model, and keep making it better over time.
  3. They are transferable. If you have the dataset and the code, you can recreate the model. Or, if you have access to an API for the model, you have access to what it knows without running it yourself. 

None of this is easy! And it can be very hard to understand and modify the behavior of machine learning models. But it is doable. 
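To make the third property concrete, here’s a minimal sketch of what “access through an API” can look like, using the OpenAI Python client (version 1 or later). The prompt and system message are purely illustrative, and the exact client interface may differ depending on the version you have installed.

```python
# A minimal sketch of the "transferable" property: if a model is reachable
# through an API, you can draw on what it has encoded without training or
# hosting it yourself. Assumes the `openai` Python package (v1+) and an
# OPENAI_API_KEY environment variable; the prompt is purely illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a careful clinical assistant."},
        {"role": "user", "content": (
            "A client reports racing thoughts before presentations. "
            "What follow-up questions might a therapist ask?"
        )},
    ],
)

print(response.choices[0].message.content)
```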

You could imagine a branch of psychology that, instead of being obsessed with explanations, focused on prediction using AI models. The game here would not be about coming up with elegant theories. Instead, it would be to generate predictions by gathering a high volume of accurate data and using it to train a model. 

For example, imagine a team of researchers focused on predicting the most effective intervention for anxiety in a given individual. They would gather a vast dataset including demographic information, genetic data, medical histories, lifestyle factors, therapy session transcripts, and prior treatment outcomes for thousands of patients with anxiety. They could then train an AI model on this dataset, allowing it to learn the patterns and relationships between various factors and treatment success.

Once the model is trained, researchers could use it to predict the most effective treatment for a new patient based on their unique profile. The AI model might not provide a clear-cut explanation for why a specific treatment works better for a certain patient, but its predictions could be more accurate than those derived from any single psychological theory.

As the AI model's predictions are tested in real-world settings, researchers can fine-tune its performance by updating the training data with new outcomes, identifying any biases or inaccuracies, and modifying the model's architecture as needed. This iterative process would help the model become even more effective at predicting good treatment options.
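Here’s a toy sketch of that workflow, assuming a simple tabular setup and scikit-learn. The features, treatment list, and outcome labels are synthetic placeholders invented for illustration, not a real dataset, and a real pipeline would need proper evaluation, validation, and bias checks.

```python
# A toy version of the prediction-first workflow described above: train a
# model on historical (patient features, treatment, outcome) records, then
# score each candidate treatment for a new patient and recommend the best.
# All features, treatments, and labels below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
treatments = ["cbt", "act", "ssri", "somatic"]

# Hypothetical historical records:
# [age, baseline_anxiety, prior_episodes, sleep_quality, treatment_index]
# paired with whether the patient improved (1) or not (0).
n = 2000
X = np.column_stack([
    rng.integers(18, 80, n),              # age
    rng.uniform(0, 10, n),                # baseline anxiety score
    rng.integers(0, 5, n),                # prior episodes
    rng.uniform(0, 10, n),                # sleep quality
    rng.integers(0, len(treatments), n),  # which treatment was tried
])
y = rng.integers(0, 2, n)                 # observed outcome (placeholder labels)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# For a new patient, estimate the probability of improvement under each
# candidate treatment and surface the highest-scoring one.
patient = [34, 7.5, 2, 4.0]
scores = {
    name: model.predict_proba([patient + [i]])[0, 1]
    for i, name in enumerate(treatments)
}
print(max(scores, key=scores.get), scores)
```

The design choice worth noticing: the model never has to explain why a treatment scores well for a given patient; it only has to be measurably right more often than the alternatives.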

AI-All-The-Things

It’s presumptuous for a non-scientist to demand we AI-ALL-THE-THINGS, I know. In fact, what I’m suggesting is not science at all—it’s antithetical to it. Science demands that we explain; otherwise, it’s not science.

But as a guy who has spent countless hours and tens of thousands of dollars wading through the psychology literature and the therapy-industrial complex, I have some experience with the limitations of science to explain and ameliorate my problems. Let me speak from that perspective:

Explanations have a functional purpose: they help us make our predictions better. Explanations used to be the only way to make our predictions better, but that’s not so anymore.

We now live in a world where we can suddenly make good predictions in domains where explanations seem to be hard to come by. I’m arguing we should wake up to this fact, and take advantage of it. Now that intuition-like predictions can be made by AI, scientific explanations are less of a bottleneck for progress. That’s a totally new situation in human history.

Moreover, if we do manage to build accurate predictors in fields like psychology using AI, we’ll have new ground for science. It’s not easy to examine a neural network, but it’s much easier than examining a brain. Neural networks that predict accurately might be studied to find the underlying rules they’re following—and that could help us find new scientific explanations for phenomena that we never would have considered before.
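One hedged example of what “studying the predictor” might look like: permutation importance, a standard model-agnostic probe that shuffles one input at a time and measures how much the model’s accuracy drops. It reuses the toy `model`, `X`, and `y` from the sketch above (so with synthetic labels the scores will hover near zero), but on real data the features a model leans on most heavily are natural candidates for explanatory follow-up.

```python
# A model-agnostic probe: shuffle one feature at a time and measure how much
# the model's accuracy drops. Features the model leans on heavily are
# candidates for further (explanatory) study. Reuses `model`, `X`, and `y`
# from the toy sketch above; the feature names are the same placeholders.
from sklearn.inspection import permutation_importance

feature_names = ["age", "baseline_anxiety", "prior_episodes",
                 "sleep_quality", "treatment"]

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>18}: {score:.3f}")
```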

None of this means we should leave scientific explanations behind. It just means we don’t have to wait for them.





Comments

@aaronmcahill over 1 year ago

I enjoyed this article! For the interested reader (and the author), here is some additional reading on the topic: 'Choosing Prediction Over Explanation in Psychology: Lessons From Machine Learning', by Yarkoni & Westfall (2017). https://journals.sagepub.com/doi/10.1177/1745691617693393

Dan Shipper over 1 year ago

@aaronmcahill thanks!! This paper is AWESOME, exactly what I’ve been thinking about. Are there others you’ve found like this?

