Midjourney Prompt: "microscope sitting on a desk in a lab, close up, watercolor"

Against Explanations

AI can make progress where science has struggled

I think we should give up on explanations. Or at least give up on them in areas of science where good explanations have been hard to come by, like psychology. 

We put a premium on explanations because historically we’ve felt that unless we know “why” something happens, we can’t reliably predict it or change it. Sure, human intuition has been able to predict things in psychology for millennia without any clear “whys.” But intuition isn’t inspectable, debuggable, or transferable. Only explanations—the domain of science—are.

That’s not true anymore. AI works like human intuition, but it has more of the properties of explanations. In other words, it is more inspectable, debuggable, and transferable. 

AI models like GPT-4 can encode intuition and can make progress in places where science hasn’t yet. We should use them to benefit humanity. 

We can find the explanations later.

.   .   .

If you look at the history of psychology, various schools of thought have come and gone purporting to explain the underlying issues that create psychological problems like anxiety and depression. 

Freudians (and their descendants) think it’s about the conflict between the unconscious and the conscious parts of the mind. They also put great emphasis on how past experiences shape our experience of the present. Cognitive behaviorists think anxious feelings come from distorted thoughts, and that if you change your distorted thoughts you’ll resolve your anxious feelings. Adherents of ACT (acceptance and commitment therapy) believe that anxiety is a feedback loop reinforced by attention, and that if we focus less on our anxiety and more on our behaviors, the anxiety will become less of a problem on its own. Somatic experiencing adherents think anxiety is a trauma response that’s trapped in the body and needs to be released. The list goes on.

There’s probably truth to all of these theories. But when we look at the studies, we find no clear consensus that any of them works better than the others. (There are exceptions, like exposure therapy for OCD, which does appear to work better—but they are rare.) Instead, the therapeutic alliance—literally the intuition-led relationship a therapist builds with their client—mediates most of the outcomes in psychotherapy, regardless of the specific methodology used.

It doesn’t get much better in psychiatry. We’ve been prescribing antidepressants for 40 years, but we still can’t explain why they work. We don’t even know for sure that they work, despite enormous economic and research incentives to find out. They probably do for certain people; different drugs work better or worse for different people, with different conditions, at different times. But we can’t predict any of this in advance, and we have no real unifying theory for why.

These are just the unresolved questions in psychology and psychiatry! Similar situations abound in every other area of what we usually call the “soft” sciences. In these fields, despite the best efforts of science to explain things, the intuition of human beings still reigns supreme.

This isn’t what science promised us. It promised elegant, unifying theories that explain how anxiety arises, or why antidepressants work (or don’t). It’s the romantic ideal that we got from Newton: He watched an apple fall from a tree and used it to come up with a simple formula to explain the motions of the moon. 

But what if what worked for physics is wrong-headed for psychology? What if we admitted that the world is very complicated and that these fields involve elaborate and often non-linear interactions between essentially infinite variables? What if we admitted that this makes it difficult to develop clear, parsimonious, reproducible explanations for any particular observation?

Historically, we jettisoned human intuition and fell in love with explanations because they make progress possible. Explanations help us make predictions that have the following properties: 

  1. They are inspectable. We can understand what gave rise to the prediction, why it might have been wrong, and what we can change to correct it. 
  2. They are debuggable. Because we know what gave rise to the prediction, if it turns out to be wrong, we can tell that our explanations are wrong—and learn to revise them.
  3. They are transferable. Because they are explicit, they can be written down and communicated succinctly. This allows them to spread and be built upon. 
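The three properties above are concrete enough to sketch in code. Here is a purely illustrative toy, not taken from the article: a hand-rolled linear predictor whose weights *are* its explanation. The feature names and numbers are invented for illustration.

```python
# Illustrative sketch only: a prediction whose logic is fully explicit,
# so it has all three properties of an explanation.

def predict(weights, features):
    """A linear prediction: every term's contribution is visible."""
    return sum(w * x for w, x in zip(weights, features))

# 1. Inspectable: the weights ARE the explanation of the prediction.
weights = [0.7, -0.2]        # invented: [hours_of_sleep, cups_of_coffee]
features = [8.0, 2.0]
score = predict(weights, features)   # mathematically 0.7*8 - 0.2*2 = 5.2

# 2. Debuggable: if the prediction is wrong, we can point at a specific
#    weight and revise it, then re-run the prediction.
weights[1] = -0.4            # revised hypothesis: coffee matters more

# 3. Transferable: the explanation is just data; anyone can copy it,
#    apply it, and build on it.
shared = list(weights)
assert predict(shared, features) == predict(weights, features)
```

An opaque model (human intuition, or a large neural network) makes the same kind of prediction, but without the explicit weights there is nothing to inspect, revise, or hand to someone else—which is exactly the gap the article argues AI is starting to close.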

Comments

@aaronmcahill over 2 years ago

I enjoyed this article! For the interested reader (and the author), here is some additional reading on the topic: 'Choosing Prediction Over Explanation in Psychology: Lessons From Machine Learning', by Yarkoni & Westfall (2017). https://journals.sagepub.com/doi/10.1177/1745691617693393

Dan Shipper over 2 years ago

@aaronmcahill thanks!! This paper is AWESOME, exactly what I’ve been thinking about. Are there others you’ve found like this?

Theo Barth about 1 year ago

Thanks for pointing me over to this article.