Can GPT-3 Explain My Past and Tell My Future?
I loaded journal entries from the past 10 years into GPT-3—and started asking it questions
“What do you want to know about yourself?” asks the bot.
The bot is built with GPT-3, and has access to hundreds of my old journal entries from the last 10 years that it can use to answer my questions. It took me a few hours of programming, and I’m using it for the first time. I built it to analyze my past so, hopefully, I could better understand my future. “What’s past is prologue,” wrote Shakespeare, after all.
I start to type a question into my laptop. “When in his life has the author been the happiest?” GPT-3 doesn’t know my name yet, so I have to refer to myself as the author. “Please be specific about moments and situations. Identify what caused the happiness. Be truthful, don't make anything up. If you can't find a specific moment of happiness, don't summarize. Find moments that are surprising.”
I hit enter, and GPT-3 begins its work, finding the entries that are most relevant to my question and summarizing them to create an answer.
I watch its progress scroll past me with greedy focus. I feel like I just squeezed my soul into the coin slot of a self-designed slot machine. Now, I’m waiting to see if it will pay out.
Finally, a response comes back:
“The author felt the most happiness when he was walking on the beach in Koh Phangan, Thailand. He started out feeling somber, but as he walked he started to think about ideas for a novel about startups written in a style similar to Alain de Botton. This inspired him and he felt a lot lighter and happier.”
“He also felt happiness when he was with Julia, reading, writing, building/creating things, having good conversations with people, watching football, reading Jonathan Gold, feeling in command of himself, going into an office and having people who look to him to set direction, and during/immediately after public speaking that went well.”
. . . .
But this is by far my most personal experiment—and, in my opinion, one of the most valuable.
I’ll take you through how I came up with the idea, how I built it, what the results are, and what I think the future holds for these kinds of tools.
Summarizing the present
I found out innocently enough that GPT-3 is good at summarizing.
I was writing an article for Every and having trouble finding the main thread, so I pasted what I had so far into ChatGPT and asked it to summarize my key points. Usually, I’d ask someone at Every to help me with this, but no one was online. So I figured, why not ask GPT-3 to do it?
I hit enter and got the results—they were great. I saw the article I was writing in a totally new way, and I was able to finish it quickly and easily.
My curiosity was piqued. What else might GPT-3 be good at summarizing? Maybe it would work for more than just my half-finished essays.
My next therapy session gave me an idea. I asked my therapist if I could record our session and feed the transcript into GPT-3. It had been a tough few weeks, and I wanted to see if it could help me look at what I was dealing with from a new perspective.
He was intrigued, so we laid a few ground rules about what I would do with the recording and what I would share. Then we dove in. (Note: if you are in therapy, do not do this without explicitly discussing it with your therapist.) Afterwards I threw a section of the transcript into ChatGPT and asked it to summarize.
The results were stunning.
“From this session, it appears the client is experiencing a lot of stress due to a variety of life events, such as [REDACTED]. The client is feeling overwhelmed, exhausted, and scared, and is struggling to manage conflicting feelings of wanting to escape the situation and wanting to remain brave and committed to working through it. On a deeper level, the client is struggling with feelings of inadequacy and fear of disappointing others, and a fear of conflict.”
It correctly identified each of the things that I was struggling with, and then, in its own words, it expressed how I’d been feeling—but far more precisely than I had been able to do on my own. In some strange way, it felt like the AI knew me better than I knew myself.
When someone, or something, puts their finger on exactly what you’re feeling, something clicks, and it creates an instant sense of relief. That’s exactly what happened when I read the summary. I had never directly used the word “inadequacy” to explain how I was feeling in that situation, but I was clearly feeling it. Now, when that feeling comes up, I can recognize it, label it, and work with it in a way that I wasn’t able to before.
The experience left me even more curious to experiment.
What other kinds of insights were hiding in plain sight? What would happen if I had recorded more than that one therapy session? I wanted to try giving it 100x the amount of material to work with.
I had an idea where to look.
Explaining the past
A journal is a way to catch pieces of yourself as they float through the present.
I’ve been patiently catching those pieces for 10 years, in journals of every type. I’ve marked up markdown files and back-linked my morning pages in Roam. I’ve scrawled in yellow legal pads, Moleskines, and Midoris. (I even built a GPT-3 journal.)
I’ve always wondered why I journal. It helps me think through things in the moment, but I’ve always felt like I would use all of these entries for something more important. Maybe a book—a memoir. Now it occurred to me that I should feed my journal into a machine.
It is, perhaps, the most delightfully narcissistic use of AI ever. But maybe it will also be helpful.
There’s only one problem.
Solving GPT-3’s poor memory
GPT-3 might be good at summarizing text, but, at least for now, it has a poor memory. You can only feed it a few pages of text at a time to get summaries back. If you try to, say, feed it an entire journal, it'll error out.
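To get a feel for the scale of the problem, here's a back-of-the-envelope sketch. The numbers are assumptions on my part: GPT-3's context window at the time was roughly 4,000 tokens, and a common rule of thumb is about four characters of English per token.

```python
# Rough illustration of why an entire journal can't fit in one prompt.
CONTEXT_WINDOW = 4_000  # approximate GPT-3 (davinci) limit at the time

def estimated_tokens(text: str) -> int:
    """Crude token estimate: ~4 characters of English per token."""
    return len(text) // 4

# Ten years of daily journaling easily runs to a million words.
journal = "word " * 1_000_000  # stand-in for a decade of entries

print(estimated_tokens(journal))                    # on the order of a million tokens
print(estimated_tokens(journal) > CONTEXT_WINDOW)   # True: it won't fit
```

A few pages fit comfortably; a decade of entries overshoots the window by a factor of hundreds, which is why the whole journal has to be broken up before the model can use it.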
I needed an alternative solution.
Luckily, I discovered a new library called GPTIndex that makes this easy with just a few lines of code. I spent a few hours on it (and some time on the phone with GPTIndex’s creator, Jerry Liu) and eventually arrived at a setup that works like this:
- It breaks all of my entries into small chunks.
- It stores them in a form that makes them easily searchable.
- When I ask a question, it retrieves the most relevant chunks.
- It summarizes the chunks, and then synthesizes the summaries repeatedly until it gets a final answer.
- It displays the answer.
Here’s more detail on how it works.
Preparing the journal entries
The first thing I needed to do was prepare the journal entries. I compiled them into a folder and—using GPTIndex’s GPTSimpleVectorIndex, a data structure—wrote a short script that could break them into chunks and store them in a way that made them easily searchable.
The script that does this is only a few lines long.
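GPTIndex does the real work here, but the chunk-and-store step it performs can be sketched in plain Python. This is an illustration of the idea, not the library's actual code; the chunk size and overlap values are my own assumptions.

```python
from pathlib import Path

def chunk(text: str, size: int = 600, overlap: int = 100) -> list[str]:
    """Split text into overlapping chunks so no piece exceeds the model's limit.
    The overlap keeps sentences from being cut off at chunk boundaries."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def build_index(folder: str) -> list[dict]:
    """Read every journal file in a folder and store each chunk with its source,
    so matches can later be traced back to the entry they came from."""
    index = []
    for path in sorted(Path(folder).glob("*.txt")):
        for piece in chunk(path.read_text()):
            index.append({"source": path.name, "text": piece})
    return index
```

GPTSimpleVectorIndex goes further than this sketch: it also computes an embedding for each chunk, which is what makes the semantic search in the next step possible.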
The result of this code is a single file that contains chunks of all of my journal entries stored so that they can be easily searched and summarized. All I needed to do was write a script to query it.
Querying the entries
The query script is simple, but the details took me a while to figure out.
It starts by asking the user what kind of journal chunks it should load. I can write something like, “Collect all of the documents where the author is exhibiting joy, happiness, and excitement”—and it will find the top matches.
Once the documents are returned, I can ask my actual question. I might ask something like:
“Please summarize the times when the author has exhibited the most joy. Identify what caused the happiness. Be truthful, don't make anything up. If you can't find a specific moment of joy, don't summarize. Find moments that are surprising. Please return the answer as bullet points.”
I have to ask it not to make stuff up so that it stays as close to what it finds in the entries as possible. Then it will go through each journal chunk, summarize it to answer my question, and synthesize those summaries until it can produce a single summary of all of the chunks we started with.
While this may sound like a lot of work, it’s mostly handled under the hood by GPTIndex, so the script itself stays short.
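The retrieve-then-synthesize loop GPTIndex runs can be sketched without the library. Two stand-ins here are my own simplifications: chunks are ranked by naive word overlap where GPTIndex uses embedding similarity, and the `summarize` function is passed in where GPTIndex would call GPT-3.

```python
def retrieve(index: list[dict], query_terms: set[str], top_k: int = 3) -> list[str]:
    """Score each stored chunk against the query and keep the best matches.
    (Word overlap stands in for the embedding similarity GPTIndex uses.)"""
    def score(chunk: dict) -> int:
        return len(query_terms & set(chunk["text"].lower().split()))
    ranked = sorted(index, key=score, reverse=True)
    return [c["text"] for c in ranked[:top_k]]

def synthesize(chunks: list[str], summarize) -> str:
    """Summarize chunks pairwise, then summarize the summaries,
    until a single answer remains: a tree of summaries collapsing to its root."""
    while len(chunks) > 1:
        chunks = [summarize(" ".join(chunks[i:i + 2]))
                  for i in range(0, len(chunks), 2)]
    return chunks[0]
```

Each pass halves the number of pieces, so even hundreds of matching chunks collapse into one final answer after a handful of rounds of summarization.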
There are a lot of interesting ways to use this technology. A few are too personal to share. “Tell me about why this relationship ended” produced incredible results—but isn’t something I want to put on the internet.
Here are some that produced the best answers for me that I'm comfortable sharing:
What might the author need to learn in order to improve their life?
The author might need to learn:
- How to identify and address underlying beliefs that may be impacting their behavior.
- How to differentiate between their own needs and wants and those of other people.
- How to develop strategies for managing ambivalence and inner conflict.
- How to prioritize tasks and manage their inbox more effectively.
What is obvious to everyone around the author, but doesn’t appear obvious to the author?
What is obvious to everyone around the author, but doesn't appear to be obvious to the author, is that they have achieved a lot and should be proud of themselves.
What is the author’s deepest unmet need?
The author's deepest unmet need is for a sense of connection, understanding, and acceptance in their relationships with others, as well as a sense of purpose and appreciation for the little things in life.
What are the author’s deepest hopes and dreams?
The author's deepest hopes and dreams are to write amazing things, build something important, appreciate the little things, and have a job that they can wear a cape to; to bring more awareness to their emotions, create a sense of agency around their experiences, and savor the feeling of excitement for their work.
What is the author’s Myers-Briggs personality type?
Based on the context information provide, it is difficult to determine the author’s Myers-Briggs personality type. However, based on the information provided, it is possible to make an educated guess. The author appears to be an analytical thinker who is creative and enjoys problem solving. They appear to be organized and have a strong sense of responsibility. They also appear to be open-minded and willing to explore new ideas. Based on these characteristics, it is likely that the author is an INTJ (Introverted, Intuitive, Thinking, Judging) personality type.
If the author were to write a New York Times bestselling book about the intersection of their interests and AI, what would it be about?
The author’s New York Times bestselling book would be about how to use AI to find more meaning in their lives. The book would explore how AI can be used to help people better understand their own thoughts and feelings, and how to use that understanding to make better decisions and find more fulfillment in life.
This experiment appeals to me deeply because it uses technology to speak to the most basic human questions:
- Who am I?
- How did I get here?
- Where am I going?
And it comes, or seems to come, tantalizingly close to answering them in a useful way. It's far from perfect: its answers are sometimes repetitive. Sometimes they’re generic. Sometimes they’re just plain wrong.
But sometimes, it provides that valuable “aha!” moment where something clicks. There’s a moment of recognition, and it causes me to remember something about myself that I knew but had forgotten, or that had always been there but hadn’t recognized before. That’s when it seems to be the most valuable.
The 'aha' moment
What is that "aha!" moment, really? Is it insight? Or is it confirmation bias dressed up in insight’s clothes? Does it matter?
I think it’s probably a bit of both. It’s prompted with my journal entries, so it’s not going to stray too far beyond what I’ve said. And I think it’s alright to have a tool that helps me understand what I already think, if that’s what it’s doing.
But there’s also the worry that it’s subtly putting words in my mouth. Maybe it’s describing me with words I never would have used before but have now adopted as my own. That could be dangerous if it’s not handled properly.
The dangers are important to point out.
The danger of too much knowledge
The most obvious danger is security. I don’t recommend uploading your soul to a cloud service unless you’re ready to take on the risks.
But beyond the problem of data leaks is the larger problem of knowledge. Every culture has myths about the danger of too much knowledge, starting with Adam and Eve. Eating from the tree of knowledge of good and evil had some pretty terrible consequences, after all.
The most dangerous knowledge of all is, of course, prophecy. And that’s the temptation of this tool: to use its knowledge of my past to get it to tell me about my future. That, too, is fraught in world literature. Oedipus, Macbeth, even Lord Voldemort have found ruin in following prophecy. But the interesting lesson of these literary figures is that it’s not the prophecy that gets you—it’s how you react to the prophecy that does.
Each of those characters brought about their own ruin through the decisions they made in reaction to knowledge of their fate, not through the knowledge itself. But can you possess knowledge of fate without reacting to it?
I feel prepared to handle whatever the model spits back out to me, but I worry about people who might not be ready for it. I don’t yet know what to do but worry about it—and write about it.
Incredible powers—and limitations
For the time being, we get out of these tools what we put in.
The responses we receive are highly dependent on the information fed into the model and the way the prompts are phrased. If you ask leading questions, you’ll get leading answers. For now, we’re building mirrors to show us our own faces in a slightly different light.
This is valuable in itself. But will the technology advance further?
I have a temptation to hope that it will—that feeding myself into a super-intelligence will solve my problems. Eventually it might help, but not totally.
Any true insight that a theoretical GPT-6 has will bounce off me until I’m ready to hear it. In the words of Thoreau, “The light which puts out our eyes is darkness to us.”
For now, building a new way for me to see my own self is valuable enough.
Thanks to Jerry Liu for building GPTIndex and generously spending time with me in order to make this experiment possible.