Midjourney/Every illustration.

Reid Hoffman on How AI Might Answer Our Biggest Questions

Learn how to use philosophy to run your business more effectively


Sponsored By: CommandBar

This essay is brought to you by CommandBar, the first AI user assistance platform.

You know all those clunky, unhelpful chatbots in the bottom right of apps? CommandBar is not that: it’s a proactive user assistant that can be embedded into your product to perform actions, fetch data, and co-browse with you. Instead of just answering questions, it can say "I can just show you" and take over your mouse.

If you’re a product, CX, growth, or marketing person, try CommandBar today in your product.

Get started

TL;DR: Today we’re releasing a new episode of our podcast How Do You Use ChatGPT? I go in depth with Reid Hoffman, cofounder of LinkedIn, author, and venture capitalist. We dive into understanding the way AI functions through the lens of philosophy and using it as a tool to make better business decisions. Watch on X or YouTube, or listen on Spotify or Apple Podcasts. 


Reid Hoffman thinks a master’s in philosophy will help you run your business better than an MBA.

Reid is the cofounder of LinkedIn, a partner at venture capital firm Greylock Partners, the host of the Masters of Scale podcast, and a prolific author. But before he did any of these things, Reid studied philosophy—and by helping him understand how to think, it made him a better entrepreneur.

A good student of philosophy rigorously engages with questions about truth, human nature, and the meaning of life, and, over time, learns how to think clearly about the big picture. This is a powerful tool for founders faced with existential questions about their product, consumers, and competitors, and enables them to respond with well-reasoned answers and enviable clarity of thought.

This show is usually about the actionable ways in which people have incorporated ChatGPT into their lives, but in this episode, I sat down with Reid to tackle a deeper question: How is AI changing what it means to be human? How might it change the way we see ourselves and the world around us?

This episode is a must-watch for anyone curious about some of the bigger questions prompted by the rapid development of AI. Here’s a taste:

  • Study philosophy to be a better founder. Reid believes that philosophy is invaluable for entrepreneurs because it trains them to think about key questions they will encounter while building a business, like “how human beings are now” and “how they are as the ecosystems we live in change.” His contrarian take is that “a background in philosophy is more important for entrepreneurship than an MBA.”
  • Broaden the horizons of what you know. Even outside of business school, Reid thinks philosophy is foundational to other areas of study like economics, game theory, and political science, and believes there are deep benefits in interdisciplinary thinking. “[S]ome of the most interesting people are [those] who are actually blending across disciplines within academia,” he explains.

I asked Reid how LLMs weigh in on the long-standing debate between essentialism and nominalism, the two schools of thought that broadly divide the history of philosophy. Before we dive into the details, let me give you some context about my question.

It seems like every company is rolling out a chatbot these days, but most are still pretty annoying to use. They output long, generic answers that are less personalized and useful than a human answer could be. CommandBar is here to change that by acting more like a conscientious human than a traditional chatbot.

CommandBar understands your users' needs, safely accesses account details, and can even co-browse to teach users how the UI works. It's also proactive, offering help and suggestions when your users seem lost, freeing up your support team to handle more complex issues. Upgrade your product with CommandBar and unleash your users.

Get started

To begin with, here are a few pointers about LLMs that are relevant to our discussion:

  • Natural language processing (NLP) is a field of AI that focuses on helping computers understand human language. It's like teaching a computer to read and comprehend words the way you and I do.
  • Embeddings are a technique used in NLP where qualitative data, like words or phrases, are converted into a language that computers can understand, like numbers. To understand how this is done, imagine mapping the qualitative data in a space in a way that preserves as much of the context and meaning of the original data as possible. For instance, words that appear in similar contexts, like “apple” and “orange,” will be closer together than words that are unrelated, like “apple” and “king.” This proximity is what gives embeddings their power, as algorithms can now perform mathematical operations on words, essentially treating them like numbers.
  • Next-token prediction is a process where, after learning from lots of text, the LLM tries to guess the next word in a sentence. It's similar to when you're texting and your phone suggests the next word you might want to type. The model calculates the probabilities of many possible next words and picks one, typically the most likely (see the sketch after this list).
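To make those two ideas concrete, here is a minimal sketch in Python. The tiny three-dimensional vectors and the probability table are made-up illustrations, not output from a real model:

```python
import math

# Hypothetical 3-dimensional embeddings; real models use hundreds or thousands of dimensions.
embeddings = {
    "apple":  [0.9, 0.8, 0.1],
    "orange": [0.85, 0.75, 0.15],
    "king":   [0.1, 0.2, 0.95],
}

def cosine_similarity(a, b):
    """Higher values mean the words appear in more similar contexts."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine_similarity(embeddings["apple"], embeddings["orange"]))  # close to 1.0
print(cosine_similarity(embeddings["apple"], embeddings["king"]))    # much lower

# Next-token prediction: given a context, the model assigns a probability to every
# candidate token and (in the simplest, greedy case) emits the most likely one.
next_token_probs = {"mat": 0.62, "roof": 0.21, "moon": 0.02}  # hypothetical output for "The cat sat on the..."
print(max(next_token_probs, key=next_token_probs.get))  # -> "mat"
```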

Here’s a brief primer on the philosophical concepts we draw on when discussing LLMs and the way they function:

  • Essentialism is the idea that things have a set of characteristics that make them what they are, and these characteristics are both necessary and inherent to their identity. For example, in essentialism, the category of “dog” implies that there are certain features—like having four legs and a tail, and incessantly barking at doorbells—that are essential to something being recognized as a dog. These features are not just arbitrary but are believed to stem from the essence of what it means to be a dog. 
  • Nominalism argues that the categories that give our world logical structure are merely names we have agreed upon and do not necessarily reveal any inherent truth about the nature of things. To use the same example, the category “dog” is simply a convenient way to group animals with certain features together, but there isn't necessarily a “dog-ness” that all dogs share beyond the features we've decided are important for our classification purposes.
  • The mid-20th-century philosopher Ludwig Wittgenstein is usually described as leaning toward essentialism in his early works, while adopting nominalism in the later years of his life. We refer to the former as early Wittgenstein and the latter as late Wittgenstein.

The main difference between essentialism and nominalism lies in how they view the relationship between things and the categories they belong to. Essentialism sees categories as reflecting underlying realities, while nominalism views them as constructs without inherent truths. Reid thinks LLMs don’t “resolve the debate” between essentialism and nominalism, but “add perspective and color” to it. Here are some of his specific thoughts on this topic:

  • Next-token prediction aligns with nominalism. Reid believes that the way LLMs operate, particularly next-token prediction, is more compatible with nominalism and the views of late Wittgenstein because the models operate based on statistical patterns without any understanding of inherent meanings. However, he adds that as LLMs become more advanced, they are being developed to “embody more essentialist characteristics,” such as “ground[ing] in truth [and] hav[ing] [fewer] hallucination[s].” 
  • LLMs are being developed to be more essentialist. Going one step deeper, Reid explains that LLMs are being developed to “make them much more reliable on a truth base.” “[W]e love the creativity and the generativity, but for a huge amount of the really useful cases in terms of amplifying humanity, we want it to have a better truth sense,” he says.

I asked Reid whether embeddings might actually cut the other way, toward essentialism: the way qualitative data (like a word) is mapped in a space seems to capture the essence of that data (its meaning and the particular context in which it is used), which aligns with an essentialist focus on inherent meaning.

  • Embeddings, too, align with nominalism. Reid explains that embeddings form a “network,” and that the construction of the space in which, say, a word is mapped, along with the factors that localize that word in the space, is more late Wittgenstein because it is dependent on how the word is used in practice rather than on some deep underlying logical ordering (see the sketch after this list).
  • Training LLMs on specific types of data. Reid also shares that we are trying to develop LLMs to reason better by training them on data that has really crisp reasoning. They are “learning machines, so you have to give a fairly substantive corpus of data for them to learn from.”
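As a rough illustration of that "meaning is use" point, here is a toy sketch that derives word relatedness purely from co-occurrence counts in a made-up corpus. Real embedding models are far more sophisticated, but the principle is the same: proximity comes from usage patterns, not from definitions.

```python
from collections import Counter

# Tiny made-up corpus; an assumption purely for illustration.
corpus = [
    "i ate an apple today",
    "i ate an orange today",
    "the king ruled the land",
]

def context_vector(word, window=2):
    """Count which words appear near `word`; usage patterns stand in for its 'meaning' here."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok == word:
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                counts.update(t for t in tokens[lo:hi] if t != word)
    return counts

def overlap(a, b):
    """Shared-context score: how many neighboring words two terms have in common."""
    return sum((a & b).values())

apple, orange, king = map(context_vector, ["apple", "orange", "king"])
print(overlap(apple, orange))  # high: "apple" and "orange" are used in the same contexts
print(overlap(apple, king))    # low: little shared usage
```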

Our conversation also explores the contours of the broader relationship between philosophy and AI. Here’s a taste:

  • Technology is transforming our lives. According to Reid, our sense of self and our perception of the world around us is shaped by the technologies we engage with, including AI. “[W]e're not static as we are constituted by the technology that we engage and bring into our being,” he explains. 
  • Humans are evolving with technology. Reid believes that human beings themselves also evolve with technology in a “cultural evolution” that is much faster than biological or geological evolution, and AI is intensifying this process. “[P]art of what we're doing with AI and LLMs is [creating] tools to help accelerate that cultural/digital evolution,” he explains.

Reid also recommends actionable uses of ChatGPT for people who want to think more clearly and learn more about the philosophical models that underlie this kind of thinking. Here’s a glimpse:

  • ChatGPT to clarify your thought process. Reid believes that ChatGPT is a useful tool for studying philosophy because it can help you refine your thoughts through an interactive process. He says, “I put in my argument [into ChatGPT] and say give me more arguments for this—how would you argue for this differently?... and then also how would you argue against it?” (A scripted version of this exercise is sketched after this list.)
  • Generate customized answers with ChatGPT. According to Reid, another way for an aspiring philosophy student to acquaint themselves with fundamental theories is to use ChatGPT to generate tailored explanations. He recommends prompts like, “I’m a non-mathematical college graduate, explain Gödel’s theorem to me,” or “I’m a non-physicist, explain Einstein’s thought experiments around relativity to me.”
  • ChatGPT as a research assistant. Another actionable use of ChatGPT that Reid points to is using the model as an on-demand, personal research assistant to help you solve problems better and more efficiently. “An immediate research assistant is one of the things that is obviously here already today and if you don't think you need a research assistant, it's because you just haven't thought about it enough,” he remarks.
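If you would rather script Reid's argue-for-then-against exercise than type it into the ChatGPT interface, here is a rough sketch using the OpenAI Python client. The model name, system prompt, and example argument are assumptions for illustration:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

argument = "A background in philosophy is more important for entrepreneurship than an MBA."

for instruction in (
    "Give me more arguments for this position. How would you argue for it differently?",
    "Now, how would you argue against it?",
):
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; any chat model would do
        messages=[
            {"role": "system", "content": "You are a rigorous philosophy tutor."},
            {"role": "user", "content": f"{argument}\n\n{instruction}"},
        ],
    )
    print(response.choices[0].message.content, "\n---")
```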

You can check out the episode on X, Spotify, Apple Podcasts, or YouTube. Links and timestamps are below:

Timestamps:
  1.  Introduction: 00:01:58
  2.  Why philosophy will make you a better founder: 00:04:35
  3.  The fundamental problem with “trolley problems”: 00:08:22
  4.  How AI is changing the essentialism v. nominalism debate: 00:14:27
  5.  Why embeddings align with nominalism: 00:29:33
  6.  How LLMs are being trained to reason better: 00:34:26
  7.  How technology changes the way we see ourselves and the world around us: 00:44:52
  8.  Why most psychology literature is wrong: 00:46:24
  9.  Why philosophers didn’t come up with AI: 00:52:46
  10.  How to use ChatGPT to be more philosophically inclined: 00:56:30

What do you use ChatGPT for? Have you found any interesting or surprising use cases? We want to hear from you—and we might even interview you. Reply here to talk to me!

Miss an episode? Catch up on my recent conversations with founder and newsletter operator Ben Tossell, a16z Podcast host Steph Smith, economist Tyler Cowen, writer and entrepreneur David Perell, Notion engineer Linus Lee, and others, and learn how they use ChatGPT.

If you’re enjoying my work, here are a few things I recommend:

The transcript of this episode is for paying subscribers.


Thanks to Rhea Purohit for editorial support.

Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast How Do You Use ChatGPT? You can follow him on X at @danshipper and on LinkedIn, and Every on X at @every and on LinkedIn.


Thanks to our Sponsor: CommandBar

Thanks again to our sponsor CommandBar, the AI user assistance tool. CommandBar is trusted by forward-thinking companies like Gusto and Freshworks to provide bespoke, automated support and increase user happiness.

With CommandBar, your support becomes more efficient, your users more empowered, and your product team more effective. Transform your user experience with CommandBar today—the product that unleashes your users.

Get started
