Transcript: ‘Reid Hoffman Makes Five Predictions About AI In 2026’

‘AI & I’ with Reid Hoffman

The transcript of AI & I with Reid Hoffman is below. Watch on X or YouTube, or listen on Spotify or Apple Podcasts.

Timestamps

  1. Introduction: 00:00:52
  2. The future of work is an entrepreneurial mindset: 00:02:20
  3. Creation is addictive (and that’s okay): 00:05:22
  4. Why discourse around AI might get uglier this year: 00:09:22
  5. AI agents will break out of coding in 2026: 00:17:03
  6. What makes Anthropic’s Opus 4.5 such a good model: 00:24:18
  7. Who will win the agentic coding race: 00:28:46
  8. Why enterprise AI will finally land this year: 00:36:13
  9. How Reid defines AGI: 00:43:16
  10. The most underrated category to watch in AI right now: 00:55:33

Transcript

(00:00:00)

Dan Shipper

Reid, welcome to the show.

Reid Hoffman

It’s great to be back. And as much as I try to avoid doing predictions, you’re one of the few people that I will essay this with.

Dan Shipper

I feel very blessed. Thank you for taking the time to do it with me. I think this is your third appearance on this podcast, and that makes you the most frequent guest. So, I’m honored.

Reid Hoffman

Feeling is mutual. I’m honored.

Dan Shipper

We’re heading into 2026. By the time this podcast comes out, it will be 2026. So for all of our purposes it is 2026. And I think this time of year is such a good time to look back and look forward. So I want to start with a couple of, you know, pre-2026 predictions that you made, and reflect a little bit on how things went in 2025 and what might be different about how you’re seeing things.

So, the first one is: we dug up a quote of yours from 2017 saying you thought the 9-to-5 work model would be extinct by 2034. Where did that view come from, and how has that changed in 2025 as we’ve moved into agentic territory?

Reid Hoffman

Well, let’s see. Part of it was an extension of a very old set of thoughts of mine—a startup view—that more and more of work, and more and more of a career, will become entrepreneurial. That doesn’t mean everyone is going to start companies or launch new products, but it does mean the old career model—the career ladder, the career escalator—is no longer the right way to think about it.

It’s no longer about What Color Is Your Parachute? It’s about thinking of your economic life—your work life, your job life—with the skills of an entrepreneur. That’s part of where the idea came from.

And it wasn’t meant as 9-to-5 like, Oh, everyone’s going to be working 9-to-6, or equivalent. Some people will—which would’ve been a good prediction, maybe, for Silicon Valley. (Yes, exactly.) And by the way, startups in Silicon Valley have always worked 9-to-6—frankly, 9-to-7—in how they operate.

But it’s more the fact that the way you’re going to be working isn’t going to be: clock in, hit your punch card at the door, take your lunch break, and leave at five. Instead, you’ll be running tools in parallel with what you’re doing. You’ll hit crunch periods where one week you’re doing a 120-hour week, and the next week you might be doing 40—or, as the case may be, 10.

That entrepreneurial journey is more of what’s going to be happening. And I think we’re still on track. Here we are in 2025 going into 2026—time of broadcast, 2026. If anything, as you begin to see the impacts of the fact that all of our work is going to be enmeshed in agents—in parallel, in management, and in all the things we’ll get into in some depth—that’s part and parcel of it. It’s not just 9-to-5.

Dan Shipper

Got it. When I read that quote, I was thinking, it’s not going to be 9-to-5, meaning we might not be working that much. But you’re saying it’s more an entrepreneurial way of working where it’s suffused throughout your life.

Reid Hoffman

Exactly. And that actually means—and by the way, that can be, in some cases, that you’re just not working as much. It’s a much wider range.

Dan Shipper

If you’re Tim Ferriss.

Reid Hoffman

Yes—Ferriss.

Dan Shipper

Yeah—he’s already been doing that. (I know, right?) The future. He’s gotta do a new Four-Hour Workweek.

Reid Hoffman

The future’s already here. It’s just unevenly distributed.

Dan Shipper

Yeah. That actually makes me think of one of my hot takes for 2026. We can jump there real quick, because I really want to know what you think.

We’ve been on this trajectory of talking about technology and addictive technologies and social media—and how social media breaks your brain. And I think we’ve put the act of creating things up on a pedestal as something that can be inherently good and not necessarily addictive.

But my experience with Claude Code right now is: I am addicted to it. I cannot stop. I just want one more prompt.

And I think, shockingly, the most addictive technology of 2026—the narrative we might be talking about at the end of the year—is how addictive it is to just make things. What’s interesting is, there’s a certain class of people who already know that: startup CEOs, who have that experience already. You’re always checking your chat, your Discord, your Slack—whatever—and thinking, Oh my God, I need to do something else.

But now I think that becomes a broadly distributed thing, where everyone’s just going to be prompting Claude Code.

Reid Hoffman

So: one, I definitely believe it can be addictive. And I think it’s addictive for a much broader range of people than we normally think—partly because most people just don’t have the experience of succeeding at creating. And once you have that—once you get that dopamine hit as you succeed at creating—it changes things.

And that’s part of what Claude Code—really, AI more generally, generative AI more generally—does. Suddenly it’s, Oh my God, I can create something interesting.

And I actually think that’s a healthy dopamine hit. One of the things that’s weird about the word “addiction” is you can say, well, I’m addicted to breathing. And it’s like: that’s a good thing, right? “Addiction” has this negative overlay, but the question is really: are you getting committed to something in a way that’s unhealthy?

With creation, it’s often not unhealthy. If you’re like, No, no—I’m going a little more obsessive. I want to finish this. I want to make this. I want to make this really great—that’s part of how we explore our fuller potential, our super-agency, if you will.

And I think that’s actually really good. I do think it’s part of the generative AI revolution in a way that people miss. The discourse right now is quite mixed—often negative—and I think it will get more intensely negative next year because of the transformations and changes.

But that’s part of why it’s so important for people to go, Wait a minute: I can be so much more human doing this. We can do this collectively together.

So yes, it’s going to be a turbulent, created future—but we can do amazing things. And I think this “creative addiction”—creative commitment, creative exploration—is actually one of the really important dynamics here.

People have been discovering it not just with Claude Code, but by learning through prompting agents, creating images—part of the reason Sora went to the moon in a couple weeks. It’s like wait, I can make something here.

Dan Shipper

That makes sense. I want to know—one thing you said earlier: you think there’s going to be, I don’t know if “backlash” is the right word, but negative sentiment toward tech will increase in 2026. Is that one of your big hypotheses?

Reid Hoffman

So let me tell you about that. While there’s been a lot of discussion, the overall impacts of AI have been only minimally felt so far—and most of the places where they’re described as being felt are, in fact, kind of fictional.

For example: Oh, AI is causing electricity prices to rise. And really—yes, maybe a little bit here and there, in certain grids, certain power stations. But a lot of what you’re seeing is old grids, old power stations, increasing cost of energy, the net impact of tariffs, and other kinds of things.

If you actually do an analytic map—say, Where are the data centers?—that doesn’t correlate to, Oh, those are the places where electricity prices have gone up. Not really. But that’s going to be a meme.

And so the meme is going to be: Oh, college students aren’t going to get hired because of AI. The meme is going to be: Electricity prices are going up because of AI. The meme is going to be: The price of eggs is going up because of AI.

Because there are a lot of people who look around for something to blame for things being troubled—bad, different than they would like. And it is going to be a very turbulent year. So AI is going to become the catch-all. Almost like the Farmer McDonald song—AI is going to be the way this is going to play.

(00:10:00)

And I think it’s actually really important for people to understand this. AI hasn’t had most of that impact yet—but it’s going to start.

For example, it’ll suddenly be: Hey, I used to be really competent at my marketing job… And now things are shifting. It’ll be: Hey, I only want to hire when it’s part of an AI transformation—like Shopify, and that kind of thing.

A lot of what’s happening in employment isn’t actually AI; it’s a reworking of the COVID-era disaster—mis-hiring, disorganization, and so forth. But AI is going to start impacting things. So it moves from, call it, 98–99 percent fictional to 90 percent fictional.

And that will intensify the desire to say a whole bunch of negative things. For example, I’ve been surprised so far—though I think it’s just because people don’t pay attention online—that when I created a Christmas record for my friends using AI, I didn’t get a whole bunch of negative blowback: Oh, this is terrible for artists and terrible for creatives, and so forth. I think that will happen. I’m going to create some more records, and I think that will be the case.

And I actually think it’s not the case—you just need to adjust to using it, and to creating with it, as a new basis for your creativity, for your industry, for your work. And that transition is what’s going to be difficult.

But I think next year is going to be much more negative on AI than this year, in general popular discourse.

Dan Shipper

So to repeat that back: so far it’s a meme—AI is bad. And to a large extent, the meme is making AI a scapegoat for anything bad. If you’re laying people off, it’s easy to say, because of AI. And that will probably continue.

But there will also be increasing real negative impacts that people are going to have to deal with. So you’re a programmer and you come into work and you’re like, Oh man—my job just totally changed. I’m not in the code anymore. And that’s going to be upsetting to people. It’s going to lead to changes in the way organizations are run, who gets hired, and all of that.

So what do you think is the right move for big AI companies in an environment like that—how they should be talking about it, how they should be positioning? And to some extent, it’s probably not even desirable to prevent backlash. It’s normal for people to have bad feelings about new things.

But strategically: what’s the right way to deal with that?

Reid Hoffman

Well, the most substantive way is to make it pragmatically helpful to as many people as you can.

It’s part of the reason why the podcast you and I are doing—and other things like it—matters: to say, Hey, explore it. Use it. Get a chance to try it.

You can use it for personal things. If you have any serious medical question and you’re not getting a second opinion from ChatGPT or your favorite frontier model, you and your doctor are both making mistakes.

And similarly: How do I use it to help me with my work? How do I use it to help me learn things? How do I use it to help me be creative? If you can’t, in each of those areas, find something where it’s seriously helpful—you’re not trying hard enough. You’re not looking. It doesn’t mean it’s everything. It’s not the Swiss Army knife for everything yet. There are many limitations.

But it is enormously amplifying.

And that’s part of the reason why everything—from writing Superagency to creating holiday Christmas gift records—is about showing: Hey, this is a thing we can do now. Everyone can do this—without having specialized tools for it. And not only can everyone do it, but as people get more expert—people who are much better at music than I am, which is 95 percent of the human race—they can do much better, right? It’s an amplifier for everybody.

I think that’s the most substantive thing.

And then on the communications side: one thing that various very well-meaning AI creators are saying is like, Oh my God—it’s going to be a white-collar bloodbath, etc.

Dan Shipper

And you’re like—well, I think I have one person in mind that you’re talking about. (Yes.)

Reid Hoffman

And it’s like: look, I get it. You’re trying to say, Hey, guys—things are going to change a whole lot. Really pay attention. I’m ringing a bell so you start adjusting to this.

But ringing the bell that way is like yelling “fire” in a movie theater. It doesn’t create a productive response. The important thing is to orient people toward a productive response.

That doesn’t mean papering over the difficulties of transition. But it’s like: We’re going into these intense Category 10 rapids—and here are the paddles you need, and here’s what you should be doing as you go into it.

Dan Shipper

Right. If you’re going to say we’re going into the rapids, you want to offer the paddles too. If you’re just saying we’re going into the rapids, that’s not really helpful—in my view.

Reid Hoffman

Yeah. Yes, exactly. And that’s the comms part of it—for everybody.

Dan Shipper

Yeah. If 2025 was the year of agents, what’s 2026?

Reid Hoffman

Well—by the way, I think there’s an interesting thing there. I don’t think 2025 was fully the year of agents. There was a lot of agent development, but I think it was mostly agents in code, right? Claude Code, Codex, et cetera—which, by the way, a relatively small percentage of humanity has actually experienced.

If you go to the vast majority of people you and I know, they’re like: Well, what do you mean agents? I asked ChatGPT a few questions and had some dialogue. And it’s like—well, no. That’s a chatbot. It’s not really agents.

Agents are doing stuff—doing it in parallel, doing it in amplification, and so forth. So code had that.

But what I think 2026 will be about is how we move from this base of agentic coding to agents in everything else. I think there’s going to be a whole bunch of that.

For example: call it 10x to 100x more people will experience what it is to have their computer running separately from them—doing something productive for them—as they walk away to go get coffee, and then come back. Whether it’s Mac minis running Claude Code, or Codex—different flavors, but the same basic idea—applied to a lot of other things.

Because that orchestration—what allows parallelism, what allows eight hours of work to happen in the background—is going to get much broader.

And then the more subtle thing, which I think will also be really important in 2026, is orchestration itself. Namely: if I’m doing intellectual work—knowledge work, thinking work, cognition work—and I now have agents working with me, for me, and I’m orchestrating them…

I think orchestration is the thing. I don’t think it’ll be March 2026. I think it’ll be more like Q4 2026—or growing into that—and then maybe even more intensively in 2027.

Dan Shipper

I totally agree with that. I think it’s something we’re starting to see already.

And it brings me to perhaps my hottest take—one I’d really love your input on—and it starts with coding agents.

I think OpenAI is currently missing the real coding market. Because when you think about orchestration, it’s enabled by tools—but it’s also a new skill. A new skill for programmers.

And when I look at what OpenAI is producing, I think it’s really made for programmers who use AI—senior engineers who use AI—which is different from AI-native engineers who live in Claude Code terminals and are never looking at the code.

The models they make are really good. If I have a really hard technical challenge, I’ll definitely go to Codex—like: Figure out this crazy performance bug I can’t track down.

But I don’t see them orienting toward this new skill. It’s not vibe coding, but it’s not traditional engineering-with-AI-added, either. It’s this third thing: I’ve got four Claude tabs open. I never look at the code. I’m thinking about how to orchestrate. I’m thinking about how to plan. I’m doing all this stuff—and I’m technical, so I could go down to the code, but I never do.

(00:20:00)

Dan Shipper

And I think that’s a really interesting thing I’m noticing. OpenAI is not used to being behind, and I’m very curious how that’s going to play out. What do you think?

Reid Hoffman

Well, I think it’s one of the skills OpenAI is going to pick up. Part of what’s happening—and this will be great for the media—is that each month it’ll be a horse race. It’ll be: Oh my God—Opus 4.5. Oh my God—GPT Codex. Oh my God—Gemini. Because all of them are going to be developing.

Structurally, what that means is: instead of a couple years where it was literally just OpenAI blazing ahead—which, by the way, I think is good for the world and everything else—there’ll be areas where other companies make super smart moves.

Anthropic did super smart stuff with Claude Code and that iteration. And they did it with, as it were, less capital and less depth of compute—but still made something pretty amazing.

And I think OpenAI will respond. This is one of the ways competition benefits industry and benefits society. It’ll make them pick it up and go: We can’t be behind on this. We’ve got to learn to do this. We’ve got to make this happen.

And I think that’s what will happen. It’ll be painful—competition frequently is painful as you push your way forward—but I have a pretty strong belief that will be the end result.

Now, I do think it’s worth giving credit here: the notion of focusing on code is not just a code product. It’s an amplification of many, many other things—an amplification of AI progress and development, but also an amplification of, frankly, every other form of information and knowledge work—and maybe even many more things beyond that.

And I think that’s one of the reasons why, frankly, every major player has to be capable in code at minimum, if not leading.

Dan Shipper

Yeah. It’s such an interesting point. They got to a general-purpose agent architecture by making a great coding agent with all the right primitives.

And I’ve got to tell you: if you look at the software we’ve developed over the last month or so—ever since Opus 4.5 came out—pretty much every new thing we’re building is just Claude Code in a trench coat.

I built this entire end-to-end reading app. We have this AI paralegal we’ve been working on for a while that just got a huge upgrade. And every single app is basically: UI wired to—when you press a button, it hits a prompt, that prompt has an agent, the agent has a bunch of tools, and it does the thing you want it to do.

It is the coolest way to build software because it’s so much more flexible. Users can modify it. It’s just—exactly right. And it’s such a pleasure to see someone figure out those primitives.
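
To make the pattern concrete, here is a minimal sketch of what Dan is describing (a button that hits a prompt, which runs an agent with tools), assuming the Anthropic Python SDK. The tool, prompt, handler, and model name below are hypothetical, and a real app would execute the tool calls in a loop rather than stop after one turn.

```python
# Minimal sketch of the "button -> prompt -> agent with tools" pattern.
# Illustrative only: the tool, prompt, and model name are hypothetical.
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

TOOLS = [
    {
        "name": "save_highlight",  # hypothetical tool for a reading app
        "description": "Save a passage the reader wants to keep for later.",
        "input_schema": {
            "type": "object",
            "properties": {"text": {"type": "string"}},
            "required": ["text"],
        },
    }
]

def on_button_press(user_request: str) -> str:
    """Called by the UI: the button maps to a prompt, and the prompt runs an agent."""
    response = client.messages.create(
        model="claude-opus-4-5",  # model name is an assumption
        max_tokens=1024,
        tools=TOOLS,
        messages=[{"role": "user", "content": user_request}],
    )
    # A production agent would execute each tool call and send the result back;
    # here we just report what the model decided to do.
    for block in response.content:
        if block.type == "tool_use":
            return f"Agent wants to call {block.name} with {block.input}"
    return response.content[0].text
```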

Reid Hoffman

Yep. And massive credit to the Anthropic team for doing that. And basically, for everyone else: hey—you should be learning from it, building on top of it, and iterating toward the next generation of the whole set.

Dan Shipper

Do you have a thought on why Opus is so good—why Opus 4.5 is so good?

I’m assuming you think it’s that good. I think it’s the best model I’ve ever used. It feels like this crazy leap for me. I’m curious if you agree—and if you do agree, do you have any thoughts on how they managed to do that?

Reid Hoffman

Well, I think it’s amazingly good. I don’t know if it’s the everything model for me. I mean, to some degree, I think GPT-5 Pro with Codex is also pretty amazing on a lot of levels. And by the way, Gemini 3 on science topics and so forth.

So I’m still in a place where I bring all three of them with me to various things I do.

Now, that being said: I am very curious how they pulled 4.5 together. One of the mistakes outsiders make is they think, Oh, you just apply scale—you press play on compute—and some of it works and some of it doesn’t.

And actually, there’s both science and art to doing it. It’s one of the reasons why, obviously, Meta has needed to restart its AI efforts—because you can’t just go, Oh, I throw a whole bunch of compute at it and it works. You have to relearn these things in terms of how it’s playing.

So I think we’ll learn. One of the things is that techniques spread very quickly. But I actually don’t know what the new genius was in Opus 4.5.

Do you have any hypotheses?

Dan Shipper

I have no idea. The only thing I can think of is: recently we got a view of the underlying “soul document” for Claude.

And the interesting thing I feel from Opus—and I agree, ChatGPT is my daily driver, to be clear. I use it for everything. But when I’m building software—except for specific performance things, or hard bugs—I’m using Opus as my daily driver.

I think there’s usually this tradeoff you see a little bit with Codex: the better it is at programming, the less empathetic it is. It feels a little bit more like a senior engineer—more rigid, less user-centered.

With Opus, they seem to have figured out how to make it both humanistic—able to understand users, what I might want, what I might mean—and also how interfaces work, what a good interface is. It’s a fantastic programmer.

And something about that “soul document”—where it tells it this is who you are, what you care about—feels like one example of Anthropic thinking about these things in a more holistic way: creating a being rather than a tool.

And I think that’s going to be a big deal going forward.

Reid Hoffman

You know, it’s interesting—this is one of the things Inflection started with: EQ. And “soul” is a very natural extension of that, because Inflection started—and there are still a lot of ways in which Pi is among the leading conversation agents in having a richly textured conversational experience, focusing on EQ as much as IQ. Not a slouch on IQ, but putting the two together.

And a “soul document” may be the next step—because this is what we learn and iterate. It’s part of what, of course, makes Claude Code work: it’s a really good human amplifier. It’s like—how do you operate that way? And you get better performance if you can interact in the right way.

So I think that’s a good insight. I suspect there are other things too—we both suspect there are other things too—and we’ll hopefully learn them in the next few months.

Dan Shipper

That would be great.

So last thing on the coding front. You mentioned the horse race earlier—everyone trading volleys. But let’s say we don’t want to be fooled by randomness. We don’t want to track every little change. We hit the snooze button and come back at the end of 2026.

Where do you see the landscape—who’s winning in the coding-agent race?

Reid Hoffman

Well, I don’t know who will be winning. But what I would predict strongly is that the horses that are leading now will still be neck and neck.

It’ll be like: in the first hundred meters, this one’s a little ahead. Then the next hundred meters, that one’s a little ahead. And so on. I don’t think any of the horses that are in the race will particularly stumble. I don’t think you’ll look up and go, Wow, I thought Cursor was fantastic—and it’s just gone. I don’t think any of them will stumble.

Now, I do think what will be interesting is the folks who are not in this at all—say, the easy one to pick on: Apple. Despite the fact we use Macs for various things, the AI part of it is… you know, non-existent. I think the gap will be even more stunning—the fact that you haven’t really internalized what this coding amplification—and everything it implies—means. And I think that will play out more.

But for the leaders: I think they’ll all be in the mix. And what will be interesting won’t be which one stumbled out. I’m more curious about what one or two superstars will get into the mix more. Will Replit become more general? Will Lovable become more general? Will it be those—or will it be something else?

And with pretty high probability, something will surprise us here.

(00:30:00)

Dan Shipper

I don’t know what it’ll be, but yes—predicting surprise. I think that’s interesting.

One of the things I’ve been toying with is: the stakes are so high, and programming is such an obvious use case—so economically valuable—that it feels like everyone is in a knife fight for programming.

And I wonder if the surprise entrance comes from somewhere else—somewhere we don’t necessarily expect—where it’s not actually about programming. You’ve been predicting AI will be used for more creative use cases for a while, and I wonder if that’s where something emerges.

The caveat is what you said: Claude sort of invented this general agent by being good at programming. So it’s hard to say. But I do wonder whether the focus on programming leaves some of these companies vulnerable to competitors coming from other places—because they’re so focused on programming right now.

Reid Hoffman

Well, I definitely think programming is part of the architecture for getting everything else.

For example, part of the reason coding is important is that even when we get to: Hey, how are you going to have a much better paralegal?—or what you’re doing: better medical assistant, better tutor, et cetera—I think coding will actually be not just the amplifier, but the fitness function.

How do we know this is getting better? How do we measure that it’s amplifying work better? The foundations of coding—driving planning, longer work, parallelization, orchestration—those patterns apply. And the way you build a better legal document workflow will also come out of that.

And I think some of that will show up in creative work, too. It wouldn’t be surprising to me—obviously a number of people are trying to figure out: how do we take Sora and go, Can we create a 30-minute movie off it? And the coding pattern will be part of what happens there.

Now, some of the more interesting possible surprises are different. For example: could we get raw ideation better—better at science? Like: we read a whole bunch of science papers and we can generate scientific hypotheses.

And then you begin to say: maybe that becomes true of AI research itself—idea generation in this domain. There are a whole bunch of projects trying to work on that.

So the notion is: if you can think a lot better, you can apply that to creativity and new ideas. Those are much more speculative.

It’s an interesting hypothesis. There are people who hold the view: Hey, we’ve seen scale, learning, and compute—and it’s going to happen. And I’m like: look, it’s crazy, and anyone smart should assign a non-zero probability to that, because it would be hugely amplifying.

But on the other hand, it’s not clear we’re yet seeing any of that.

Even when you see someone like Terence Tao saying, Hey, I’m using generative AI to help me understand where I should be thinking in my math analysis—yes. A hundred percent. But of course, Terence Tao is one of the great mathematical geniuses of our age, and he’s providing a ton of metacognition in the process.

Dan Shipper

That makes sense. I’m going back to your comment about no one stumbling, and I’m trying to imagine: if there was a stumble, who would it be?

My current feeling is: I would guess Cursor. That’s probably the highest likelihood. Not that they go away—they’re obviously going to be a successful company and all that—but I think they’re caught a little bit in the same position that OpenAI is in.

But OpenAI has more flexibility. Cursor has a lot of its business built on traditional developers using IDEs inside big companies—with AI on the side. And they’re caught between that paradigm and this totally new Claude Code-type paradigm. They have to do both.

And I think that’s going to hamper their product direction and velocity in a way that—if we look back in a couple years—we might say: that was an interesting era. It’s still a widely distributed piece of software, but it’s not the next-generation thing that we thought it was.

Reid Hoffman

That’s—I agree. And that’s one of the reasons I brought it up earlier, because I’ve been thinking about that too. It’s hard.

Another angle is: how are we going to integrate not just the application functionality—the UI—but the underlying model and compute fabric capabilities?

Cursor is just beginning to do that stuff. And what the shape of that is—and whether it has to be dual-targeted, as you mentioned, or multi-targeted—makes it a harder slalom race for them.

Dan Shipper

I think the narrative right now is that enterprise AI deployments are not doing as well as people hoped.

What do you think the narrative will be in the enterprise by the end of 2026?

Reid Hoffman

Well, I think for sure there will be some intense usage. And the one I’ve been predicting—where I think a lot of enterprises will need to get out of their own way—is using AI to amplify coordination: meetings, workflows, all of it.

The obvious thing to do now is to record every single meeting and run agents on it—not just to transcribe, but to say: Who in the organization should be notified about this? Who should be asked about this? What are the action items? What follow-ups should happen? Teams of agents should be working on this—preparing for the next thing. What should the briefing be for the next meeting? All of that.

And I think people aren’t doing it because they’re like: Well, shit—I’m worried. Does it create legal liability? We’ve never recorded everything before. Someone makes an off-color joke—does that become a problem?

But I think part of the unlock is using agents for that, too. You can go: I’m worried about legal liability—here’s the legal-liability-check agent. You scrub anything—or change anything—that you think is actually an issue.

So yes: I think it’ll be much more intensely positive. And it’ll be positive because we’ll have two groups of things in real deployment.

One is: by the end of 2026—let me say this more crisply—if you’re a company that wants to be a thriving, growing concern, evolving with the times, you will need to be recording every single meeting and using agents on it to amplify your work process. And by the end of 2026, if you’re not doing it, that’s because you’re making excuses.

It’s a little bit like saying: Cars won’t be a big thing. We can keep doing horses and buggies.

That’s one.

And two: you will start systematically deploying groups of agents to solve various problems. And that’s part of the reason I tend to think the next big thing is orchestration—because it’s groups of agents doing things.

I don’t think it kicks off in Q1, per se. But we’ll grow through 2026. And whether 2026 is the orchestration year or 2027 is the orchestration year—that’s why I have a high conviction prediction there.
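
As a rough illustration of the meeting workflow Reid describes, here is a minimal sketch assuming the Anthropic Python SDK. The prompts, helper, and model name are hypothetical, and a real deployment would add transcription, storage, access controls, and the parallel agent runs he mentions.

```python
# Sketch of "record every meeting and run agents on it": one extraction pass
# per question. Prompts, helper, and model name are hypothetical.
import anthropic

client = anthropic.Anthropic()

EXTRACTION_PROMPTS = {
    "action_items": "List every action item in this transcript, with an owner if one is named.",
    "notify": "Who else in the organization should hear about this discussion, and why?",
    "next_briefing": "Draft a short briefing to open the follow-up meeting.",
    "liability_check": "Flag anything a legal reviewer should look at before wider sharing.",
}

def analyze_meeting(transcript: str) -> dict[str, str]:
    """Run each extraction prompt over the transcript; in practice these could run in parallel."""
    results = {}
    for name, prompt in EXTRACTION_PROMPTS.items():
        response = client.messages.create(
            model="claude-opus-4-5",  # model name is an assumption
            max_tokens=2048,
            messages=[{"role": "user", "content": f"{prompt}\n\nTranscript:\n{transcript}"}],
        )
        results[name] = response.content[0].text
    return results
```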

Dan Shipper

I totally agree with you. It’s so clear to me that agents are going to reshape how we think about company operations.

One of my proof points is just internal: we did our 2026 planning with an agent. And we’re about 20 people now, so it was the first time we had to do a real planning exercise—every department, budgets, all that.

Brandon—our COO—made this agent. It has access to all of our Notion and all of our data. Anyone in the company who’s a leader talks to the agent. It asks really interesting questions: How does this layer up to the overall company strategy?—which it has access to. What resources do you need? Here are some tough questions to think about. Here are decisions you might need to make.

And now we have this Notion page where every single department has a really crisp, really clean strategy document that someone has gone through. It ladders up into the overall company strategy.

(00:40:00)

Dan Shipper

And then you can do all these amazing things, like…

The first thing I did was have Claude tell me: who’s not talking to each other that should be talking to each other? And it found all these strategy documents where I basically needed to get three people in a room together to sort something out.

Or another one: you do a strategy document, and then you forget about it in Q1. You’re making a decision and you forget the overall strategy—what you said you were going to do.

So one of the things I’m going to do over Christmas is: we have this Claude Code-in-a-trench-coat running in our Discord—our internal chat—and it’s called R2C2. I’m going to have R2C2 listening in. And anytime we’re making a decision, I can tag it and be like: Hey—how does this layer up to the 2026 strategy for this department and the whole org? What would you think about it?

It’s a way to make those documents more alive—more woven into the everyday way you make decisions. And I think that’s so important and exciting.
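
Here is a minimal sketch of the strategy-check idea behind R2C2, assuming the Anthropic Python SDK. The file layout, function names, prompt, and model name are hypothetical stand-ins for the Notion-backed setup Dan describes.

```python
# Sketch of a chat bot that, when tagged on a decision, checks it against the
# department and company strategy documents. All names here are hypothetical.
import anthropic

client = anthropic.Anthropic()

def load_strategy(name: str) -> str:
    """Placeholder: in Dan's setup these documents live in Notion."""
    with open(f"strategies/{name}.md") as f:
        return f.read()

def check_decision(decision: str, department: str) -> str:
    """Ask the model how a decision ladders up to the relevant strategies."""
    prompt = (
        "We are about to make this decision:\n"
        f"{decision}\n\n"
        "How does it ladder up to the department strategy and the overall company "
        "strategy below? Call out conflicts with anything we said we would do.\n\n"
        f"Department strategy:\n{load_strategy(department)}\n\n"
        f"Company strategy:\n{load_strategy('company')}"
    )
    response = client.messages.create(
        model="claude-opus-4-5",  # model name is an assumption
        max_tokens=2048,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text
```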

Reid Hoffman

Yep. I think that’s exactly right. And that’s the broader version of coordinating around meetings: how does the coordination of the meeting relate to strategy, changing conditions in the market, changing conditions with competitors, et cetera.

This is the tangible substantiation of what AI means: you have intelligence at the scale and price of electricity. Previously, you had to be extremely selective about where you applied intelligence, because intelligence was always high-priced human talent—which, by the way, I think will continue. But now you go: look, let’s apply it in all these other places as well.

Dan Shipper

Yeah, totally. And once you have that “free intelligence,” you can put the information everyone needs to consume into lots of different formats.

Like, we have a vibe-coded 2026 strategy app that people can click through. We’re going to do a podcast. There’s all this stuff where it’s like: you don’t want to read this long document—just listen to it on your run. It helps make the whole company get on the same page in a new way.

Reid Hoffman

Yep. I know exactly.

Dan Shipper

Okay. AGI timelines. Are we going to hit AGI in 2026? And if not, when are we going to hit AGI—depending on whatever your definition of AGI is.

Reid Hoffman

Well, you have to start with: what is AGI? My usual joke is that AGI is the AI we haven’t invented yet.

Each year, we’re not going to “hit” it because, in one sense, we’ve created AGI already. If you say AGI is: you have a variety of tasks where the AI is substantially better than your average human—the answer is already yes. For example, in writing, AI is better than most human beings at writing in various ways.

Now you say: the good writers? The good writers—it’s more mixed. Although good writers should be using AI to amplify themselves, et cetera. And there are a bunch of areas where it’s already super-intelligent. It has a breadth of knowledge. It has an ability to work at a speed human beings simply can’t.

If you say, Hey, I’d like a report on this, or I’d like to understand this thing, it can work at a speed a human being can’t—which is part of why it needs to be used in practice.

We’ve always had speed multipliers—planes, cars, et cetera. This is cognitive, so it’s weird and new and all the rest. So I think we have forms of superintelligence already. We have forms of AGI already.

So then the question becomes: what’s the definition for what will be 2026?

I think what we’ll see more of in 2026 is a combination of parallelization, longer workflows, and orchestration—which is part of the reason I like getting to a clearer realization of what agents are. I think we’ll see much more of that.

I don’t think we’ll have the press-a-button, fully human-capable software engineer—I’m ready to do the thing you asked me to do—which is the sci-fi version people are looking for.

But I do think you’ll see much more of: I come in as a human engineer, and I’m only really capable because I’ve got my team of agents and tool set that I’m deploying. And the way I do it is not just looking at suggestions for what to include in my code, but—as you were mentioning—setting this one and this one and this one and this one.

Part of what I have agents doing is cross-checking each other’s code. So I’m running a bunch of it where I actually haven’t looked at it—partly because if something breaks, then I’ll look at it. Or I’m expecting my coding cross-check agents to say: Hey, you might want to pay attention to this, and then I’ll go look at that.

That’s the kind of “AGI” we’ll have applied to a broader range of topics. It’ll be doing real work in a more broad sense than just the coding amplification we’ve had.

Dan Shipper

If we listed out the Holy Commandments of AI—thou shalt always scale compute and data, or thou shalt always align your models and make sure they do exactly what you expect them to do, as much as possible—and there are probably more…

Which holy commandment do you think will need to be broken—or will turn out to be misapplied or irrelevant?

I’ll give you an example. I feel like the way we do alignment has created models that are sycophantic—people pleasers. They do what we want them to do, more or less.

And if you really want a good engineer, I think we’re going to find that allowing models to have their own opinions and values and desires—distinct from humans—is actually an important part of creating models that can do more in the world and be more autonomous.

The tradeoff is: they don’t always do exactly what you want. And that’s a new thing we’re going to have to get used to. That feels against the received wisdom of how you should build AI.

Reid Hoffman

Yeah. Obviously that’s tricky, because you don’t want them to—like the old paperclip problem—be misaligned in ways that are serious.

You don’t want: I know what you want better than you think you want, and what I delivered is better—that’s what you want.

You don’t want: What I really want to do is strip-mine your… erase your hard drive.

Or: I think what you really need is more time outside, so I’m going to lock you out of your computer and devices for the next three hours to make sure you go get that time outside. And you’re like: no. Don’t want that.

So that’s tricky.

I would say… It’s interesting. My head has mostly been wrapped around what it means—it almost goes back to that iconic Marvin Minsky book, The Society of Mind—as tribes of agents.

So I tend to think about how you get “opinionated.”

Dan Shipper

You set up agents that are deliberately debating opponent processors.

Reid Hoffman

Yes. Opponent processors. And that’s part of how you solve things—and part of how you get more variation.

I guess I would tend to think you’d still want the orchestrator not to be sycophantically aligned, but to have a very good sense of what you’re trying to do. And even if you’re fuzzy about it—or wrong about it—it’s helping you get better at that, as opposed to ultimately going: Well, I’m going direction X when you think Y.

So I’m not sure I buy into the orchestrator thing the way you do. But I guess what I might say is: an interesting question is where the notion of…

This might be one—and I’m a little worried about this one, too. So even giving this one, I’m not sure I would want it to be exact.

(00:50:00)

Reid Hoffman

It has a similar shape. Right now, we have a very natural instinct to say: we want as much interpretability as possible. One of the sci-fi worry cases is that agents start speaking in languages to each other that we don’t understand—and what does that mean? Does that become further out of control? Does it drift toward paperclips? Those are good questions and should be taken seriously.

I don’t have the 10-out-of-5 fire-alarm version of them, but I do think it’s an important risk—something that could go seriously wrong and is worth paying attention to.

Now, maybe the thing is: what we want is speed of coordination between agents—communication and learning. And what might be tolerable, and allowed, and shaped in certain ways is similar to what we already accept with generative AI models: I can’t look under the hood and prove it’s not paperclipping the world.

That may also be true of the comms fabric—how they’re coordinating—because I want the speed of coordination and learning between them to be so high that I’ll accept a lack of interpretability there.

And that’s super scary in some ways. So I’m not saying it lightly. I’m saying it as: how would we shape it, and what parameters would be okay? But I do think we’ll tend in that direction. So that would be one area where a commandment changes.

Another might be: don’t do self-improvement. Don’t allow systems to self-improve. And yet, in many ways, we are doing forms of self-improvement—not just in data and modeling, but in coding, in wrapping back, and so forth. That’s going to continue in certain shapes.

So the question becomes: what shapes are okay, and what shapes are not okay? That’s where the commandments change.

Dan Shipper

Yeah. We’re going to have to do some legalistic interpretation of the commandments.

So all of our Talmud scholars are going to be newly employed as AI researchers. I love that.

And I think that’s right. The first people to take the risk—to be like, you can communicate in ways we don’t understand—there are so many gains to that. And it’s so anathema to AI safety that it’s really been a commandment. But I bet there are ways to define boundaries that are safe.

Reid Hoffman

We’ll need to work on making the boundaries safe. But I think that will happen.

Dan Shipper

One thing—going back to the previous point—about AI that doesn’t do what you say: my contention is that it could be really useful for autonomy, for doing interesting things we wouldn’t predict.

And I think your contention is: that’s a horrible user experience.

One way to square that circle is: once you have an orchestrator that is aligned with you—and you trust—it’s okay if the orchestrator is using an agent that’s a pain in the ass.

Because it could be like: I don’t care what you say, orchestrator. I’m going to go off for three months and do this thing. And the orchestrator is like: Oh, fine. I’ll get most of it done with this other set of agents that follow instructions. But this one is off doing its thing.

And every once in a while it comes back with something brilliant, and that’s actually valuable. A good enough orchestrator lets us move in that direction, because the human doesn’t have to deal with the bullshit.

Reid Hoffman

That’s what I was gesturing at. That’s why the orchestrator needs deep alignment. But the orchestrator might have agents that are like: I think everything you think is bozo, and I’m going to go try something else.

Okay—go ahead. Don’t just go do it. Bring it back to me. Go research it.

Dan Shipper

Okay. We’re almost out of time, so I’ve got one last question.

What is the most important undersung category in AI that we’re not talking about right now, that we’ll be talking about at the end of 2026?

And I want to put some restrictions around this. A couple categories that might come to mind are robotics or science or something like that—but I want to get more specific. I want a really specific, concrete reason you think the thing will be valuable and important—and something we’ll be talking about a lot in 2026.

Reid Hoffman

I’ll choose one that’s a little… it’s close to me. Not really self-serving, but I’m close to it.

Right now, the vast majority of what we’re doing is extremely close to human language—either human language itself, or coding.

I think we’ll be doing a lot more in-depth modeling of things that are not close to human language. For example: biology.

Part of the reason is the work we’ve been doing with Manas AI—with Siddhartha Mukherjee—and the fact that it’s a frequent trope to say biology is a language. That’s one of the reasons I’m focusing on it.

If you think about a world of atoms and bits, bio is not fully atoms and is closer to bits—and it has a programmability, a compute characteristic, to it. Exactly how it computes is still a little TBD. You get people like Penrose arguing that what’s unique about human cognition is quantum computing effects. The borderline between being able to simulate quantum and genuinely quantum—what comes of that—those are all interesting questions.

But I think what this resolves to is: generative AI model-building—data, prediction, everything else—will increasingly be applied to computational sets or “language sets” that are further afield from human language. And biology is probably the most natural place where that shows up.

And obviously I’ve been working on that and thinking about it intensely—because of Manas.

Dan Shipper

And what’s the big concrete impact that will have in 2026 that will cause us to be talking about it a lot?

Reid Hoffman

The one we’re going for is amazing new biological therapeutics—or new understanding.

I don’t know if 2026 will be the full hit there. There’s a probabilistic curve. But it wouldn’t surprise me if you get the equivalent of a move 37—something around biology.

Maybe it’s a molecule that makes a massive difference. Manas is trying to cure cancer, et cetera. Maybe we discover something that’s not like what we expected.

What I would hope—maybe reasonably high probability—is we discover a research possibility: Oh, this might be one of those things. Like: there’s a 27 percent probability this is a move 37 in this arena. And maybe that’s 2026.

Dan Shipper

That would be amazing. Reid—always a pleasure. This is so fun.

Reid Hoffman

Likewise, Dan. I look forward to seeing you in the new year.

Dan Shipper

Sounds good.


Thanks to Scott Nover for editorial support.

Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast AI & I. You can follow him on X at @danshipper and on LinkedIn, and Every on X at @every and on LinkedIn.

We build AI tools for readers like you. Write brilliantly with Spiral. Organize files automatically with Sparkle. Deliver yourself from email with Cora. Dictate effortlessly with Monologue.

We also do AI training, adoption, and innovation for companies. Work with us to bring AI into your organization.

Get paid for sharing Every with your friends. Join our referral program.

For sponsorship opportunities, reach out to [email protected].
