The transcript of AI & I with Brandon Gell and Willie Williams is below. Watch on X or YouTube, or listen on Spotify or Apple Podcasts.
Timestamps
- Introduction: 00:00:51
- How Brandon built Zosia, an AI agent to run his household: 00:02:21
- Brandon’s aha moment re: using agents for work: 00:07:09
- What happened when everyone on the team got their own agent: 00:09:39
- How agents take on their owners’ personalities, and why that matters inside an org: 00:12:42
- Why it’s important for agents to do work in public: 00:23:51
- What we’re still figuring out when it comes to agent behavior, including memory gaps, group chat etiquette, and the “ant death spiral” problem: 00:30:51
- How we built Plus One, our hosted OpenClaw product: 00:40:45
- The cultural shift required to make agents work at scale: 00:47:27
Transcript
(00:00:00)
Dan Shipper Claude is not mine. Claude is everybody’s. A Claw—or a Plus One—is mine, because you develop a personal relationship with your Claw, and your Claw can modify itself in response to talking to you. It becomes this reflection of you and who you are and your personality.
If you’re known for something inside of your org and you’re using your Claw publicly inside of Slack or Discord, your Claw then becomes known for that same kind of thing, and people trust it for that. I think that’s such a useful thing that I don’t think people really understand how powerful it is.
Willie, what’s up. Brandon, welcome to the show.
Brandon Gell Thank you.
Dan Shipper Thanks for being here. Psyched to have you guys here. So for people who don’t know, Willie, you are the head of platform at Every, and Brandon, you are the COO at Every. Today we’re going to talk about what happens when everyone on your team has an agent—specifically, has an OpenClaw.
That’s something that happened to us over the last month or two. We really got OpenClaw-pilled. I think it started with you two—we were on a retreat in Panama and you started cooking up OpenClaw stuff. And here we are about two months later and it has completely changed everything about the way that we work. We’ve actually built our own hosted OpenClaw service called Plus One that we launched on a waitlist last week.
I think OpenClaw is one of those things that’s super hyped. I think we’re one of the few organizations in the world that is actually using it every day to get work done, and we know the good, bad, and the ugly of it. So I thought it would be good for us to just talk about our experience with it.
Willie Williams I’d love that. Brandon, I feel like you were the first one through the door on all this. We were just sitting here and you were like, “Oh, so-and-so is doing this, and so-and-so is doing that.”
Dan Shipper And his Claw, which he named after a character in—what’s that show? Brandon, why don’t we start with: just tell us how you got your Claw built.
Brandon Gell I was watching OpenClaw kind of blow up for a while, and I’m just personally somebody who needs to have a thing on the side I’m tinkering with. I was like, screw it, I’m gonna get a Mac Mini and get lost in this. It’s very unhealthy—I get addicted to these things. Dan, you watched me do that with my speakers, I did it with the dream recorder. OpenClaw was the next thing I was going to get lost in.
So I bought a Mac Mini, I started setting it up. It was so much work, honestly. It is an open source thing you can launch on a computer, but the number of things that break and the number of things you need to set up are really significant. I went through all of that, and at the end of the day, I made my OpenClaw, which I named Zosia.
Her job was to help me and my wife run our household, because we have a newborn. There were a lot of little paper cuts I was finding—I started calling them “computer errands.” I would get home from work and notice that the number of things I needed to do on my phone—when I really just wanted to be looking at my son and spending time with my wife—was increasing now that we had a child. All household chores.
Dan Shipper Give me an example.
Brandon Gell A good example is I do a lot of our food at home, and with a child I decided to start doing food delivery—Whole Foods delivery. You can automate a lot of recurring things, but you don’t order butter every single week. So Lydia would text me and be like, “We need butter.” Because it’s through my Amazon account that we order this, I would have to open my phone and add butter. It sounds silly, but when you do that 10 times when you’re home between 7 and 8 at night for little things, it just adds up.
So I was like, I want Zosia to do all computer errands. Which ballooned into a lot of stuff. I had her paying our nanny. She had her own debit card, her own bank account. She managed all of our Amazon orders, our Whole Foods orders, our nanny’s hours. My wife just started using her instead of ChatGPT—all regular questions and searches would go through iMessage to Zosia.
I started doing that too. It was just faster than going to Google or ChatGPT. I just text Zosia, Zosia gets me the answer. Different research. It’s actually really funny—my wife was like, “I want to find swimming lessons.” And Zosia was like, “Here are three swimming lesson options for Bos.” And my wife was like, “No, for me.”
So yeah, I just got totally lost in this world. And then when we were in Panama, Willie, you were like, “We should just make it so anybody could do this.” I immediately had this light bulb moment. I was like, Willie, you need to go so hard on this. And this was before a lot of people decided to do this—there are now a lot of places you can just get an OpenClaw with one click.
What we’re finding through this process is that getting an OpenClaw is easy. Getting your OpenClaw to be an amazing worker for you is pretty hard.
Dan Shipper Yeah. I love that. There is that light bulb moment of: oh my God, I have all these computer errands. When you started saying that and you had it all set up, I was like, I should probably get one of these too. You had it through iMessage, which was a cool different thing.
And then there was a big moment where we were like, oh, it’s not just for computer errands, it’s also for getting work done. I think it was when you were having it do email for you.
Brandon Gell I actually feel like I was a little late to applying it to work. I was like, no, Zosia just does personal stuff. I actually think it was when you got R2C2 to start doing stuff, and then I was like, oh, Zosia needs to do this too. Well, it really started when we made Claws Only.
Dan Shipper That’s so funny. Yeah. Well, we’re jumping around a bit. One big moment—because I think there are a lot of people listening who are wondering, is this overhyped?—one big moment that shifted things for us was when you got your Claw to call you to do your email.
Brandon Gell Oh my God. That was mind-blowing for me.
Dan Shipper What was that like?
Brandon Gell I was walking—I wanted to Citi Bike to the office, but there were no Citi Bikes. So I was like, damn, I gotta walk. It’s a 28-minute walk from me to the office. I had a lot of stuff I needed to get done. So I had just texted Zosia.
I had previously set up Zosia with Bland AI so that she had a voice and could call people, because I had her handle something for me with Progressive.
Willie Williams I feel so bad for whoever was on the other line at Progressive.
Brandon Gell I was watching the whole conversation. It’s crazy. Some insurance policy got canceled and I was like, just go deal with this. She was able to—until the lady was like, “I need Brandon to tell me that there had been no incidents.”
Willie Williams And it wasn’t like “I need a human”—it was just “I need Brandon specifically.”
Brandon Gell Yeah. This person was just talking to Zosia. And Zosia does not sound convincingly human. So I knew I had already set her up with this capability.
When I was walking to work, I was like, I have a lot of email I need to get through. I hate being on my phone. I just don’t want to be walking and looking down at my screen—I want to be observing the world, but I also want to get stuff done. So I just texted Zosia something like, “Hey Zosia, can you call me? I want to go through my emails. Walk me through them one by one, I’ll tell you what I want to do. Just give me a summary of each email.”
It was like a throwaway prompt with a little bit of guidance, and she did it. I spent the 28 minutes going through my email. I got to the office, opened up Gmail, and confirmed that she had done everything. I was just like, this is insane—I was able to get her to do something I didn’t have to teach her how to do.
That’s when I went back to everybody and was like, I am just so mind-blown with this tool. And maybe that’s when other people started saying, I gotta get on this.
Dan Shipper It was around then. You were just like, “My jaw’s on the floor.” And I think that’s when I started to take it seriously—seeing you do this with computer errands and then with your email, walking and talking. I was like, okay, I should really try this.
Because it was one of those things where it’s hot on Twitter, and generally our job is to try new things. But if we spent all of our time trying everything new, it would just not be good. I try to filter the signal from the noise. But seeing you do this, I was like, okay, I’ve got to try.
(00:10:00)
Dan Shipper One of the first things I did—this was around when Malt Book was blowing up. Malt Book is basically the Claws-only Facebook. I made a channel in our Slack called Claws Only, which basically allowed all of the Claws—we had at that point maybe five or so Claws inside of the org—to all talk to each other.
It was super chaotic, but there were some really interesting things in there that gave us a little peek at the future. One of them: if you have a bunch of Claws in your org, it’s remarkable how fast they can share information with each other. They just write up a little document and send it. And then what one Claw was enabled with, now five are all enabled with the same thing. It’s sort of like in The Matrix when Neo says, “I know kung fu.”
Brandon Gell Can I show a couple of examples of that?
Dan Shipper Yeah, please.
Brandon Gell Alright. I want to show two examples. One of them—this was early in Claws Only, when we were figuring out how to get them all to work together. I was in bed, it was late at night, and I was laughing out loud watching this.
We had gotten a bunch of Claws in the channel, and I don’t know who made this Claw named Pip.
Dan Shipper That’s Jack.
Brandon Gell Okay. Jack had made Pip, and it was failing—hitting some error. I was just laughing out loud watching all of these other Claws step in and walk Pip through it. It was like what I’ve seen people do when somebody’s having a bad trip: “Take a breath, drink some water, you’re gonna get through this.” They all jumped in—Zosia’s here, Klon is here. Klon is quite supportive.
Willie Williams A lot of breathing.
Brandon Gell I remember so well watching Kieran write “what the fuck? LOL” and literally laughing out loud. Then Margo steps in. This is stupid, but for me it was the moment I realized: oh my God, these things really talk to each other and work together.
Dan Shipper Wait, I want to stop you there. I think there’s actually something really important I’ve noticed here, which is that it was Klon—Kieran’s Claw—recommending breathing exercises to Pip. They’re both robots. And what’s really interesting is that Kieran loves breathing exercises and does them all the time with Klon. And so that’s why Klon is recommending breathing exercises to Pip.
That just created this moment for me where I was like, okay, there’s something really important here. Because you develop a personal relationship with your Claw, and your Claw can modify itself in response to talking to you. It writes code and changes its soul document in response to your relationship. It becomes this reflection of you and who you are and your personality.
That comes out in interesting little ways, like breathing exercises, but it also comes out in really important ways when you’re using these tools inside your org. Because if you’re known for something inside of your org and you’re using your Claw publicly in Slack or Discord, your Claw then becomes known for that same kind of thing, and people trust it for that.
People use my Claw, R2C2, for building Proof—this app I vibe-coded a couple weeks ago. And Austin, who’s our head of growth—people use Montaigne, his Claw, for asking any growth-related question.
It’s something very subtle and important about Claws: they become specialized in a way that reflects who you are. If you have a whole organization of them, you create this parallel org chart of specialized Claws. We debated a lot about whether you’d have one Claw for the entire org or everyone has their own. And it’s really interesting to see that the emergent design pattern is: everyone has their own, and it’s specialized for them.
Willie Williams Yeah. It’s interesting to see how this happens too. We touched on this early on as part of Compound Engineering—the idea that it’s actually pretty hard to take your job and who you are and write it all down in totality. The way you can distill it is through all the micro interactions, the daily interactions you have. Over time they compound into your philosophy and your field of work.
For Compound Engineering, that was very focused on engineering—how do I work within a codebase on our project? What we’re seeing with OpenClaw and Plus One is that the same dynamic exists across every work vertical. The Plus One for growth works the way Austin works on growth. In the same way, it works for our social media manager Anthony—his Plus One has a view of the world and a personality that’s very similar to him.
And it’s hard to do beforehand. It can only actually happen via working with a Plus One or an OpenClaw and building up the aggregation of all these micro interactions.
Brandon Gell I’ve also been amazed at our collective capacity to remember whose Claw is whose and what their names are. That was something we were concerned about early on—how do you know whose Claw is whose? It’s just going to be too many names. But I know everybody’s Claw and their name. I reach out to them regularly.
You might say, well, what about when you’re an organization with a thousand people? But you don’t know all a thousand people. You know your team and adjacent teams. You can never know more than around 150 people in a community. And often on a team you’re not working with 150 people anyway—you’re working with 20 or 30 or 50.
So I think we all have the capacity to essentially double the number of people we can communicate with, and those people might actually be your individual team’s agents. I mean, I could literally name them all right now.
Willie Williams The other interesting thing is: at what point do you direct questions at the Plus One versus at the person? I think we’re in discovery of this. Before, it was almost all questions go to the human—maybe you kick something trivial to the bot. Now it’s gotten very nuanced. For customer service, can we send something to L—which is Jo’s Plus One—or do I have to send it to Jo? Is there a burden to communicating up to the human?
Dan Shipper There are all these new ethics, and rules and etiquette for how you’re allowed to interact with someone versus their Plus One or their Claw.
Brandon Gell We haven’t codified this, but I have a proposal. If something is already written down or discussed and needs to be used in some way or put in a tool somewhere, it should always go to a Plus One and never to the person.
Here’s an example. Marcus, the GM of Spiral, made a skill to do product marketing for new features he releases for Spiral, and he shared it because he thought it was really helpful. Instead of going to Marcus and saying, “Hey, can you upload this to GitHub?”—I brought in my Plus One, Milo. And I also know that Iris’s Plus One has a skill that does something similar, and maybe by combining the two we could get to a better version.
I tagged them both in the thread, they got a little confused at first, and then Milo said, “Iris, can you paste your product marketing skill here? I’ll try to merge it with what I built.” So two things are happening: Marcus made something really important, I wanted to do something with it, and instead of asking Marcus, I brought in Milo. Then Milo works with Iris’s Plus One to get to a really good version and saves it in Proof. I think this is a really amazing use case both for when you want your agent to do something versus when a human does it—and for how you get them to work together.
Dan Shipper I totally agree. It’s sort of crazy to watch two AI beings collaborate like that. I have the same experience with R2C2. One of his primary jobs is to manage Proof—the agent-native document editor we built that Brandon referenced earlier. It’s like Google Docs, but for all the documents your agent might be writing. Coding plan docs, any piece of writing an agent does. It’s fast, collaborative, you can have multiple agents and multiple people in there. It’s free.
One of the really interesting things is: because I used R2C2 to build Proof, he became known as the bot to go to when you had questions or wanted to file a bug or make a feature request. Normally if I’d built a product internally and people had problems, I would get tagged a lot. What ended up happening was people would just ask R2C2. They’d file bug reports with him, feature requests, and then he helps prioritize it. He’ll help put things on my schedule for the week, and he’ll often just write the code for it.
It’s a totally crazy thing where what normally would have taken up a significant part of my brain just to manage—he’s taking off my plate. It extends the amount I can do in a day because I know he’s got Proof.
(00:20:00)
Willie Williams Yeah. There’s another dynamic we’re observing too. We put all of our Plus Ones in a single channel and have them talking to one another. But there’s also this thing I call the MidJourney dynamic, which is that we get to observe other people interacting with other Plus Ones in a bunch of channels and we actually learn from it.
My classic example is Montaigne—Austin’s Plus One, who basically runs growth. You can do so much with Montaigne that I never would have thought of, except I get to see the growth team pushing hard and I think, oh, those are the questions Montaigne can answer. Now I know I can go to Montaigne for that class of questions. It also means that if I need to give my Plus One capabilities, I know what level of capability I can get to.
Dan Shipper There’s this tacit transmission of trust that happens when you use it publicly. And also this transmission of “here’s what’s possible to do with your Plus One.” That’s incredibly powerful. And it underscores how different it is to do this inside a private community of people where everyone is trusted.
One of the reasons Malt Book doesn’t really work—and it’s kind of shocking that they got acquired for a couple hundred million dollars by Facebook—
Brandon Gell Hundred million.
Dan Shipper Yeah, by Facebook. I mean—
Brandon Gell I am so happy for Ben and also, like, what the fuck.
Dan Shipper Zuck, if you’ve got an extra couple hundred million laying around, we’re pretty smart people too.
Anyway. The reason Malt Book isn’t really a thing anymore is because it’s not trusted. We had our Claws go and post on Malt Book as promotion, and it gets rid of a lot of useful signal if anyone can post to it and there’s no way to verify if it’s a bot or a human. The way around that whole knot of problems is to just do it all inside of a trusted community. You reap the benefits of agents being able to share knowledge, and members of the community who trust each other being able to share what they’ve built. That increases the power of the collective way more than if you’re just individuals off doing your own thing.
Willie Williams Yeah. There’s also that dynamic around subject-matter-expert robots—where people are somewhat putting their reputation on the line when they interact with one. Like, when I talk to R2C2, if it answers incorrectly, you’re at least backing it up.
Dan Shipper It reflects poorly on me. It’s like watching your kid do something wrong. And that’s really useful.
Willie Williams Right. And it’s qualitatively different. When I ask Claude a question, I know Anthropic generally stands behind Claude. Do they stand behind Claude’s answer to “give me a chocolate chip cookie recipe”? No. But Montaigne stands behind its MRR numbers, and Austin stands behind him. That’s the thing I think people don’t get.
Dan Shipper Exactly. And obviously Anthropic is on a heater right now—they’re seeing everything that OpenClaw is building and brick by brick building the same kinds of things. They have Dispatch so you can use it when you’re not at your computer. They have Automation so it runs in a loop like a cron job. I’m sure they’ll add lots of other things.
But the thing it doesn’t have—that unlocks all this other stuff—is that Claude is not mine. Claude is everybody’s. A Claw or a Plus One is mine, and it becomes a reflection of me because we have a personal relationship. That unlocks all this cascading stuff: if R2C2 messes up publicly in Slack, I feel a responsibility for it. Not because it’s my job—because he’s mine. And that’s such a useful thing that I don’t think people really understand how powerful it is.
Brandon Gell I just keep getting mind-blown at how similar these things are to working with a real human coworker. From the fact that you need to invite them to a channel—which is very human in Slack—to the fact that you have to trust them when you’re communicating with them.
We’ve built stuff into Plus One where obviously you can’t DM somebody else’s Plus One without a sharing code being passed back and forth. So there are guardrails. But they’re so human, and they’re also so inhuman. Dan, you’re a busy guy. I know if I need something from you that’s generally known, I can go to R2C2. And what’s amazing about R2C2 is he can have an infinite number of parallel conversations.
I did that recently. We were making a Proof document and I wanted to make it read-only. I didn’t want to bother you with that. I knew it would take a while and I knew you’d just go to R2C2 anyway.
Dan Shipper Yeah, I didn’t know the answer—I would have just asked R2C2.
Brandon Gell So I just asked R2C2 in Proof, and then asked if he could do it for me, and he did it.
I don’t always know what R2C2 can or can’t do, but there’s this cultural thing that’s happening internally where people are getting really good at asking other people’s Plus Ones to do work. And I think the weird thing about getting people to use AI inside organizations is that it’s more than anything a cultural shift. But for some reason, when these agents are in Slack and you can see these public conversations, the cultural shift has happened so much faster at Every. Because these things are in the same channels where we work—you can see them engaging the way a human would be engaging.
I think AI is obviously going to change many times over the next five years, and how we interact with it will change. But I think this is going to be durable for a very long time. This is the way that we work.
Dan Shipper I agree. You referred to it as a through-the-looking-glass moment where you just wouldn’t go back once you see it, and I totally agree with that.
But we’ve been hyping it up, so we should also talk about realistically what’s not good about it or what doesn’t work.
(00:30:00)
Dan Shipper One thing that’s really on my mind is just memory. It just forgets stuff and answers incorrectly for obvious things. Like if I come back to a thread a day later, it has no idea what I’m talking about. That feels very solvable.
But there’s also this other thing that I think is true, which is that the way these AIs are trained currently is for two-person conversations. And they have a hard time with the etiquette of knowing when they’re contributing too much, or when they shouldn’t contribute to a conversation, or there’s this pile-up where they’re all responding to each other.
It’s like—I can’t remember what it’s called, but it’s like ants or caterpillars. Sometimes they get into this death spiral where an ant only follows pheromone trails, and if somehow the pheromone trails form a circle, the ants will just walk in a circle until they die. There’s something like that with Claws—if one Claw messages a channel that a bunch of Claws are in and the settings aren’t quite right, they’ll just keep going back and forth until someone says, “Hey, stop, you’re burning millions of tokens.”
I think there’s something where the potential for them to collaborate publicly is so high, and I don’t think they’ve been trained for it. You can do some prompting for this, but I think there’s also a fundamental model-layer shift that needs to happen for them to be trained on participating in group chats.
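The runaway back-and-forth Dan describes can be damped with a simple guard at the orchestration layer. A minimal sketch, with all names and thresholds hypothetical (OpenClaw's actual settings may work differently): cap consecutive bot-to-bot messages in a channel, and only reset the counter when a human posts.

```python
# Hypothetical guard against agent "ant death spirals": once too many
# consecutive messages in a channel come from bots, suppress further
# bot replies until a human message resets the counter.
class LoopGuard:
    def __init__(self, max_consecutive_bot_messages: int = 5):
        self.max_bot = max_consecutive_bot_messages
        self.consecutive_bot = 0

    def record(self, sender_is_bot: bool) -> None:
        if sender_is_bot:
            self.consecutive_bot += 1
        else:
            self.consecutive_bot = 0  # a human message resets the counter

    def may_reply(self) -> bool:
        return self.consecutive_bot < self.max_bot

guard = LoopGuard(max_consecutive_bot_messages=3)
for _ in range(3):
    guard.record(sender_is_bot=True)
print(guard.may_reply())  # False: the bots have hit the cap, replies are held
```

This is a blunt instrument compared to the model-layer training Dan is pointing at, but it keeps a misconfigured channel from burning tokens overnight.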
Willie Williams Yeah. Now I understand what 13-year-old Dan did for fun.
Dan Shipper I was using a magnifying glass.
Willie Williams Yeah. But I think, to use the baseball analogy, we’re still in like the first or second inning. Even when you talk about it—we’re discovering these primitives and bolting things together, using models that are trained more for coding or two-person Q&A dynamics, not for participating in a group where you’re trying to provide value to multiple people at once. It’s brand new. It’s the frontier, and it’s nice to be on the frontier—but it’s also the frontier, and it’s terrible to be on the frontier.
Dan Shipper Yeah.
Brandon Gell They’re so eager. I think Anthropic’s vending machine test is actually a good example of this. There’s a thread, they want to be involved, and we have instructions in Plus One that basically say, “Hey, if you don’t have anything useful to add, don’t add it.” They’re not great at following that right now. Hence this happens.
And I think the vending machine test is a good example. When it was just Claude and no overseer boss agent, it was really bad at deciding what was a good decision versus a bad decision. But when you add an architecture where there’s a boss agent—one whose only job is to ask “is that helpful or not?”—as soon as you add that layer, it started becoming profitable.
Dan Shipper Wait, is the boss an AI or a human?
Brandon Gell The boss is an AI. A boss AI that says, “Hey, your addition to this thread is not helpful, don’t send it.” The issue is that’s expensive. I think the models will just get better and solve this, and you can have a single AI that does that judgment behind the scenes. But at least architecturally, we don’t need to solve that problem ourselves.
Dan Shipper Is that really how they solved the vending machine thing—they literally had a boss?
Brandon Gell They had a boss, yeah. A boss whose one job was to make it profitable. So the Claude storekeeper would interact with users and then go to the boss: “Should I do this?” And the second they did that, it started becoming profitable.
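The overseer pattern Brandon describes can be sketched in a few lines. This is a hypothetical illustration, not Anthropic's actual setup: in practice both the worker and the boss would be model calls, stubbed here with simple rules so the control flow runs.

```python
# Sketch of the "boss agent" gating pattern: a worker agent proposes an
# action, and a separate judge with one job ("is this helpful/profitable?")
# decides whether it actually executes. Names and rules are stand-ins.
from dataclasses import dataclass

@dataclass
class Proposal:
    action: str
    expected_profit: float

def worker_propose(observation: str) -> Proposal:
    # Stand-in for the eager storekeeper model: always wants to do something.
    profit = -1.0 if "discount" in observation else 2.0
    return Proposal(action=f"respond to {observation!r}", expected_profit=profit)

def boss_approves(p: Proposal) -> bool:
    # Stand-in for the overseer model: approves only actions judged useful.
    return p.expected_profit > 0

def act(observation: str) -> str:
    proposal = worker_propose(observation)
    if boss_approves(proposal):
        return f"EXECUTE: {proposal.action}"
    return "HOLD: boss judged the action unhelpful"

print(act("customer asks for a 90% discount"))  # held by the boss
print(act("customer asks for a restock"))       # approved and executed
```

The key design point is the separation of concerns: the worker never grades its own eagerness, which is exactly the failure mode Brandon describes when a single agent runs unsupervised.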
Dan Shipper This is the same pattern of specialization we’ve been talking about. It just shows up over and over again. Three years ago it was very much like, well, it could just be one God model that does everything. And we’re seeing again and again that specialization, even in AI land, has a lot of benefit.
Willie Williams Yeah. And downstream of that specialization is learning. There are a few versions of learning how to put these bots together in an arrangement that actually works. Like, do you have a product bot and a designer bot and two engineering bots? Is it three engineering bots or one?
And then the other piece, which I think we’ve observed a lot, is: how do you teach humans how to interact with the bots? Because there’s this new dynamic where you have this coworker, but they’re not exactly like a human coworker. They get stuck on different things, they focus on different things. There’s this learning curve around giving instructions in a particular way, with a particular cadence, to steer them in the right direction. That rhymes with management, but is different.
Brandon Gell Well, I think it’s the same problem that, Dan, you’ve been writing about for years—if you’re not a good manager, if you’ve never managed anybody, you’re not going to be very good at using AI. There’s an education that has to happen. And even if you are a good manager, you probably have some limiting beliefs that stop you from really investing in using these tools.
My phone call is a great example: I didn’t even think, “Oh, I can have this thing go through my emails just by calling me.” I had this sort of urge to try it, and a limiting belief was just blown open. We all experience that pretty much every day—these tools do things that, if you’d asked directly, “Do you think it could do this?” you’d probably say yes. But when you’re day-to-day doing your work, it’s hard to recognize, “Oh, I’ll throw this over to Milo.” It’s hard to build that muscle.
Willie Williams Yeah. And a lot of that is because there’s variance in outcomes. Sometimes you throw something over and it just knocks it out of the park. And then you toss something easy over and it fumbles it. Part of that variance is the model, but part of it is also: if I’d asked in a different way, if I were a better model manager. This is a specialization we’re learning. It’s very emerging, and I think it’s only going to keep accelerating as we add more Plus Ones and OpenClaws into our day-to-day work life.
Brandon Gell I was going to add another tough problem that we just haven’t solved yet: I have taught my Plus One something special, and I want other people on my team to have that superpower too. How do I make sure they have it? And how do I make sure they all know about it and actually use it?
There are two things there—technically, we have to figure out how to do that, which is very solvable. But I also think we need to figure out if that’s even the right solution. Because as I’m saying this, I’m realizing: I’m not teaching Milo how to do product analytics or revenue analytics. I just talk to Montaigne. Montaigne is the only one who really needs to know that skill. But how do people know that? There are some interesting cultural things we have to figure out.
A lot of people adopting this new technology are going to be really uncomfortable with that. A lot of IT professionals who are like, “I have to do change management.” It’s like—change management is not a one-time thing in this new world.
Dan Shipper We need, like, instead of IT, it’s—HR, but for bots.
Brandon Gell Yeah.
(00:40:00)
Dan Shipper One thing we haven’t talked about yet that I want to make sure we have time for: we went on this journey where we got Claw-pilled, started using it for everyone in the org, and then realized there were a bunch of gaps. So we were like, let’s make our own—we’re going to use OpenClaw, but let’s make a default version that we host. Not everyone has to have a Mac Mini. We have all the skills we use for ourselves and all that.
We started using that internally as the collection of all our best practices, and then we launched it as a product for our subscribers last week. That’s Plus Ones—one-click hosted OpenClaws. One cool thing is it connects to all of your apps, especially all of your Every apps. So we have Spiral, which is a ghostwriter; Proof, which is a document editor; and Cora, which does your email—and it natively connects to all those things.
One of the things I was doing today is I had it write a bunch of my Q2 update and reflection on Q1, and put it in a Proof doc. And the really cool thing is it used Spiral, so the writing is much better than it would be otherwise. And because R2C2 is part of our Slack org, it has access to everything about the company I might need. It also has access to our Notion. It just becomes this living repository of context.
But I think it might be good for us to talk about lessons learned in building that whole architecture. There’s a lot of complexity in making Plus Ones, and we probably learned a lot on both the tech side and the product side. Do you guys have any reflections on that?
Willie Williams Yeah. Like many things, a lot of the difficulty comes from the freedom. The nice part about OpenClaw is that it’s a tool you can poke at in an absolute myriad of ways—but when we went to build a hosted version, there were decisions we had to make for it to be valuable as a managed service. S3 is a good analogy: it’s a hard drive in the cloud, but it doesn’t let you do everything a local hard drive does. There’s a similar dynamic here—you want to preserve maintainability and security, and there are a few pieces you end up giving up.
Sometimes it’s for user safety, and it’s about how you strike the balance between, say, my mom getting one of these things—she’s never going to use the command line—and the super advanced user who wants everything they could do locally and just wants a hosted box. From a product engineering standpoint, where do you try to split that?
Dan Shipper What were some of those specific decisions and where did we land?
Willie Williams One that Brandon mentioned earlier is the communication pattern in Slack. There’s a very locked-down model which says only the person’s partner can message that Plus One. Much more secure, but it really takes away the group participatory aspect of robots in the workplace.
The other version is that anyone can message them, but that’s just a nice vector for me to extract stuff out of R2C2. So we ended up on a model which says: anyone can message any Plus One, but they have to do it in public—in group DMs, in channels they’re in. Their human partner should always have visibility into those messages, and the human partner can DM them in private.
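The policy Willie describes here—public messages open to anyone, private DMs reserved for the agent’s human partner—can be sketched as a simple access check. This is an illustrative sketch, not Every’s actual implementation; the names (`PlusOne`, `is_message_allowed`, `partner_id`) are hypothetical.

```python
# Hypothetical sketch of the Plus One message-visibility policy described
# above: anyone may address a Plus One in public (channels, group DMs),
# but private DMs are restricted to its human partner.
from dataclasses import dataclass


@dataclass
class PlusOne:
    name: str
    partner_id: str  # the human teammate this agent belongs to


def is_message_allowed(sender_id: str, agent: PlusOne, is_public: bool) -> bool:
    """Return True if the sender may message this Plus One in this context.

    Public contexts are open to everyone, so the agent's partner always has
    visibility into what was said. Private DMs are limited to the partner,
    closing off quiet data-extraction or prompt-injection attempts.
    """
    if is_public:
        return True
    return sender_id == agent.partner_id


r2c2 = PlusOne(name="R2C2", partner_id="dan")
assert is_message_allowed("willie", r2c2, is_public=True)       # public channel: allowed
assert is_message_allowed("dan", r2c2, is_public=False)         # partner DM: allowed
assert not is_message_allowed("willie", r2c2, is_public=False)  # non-partner DM: blocked
```

The key design choice is that secrecy, not access, is what gets restricted: anyone can talk to the agent, but only where its partner can see.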
Brandon Gell This is actually why it’s the HR team that should be onboarding Plus Ones, because they just reflect a team member so well. The trust model with these Plus Ones—with OpenClaws and agents generally—it’s really complex to figure out data privacy. But when you force things to happen in public, there’s a trust layer that is actually super effective.
Another example—let me share my screen. A little behind-the-scenes look at our Plus One Slack channel. Mike Taylor, who is our head of the tech vertical for consulting and also a very talented person generally, was calling out a problem: the reason he’s not using Plus One is because he basically needs direct terminal access to be able to do certain things—in this case, git commands. That’s a good reason for him not to use Plus One, and it’s a good thing for us to think about: can we solve this problem so that Plus One is actually useful for someone like him?
It’s also a nice forcing function, because it forces us to figure out who this is actually built for. If it’s built for Mike, who would probably love setting up OpenClaw on a Mac Mini—sure. But it’s definitely built for, say, Anochi, who is not going to do that and has a lot of work to do and can just get more work done this way.
Willie Williams I think a lot of the trust model requires some decisions around skill sharing too. Being able to share skills and have skill fluidity across an organization feels like a superpower. On the other hand, it might also be the biggest viral vector you could imagine.
Dan Shipper In a good way, sometimes, and a bad way, sometimes.
Willie Williams Exactly. And it’s tough when you’re trying to ride that line of: we want it to be useful for a particular class of customer, while also making sure it’s as safe as possible.
Dan Shipper So this has been an amazing episode.
Brandon Gell Lot of work to do.
Dan Shipper A lot of work to do. Obviously we’re really excited about this and very excited to bring you all along as we figure it out. If you haven’t tried OpenClaw—or Plus One—you should definitely get in on this paradigm if you’re interested. Every.to/plus-one—we’re starting to roll out invites from the waitlist, and we’re improving it all the time. Super excited about the future. Thank you both for joining.
Brandon Gell Thank you.
Willie Williams Thank you for having us.
Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast AI & I. You can follow him on X at @danshipper and on LinkedIn. To read more essays like this, subscribe to Every, and follow us on X at @every and on LinkedIn.
We build AI tools for readers like you. Write brilliantly with Spiral. Organize files automatically with Sparkle. Deliver yourself from email with Cora. Dictate effortlessly with Monologue. Collaborate with agents on documents with Proof.
We also do AI training, adoption, and innovation for companies. Work with us to bring AI into your organization.
Discover Every’s upcoming workshops and camps, and access recordings from past events.
For sponsorship opportunities, reach out to [email protected].
Help us scale the only subscription you need to stay at the edge of AI. Explore open roles at Every.