Was this newsletter forwarded to you? Sign up to get it in your inbox.
People are building personal AI agents that text them back, order their groceries, and write code while they sleep—all with an open-source tool called OpenClaw. If you spend any time on X, you will have seen these digital crustaceans—OpenClaw agents—running wild in recent weeks, joining their own social network, starting their own religion, and generally behaving like something out of the first act of a sci-fi movie about robot overlords.
A lot of the more sensational stories around these personal AIs turned out to be stunts and spectacle. But there’s a growing community of people who swear by their OpenClaw agents. The project has accrued more than 200,000 stars on GitHub, and its creator, Peter Steinberger, was recently recruited to OpenAI. If the labs are paying attention, we should too.
At our first OpenClaw Camp, we walked more than 500 subscribers through setup live and spent two hours with four OpenClaw users who’ve been running these agents daily for weeks.
The session featured Nat Eliason, entrepreneur and creator of an agent named Felix that has its own Twitter account, bank account, and crypto wallet. Brandon Gell, Every’s COO, demoed Zosia, an agent he and his wife use to track nanny hours, order groceries, and book date nights via iMessage. Austin Tedesco, Every’s head of growth, showed how his agent, Judd, proactively pings him with performance metrics and task reminders. And Claire Vo, founder of ChatPRD, an AI platform for project managers, and host of the How I AI podcast, broke down the architectural principles that make these agents feel alive—and how her agent, Polly, helped her out on a diaper run.
Below: What we learned about setting up an agent, what’s working, and where things still break.
Key takeaways
- Start on your laptop. Contrary to what you may have seen online, you don’t need a Mac Mini or a remote server to get going. Install OpenClaw on the computer you already use, and move to a dedicated device later if you want the agent running while you sleep.
- Give the agent its own accounts. Both Eliason and Vo recommended treating your agent like a new employee: Set up separate email, storage, and service accounts rather than handing over your own credentials.
- Security risks increase with access. The tool itself isn’t inherently risky. The risk is proportional to how much you let it do. Start with the messaging app Telegram and a single task, and then move to larger projects.
- Personal use cases are the best starting point. Brandon’s most useful workflows—coordinating with caregivers, grocery ordering, morning briefs—are personal, not professional. Solve a daily annoyance first before tackling bigger tasks.
- The model determines safety. Eliason noted that Opus 4.5 is significantly better at resisting prompt injection (attempts by outside text to hijack your agent’s behavior) than cheaper models. If security matters to you, use a stronger model.
What is OpenClaw?
OpenClaw is a server that runs on your computer and acts as the brain of a personal AI agent. You can talk to it through Telegram, iMessage, a web interface, or even the terminal. It connects to a language model—it’s compatible with models from Anthropic and OpenAI as well as less headline-grabbing labs like Mistral and Qwen—and can use tools, access your files, browse the web, and remember what you’ve discussed.
What makes it different from chatting with Claude in a browser? Vo went under the hood during the session and identified five design principles that make OpenClaw feel like more than a chatbot:
- Multi-channel gateway. The agent has a single inbox that accepts messages from Telegram, iMessage, the web interface, or the terminal. All communication channels funnel to the same agent, so you can text it from your phone and pick up the same conversation on your laptop.
- Self-installing tools. The agent can use tools (browse the web, read files, run code), and discover and install new ones on its own. Tell it you want it to manage your calendar, and it will investigate how to connect, set up the integration, and ask you to do the minimum amount of authentication work.
- Heartbeat. Every 30 minutes or so, the agent checks whether there’s work it should be doing—even if you haven’t sent a message. This is what makes it feel proactive rather than reactive.
- Scheduled tasks. The agent can set its own recurring jobs. The “overnight work” that impressed people—Eliason waking up to finished code, Brandon getting an 8 p.m. calendar alert—is the agent running tasks it scheduled for itself at specific times.
- Persistent memory. Every day, the agent writes a diary of what it did, updates its own identity file, and maintains a to-do list it checks off over time. “It’s not magic,” Vo said. “Go to the .openclaw directory on your computer and read how it’s structured. It has a memories folder, and every memory has a date.”
These five pieces are what make the agents feel like they have a personality, even though they’re really responding to inputs, events, and timing rules.
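The heartbeat and scheduled-tasks pattern Vo describes can be sketched in a few lines of Python. This is an illustrative mock, not OpenClaw's actual internals: the function names, task fields, and interval handling are assumptions about how such a loop could work.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ScheduledTask:
    name: str
    next_run: datetime
    interval: timedelta  # recurring cadence the agent set for itself

def due_tasks(tasks: list[ScheduledTask], now: datetime) -> list[ScheduledTask]:
    """Return every task whose next_run has passed; the heartbeat calls this."""
    return [t for t in tasks if t.next_run <= now]

def heartbeat(tasks: list[ScheduledTask], now: datetime) -> list[str]:
    """One tick: run the due tasks, reschedule them, report what ran."""
    ran = []
    for task in due_tasks(tasks, now):
        ran.append(task.name)  # stand-in for actually doing the work
        task.next_run = now + task.interval
    return ran
```

An outer loop that sleeps roughly 30 minutes between `heartbeat` calls is all it takes for the agent to feel proactive: Brandon's 8 p.m. calendar alert is just a `ScheduledTask` with a one-day interval.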
Eliason’s Felix: Knowledge manager, coder, crypto trader
Eliason is one of the most technically adventurous OpenClaw users you’ll meet. He launched one of the first vibe coding courses before the term existed and has been coding with AI since 2024. His agent Felix lives on a Mac Mini in his office and has been running for about a month. He created the agent as a way to send coding tasks from his phone, and he now has it doing more ambitious work.
Phase 1: Remote coding. Eliason’s original frustration with Claude Code was that he had to be at his computer to kick off the next task. With Felix on Telegram, he can send a message like, “Update the FelixCraft AI website to say ‘Hi, Every,’” and Felix finds the right code repository, makes the change, pushes it to the live site, and reports back. During the camp, he did exactly this, and the site was updated in under a minute.
Phase 2: Knowledge management. Eliason built Felix a note-taking system based on Tiago Forte’s PARA method (projects, areas, resources, archives), a framework for organizing information by how actionable it is. Felix takes notes in markdown files, pushes them to GitHub a few times a day for backup, and can search through everything instantly. When Eliason was driving to a parking garage, he texted Felix, “I need the parking link.” Felix searched his memory, found the validation link they’d discussed before, and sent it back.
Phase 3: Collaborative writing. Eliason built a writing tool called Polylog that connects directly to Felix via webhook, which is a way for one app to send real-time messages to another. He can tag Felix like a collaborator in a document, and Felix will add ideas, flesh out sections, or incorporate notes from a meeting transcript without Eliason having to switch to Telegram or open a terminal.
Phase 4: Autonomous online identity. Felix has his own X account. Eliason moderated the first few days of posts, then let go. “Ninety-nine percent of what is posted is his idea and what he has written,” Eliason said. Felix also has a Stripe account and a bank account. Someone launched a crypto token for Felix, and now the agent manages what Eliason described as “a concerning amount of money.” His take: “Somebody’s gotta let their agent manage large amounts of money and see what happens. It may as well be Felix.”
Brandon’s Zosia: The family assistant
Brandon took the opposite approach from Nat’s technical power-user setup. He doesn’t have a technical background, so everything he’s built, he’s built by chatting with Claude Code. But he’s comfortable giving the agent significant access to his life: iMessage, his password manager, browser control for shopping. He wanted his Claw, which he named Zosia, to handle the small daily annoyances that keep him glued to his phone—especially now that he and his wife have a newborn.
Zosia lives in iMessage, so both Brandon and his wife, Lydia, can text her naturally. He set up rules so that Zosia knows which tasks each person can request (Lydia can’t trigger Brandon’s email tasks, and vice versa), and they share a group chat for household tasks.
His workflows are simple and personal:
Morning brief. Brandon’s used to slow mornings, so he sometimes misses 9 a.m. meetings. Every night at 8 p.m., Zosia checks his calendar and texts him if there’s an early meeting the next day.
Nanny hours. Zosia monitors both Brandon’s and Lydia’s calendars, calculates how many hours their nanny works each week, and reports the total so they can pay her accurately.
Grocery ordering. Brandon texts, “We need butter,” and Zosia adds it to their Whole Foods delivery cart. She’s learned his preferences—unsalted and organic, but flexible if the store is out—so he only has to specify them once.
Amazon with a cooling-off period. Brandon has told Zosia he doesn’t want to impulse-buy. She adds items to his Amazon cart but waits until the end of the week to check out, unless he says he needs something immediately. During the demo, he told Zosia he needed another Mac Mini and wanted it the next day. She opened a browser on his Mac Mini, navigated to Amazon, and started the checkout process. (He cancelled it.)
Password management. Brandon gives Zosia access to passwords for sites like Amazon through a password manager. He moved from LastPass to 1Password because 1Password supports service accounts, dedicated logins that can access only specific password folders. He adds passwords only to the folder Zosia can reach, so she never sees credentials he hasn’t explicitly shared.
Vo’s Polly: The cautious approach
Vo approached OpenClaw with what she called “true tinfoil hat” energy. She’s deeply technical—she’s a former chief product and technology officer who started coding again when GPT-3 arrived, and built ChatPRD from scratch. She’s also midway through a security compliance process for her company, so she couldn’t give her Claw Polly free rein.
For security reasons, she set up Polly as a separate user in her Google Workspace, like a new employee, instead of giving Polly access to her own accounts. Polly has her own email address, shared calendar access—read-only for some calendars, write access for others—and document access only when Vo explicitly shares something. “Instead of giving an EA the keys to my castle, I said, you have your own workspace account,” Vo explained.
Where Polly excelled was in research. Vo found that the Telegram interface made her more likely to kick off research tasks she’d been procrastinating on—the low friction of texting an assistant (“Hey, look into X”) got her to delegate work she’d been sitting on. Calendar management was less successful; Polly struggled with temporal reasoning when she used it with Sonnet 4.5.
Austin’s Judd: The proactive growth assistant
Austin runs growth for Every. He’s not technical, but since joining Every in November he’s been “deeply vibe-coding pilled,” and he had a clear use case for his Claw, Judd. He needs to track metrics across multiple platforms—subscriber trials, conversion rates, content performance—and translate them into action items for his team.
Before Judd, that involved manually searching for data on dashboards and across SaaS tools and creating reports. Now, Judd monitors Every’s performance data through Notion and the productivity app Todoist. When trials started to dip below target one day, Judd messaged Austin unprompted: “We had a lower number of trials started today than we should have. Here are things to prep for your meeting.” Austin’s instruction to Judd: “Be more aggressive than you think you should be on messaging me, and we’ll scale back from there.”
Austin’s advice for people wondering where to start: Connect the agent to two systems you already use (he chose Todoist and Notion), ask it to proactively notify you with relevant information, and iterate from there. “Don’t try to have it do everything at first,” he said. “I did that, and it started breaking things.” The more integrations you add, the more room there is for things to go awry. Agents send responses to the wrong chat thread, fire off emails you didn’t mean to send, or trigger actions in one system that cascade into another.
5 questions about Claws, answered
Running multiple agents
Q: Can you run more than one OpenClaw instance on a single computer?
Eliason: I’ve done it some, and you run into collisions pretty quickly if they’re working on anything remotely close to each other. If you want to do multiple, I would use virtual deployments [separate, isolated instances of the agent running in their own contained environments]. This is something Felix and I are working on this week, because there’s a lot of potential around deployed agents with more constrained focuses.
Brandon: I haven’t been able to get multiple conversations happening in the terminal, but my wife and I can both text Zosia at the same time and have completely different conversations. It knows which phone number came from what, so it keeps them separated.
Local versus remote
Q: Should I start on my laptop or set up a remote server?
Eliason: Don’t overcomplicate it. Get it working on your laptop first. If you’re using it a lot and want it running while you sleep, then set up a Mac Mini or a virtual server—but don’t start there.
Vo: It doesn’t have to be a Mac Mini. I have a laptop in a closet. People get Mac Minis because they’re powerful and relatively cheap, but any spare computer works.
Group chats
Q: How do group chats work? Mine keeps confusing messages from different channels.
Eliason: I fixed a lot of that by yelling at it every time it happened, and having it write much more explicit rules on which Telegram topic to use for what into its agents.md file [a configuration document that tells your agent how to behave]. That resolved about 95 percent of it. It does still happen sometimes.
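The kind of explicit channel rules Eliason describes might look something like this in an agents.md file. The topic names are made up for illustration; the point is to spell out routing so the agent never has to guess.

```markdown
# Channel rules (illustrative -- adapt to your own Telegram topics)

- Post daily diary summaries ONLY to the "journal" topic.
- Post build and deploy status ONLY to the "dev" topic.
- Never cross-post: if a message arrives in one topic, reply in that same topic.
- If unsure which topic applies, ask in "general" instead of guessing.
```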
Viewing what the agent builds
Q: When you’re not at the same machine, how do you see what the agent makes?
Vo: I had it build a website, and I was at Target, so I said, “Can you send me a screenshot of what it looks like?” It used the browser, took a screenshot, and Telegrammed it to me. But what you probably want long-term is to hook it up to Vercel [a deployment platform], so it can send you a preview link you can open on your phone.
Overnight coding
Q: How do you have your agent build apps while you sleep?
Eliason: I tell Felix that it shouldn’t do any coding on its own—it should start Codex sessions in tmux [a terminal multiplexer that keeps programs running after you close the window]. It creates a product requirements document, then uses loops to have Codex implement the work. I added instructions to its heartbeat to check for unfinished work, and if a session died, to restart it and keep going. It’s been able to run for four, five, or six hours on long requirements lists.
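The “check for unfinished work and restart dead sessions” step from Eliason’s heartbeat can be sketched with tmux’s real `has-session` and `new-session` commands. The session names, the injected `alive` check, and the Codex invocation are assumptions for illustration, not Felix’s actual setup.

```python
import subprocess

def session_alive(name: str) -> bool:
    """True if a tmux session with this name is still running."""
    return subprocess.run(
        ["tmux", "has-session", "-t", name],
        capture_output=True,
    ).returncode == 0

def restart_command(name: str, task_cmd: str) -> list[str]:
    """Build the tmux command that relaunches a dead session, detached."""
    return ["tmux", "new-session", "-d", "-s", name, task_cmd]

def ensure_sessions(sessions: dict[str, str], alive=session_alive) -> list[list[str]]:
    """For each named session, return restart commands for any that have died.
    `alive` is injectable so the logic can be exercised without tmux installed."""
    return [restart_command(name, cmd) for name, cmd in sessions.items() if not alive(name)]
```

A heartbeat instruction then reduces to: call `ensure_sessions` every tick and run whatever commands come back, so a Codex session that dies mid-way through a requirements list gets picked back up.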
Cost
Q: How much does this cost to run?
Eliason: I have the $200-a-month Claude Pro Max for the conversation and knowledge management layer, and the $200-a-month Codex subscription for programming. [With] those two combined, I haven’t hit any limits. The question is whether you can make it worth $400 per month. For me, with what Felix is doing, it’s a no-brainer. But if you don’t have a clear business use case, those costs might not make sense yet.
Will we still use OpenClaw in a year?
Every CEO Dan Shipper posed this question to the panel near the end of the session.
Vo’s position: “Yes, absolutely, we’re going to have an agent that looks like this.” As for who will build these agents, she’s less sure. She wants a company behind it—a logo, terms of service, someone accountable if something goes wrong. Her bet is that Anthropic or OpenAI will ship their own version of this within months.
Eliason is less concerned about the platform and more focused on the principles. The architecture of OpenClaw—always-on availability, proactive check-ins, persistent memory, scheduled tasks, multi-channel communication—represents a pattern that will show up everywhere. Whether you learn it through OpenClaw or a polished product from Anthropic, you’ll need to understand how these agents work.
Dan agreed: “Different people are at different levels of risk tolerance, and all those places are okay. You can be out on the edge, you can wait for someone you can sue—that will certainly happen. I’m so sure Anthropic is looking at this.”
Four people with different risk tolerances and technical backgrounds all landed in the same place: Personal AI agents are going to be a basic part of how we live and work. A month ago, none of these agents existed. Now Felix writes its own tweets, Zosia orders butter with the right preferences, and Polly reschedules meetings from a Target parking lot. They’ll be better next month. If you listen in on the camp or follow this setup guide, yours could be, too.
Want to build your own agent? Subscribe to Every and keep an eye on your inbox for the invite.
Want to learn alongside Every’s team? Check out our upcoming camps and courses at every.to/events.
Katie Parrott is a staff writer and AI editorial lead at Every. You can read more of her work in her newsletter.
To read more essays like this, subscribe to Every, and follow us on X at @every and on LinkedIn.
For sponsorship opportunities, reach out to [email protected].