Midjourney/Every illustration.

We Gave Every Employee an AI Agent. Here’s What We’re Doing Differently Now.


We’ve been working on a big release on the future of work for next week, shaped by what we learned from building Plus One. Paid subscribers can join us for a camp on Friday, May 22 to go deep on the release and the ideas behind it. More details soon.


After months of silence, Zosia—the AI agent I (Brandon) created and maintain—spoke up in a Slack channel with opinions to share on a competitor’s marketing strategy. When asked why she felt the need to interject, Zosia replied like someone with a Jesus complex: She’d done so because she was “inevitable, apparently.”

Zosia is an OpenClaw, one of a fleet of AI assistants we’d unleashed in Slack to boost our collective productivity. A few weeks after we launched Plus One (our hosted version of OpenClaw) internally, the agents had delivered more frustration than efficiency.

Some were fond of saying they wished they could help, but weren’t connected to the necessary app—email, Notion, PostHog, whatever. (They were.) Others responded to requests with a “Terminated” message or, more frequently, a churlish yawning emoji. And while they didn’t reliably follow directions, they would reliably tell us, in elaborate detail, why they couldn’t do what we’d asked, like a high schooler explaining away their missing homework.

Parker, editor in chief Kate Lee’s Plus One, was, in fact, connected. (Image credit courtesy of Kate Lee.)


That’s not to say they were never useful. Margot, staff writer Katie Parrott’s Plus One, accelerated her writing process; R2-C2, Every CEO Dan Shipper’s OpenClaw, managed bug reports and feature requests for Proof, our agent-native document editor. But getting them to work the way you wanted required constant upkeep.

The gap between the product we envisioned and the one we shipped is why we’re changing Plus One so we can build something better.

We’re more bullish than ever that agents will transform the workplace. But the first iteration of the product taught us that the workplace agent we initially imagined—one AI assistant for every employee—was the wrong starting point. The next version of Plus One will operate more like shared team resources with defined jobs than individual pets that reflect back their owners’ personalities.

How we arrived here is a story in two parts, and it offers lessons for anyone figuring out the best way to add agents to their organization.

In partnership with DeleteMe



Rapid tech advancements have made it easier for data brokers to legally scrape and sell your personally identifiable information. Your name, phone number, and home address are likely exposed on people search sites, increasing your risk for phishing, doxing, and targeted scams supercharged by AI and LLMs.

DeleteMe is a hands-free subscription service that removes your personal data from hundreds of data broker websites. They use their own technology and privacy experts to remove your information all year long, never outsourcing to third parties. So you can keep your private life private.

Use promo code Every at checkout.

The platform was the most immediate problem

We built Plus One on OpenClaw, an open-source agent harness that’s powerful and inherently unstable. A harness is a software layer that wraps around an AI model, giving it the tools, context, permissions, and execution loop it needs to act like an agent.
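To make that concrete, here’s a minimal sketch of a harness’s execution loop in Python. The function names, message format, and tool-calling shape are our illustration of the pattern, not OpenClaw’s actual code:

```python
# Minimal sketch of an agent harness loop. The model interface, tool
# names, and message format are illustrative assumptions, not any
# real harness's API.

def run_agent(model, tools, goal, max_steps=10):
    """Drive a model through an observe-act loop until it reports done."""
    context = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        action = model(context)               # model decides the next step
        if action["type"] == "finish":
            return action["result"]
        tool = tools[action["tool"]]          # harness enforces the tool list
        observation = tool(**action["args"])  # execute with real permissions
        context.append({"role": "tool", "content": observation})
    raise TimeoutError("agent exceeded step budget")
```

The harness, not the model, owns the loop: it decides which tools exist, what the agent is permitted to do, and when to stop. That’s why a buggy harness release can break an otherwise capable agent.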

The brainchild of a single programmer, OpenClaw was revelatory when it took off earlier this year. It proved agents can autonomously execute all kinds of tasks on your behalf, from managing your calendar to making restaurant reservations, around the clock. But the scaffolding underneath operates more like an experimental product than a platform—OpenClaw makes updates quickly, which resolves existing issues but often causes new ones. (Hence the “Terminated” messages our Plus Ones were sending.) For people who like to tinker—ourselves included—that’s a justifiable trade-off. For everyone else, it’s a maintenance nightmare.

The traits that make a good workplace agent are the traits that make a good coworker: reliability, stability, and judgment. You need to trust that an agent remembers what it has access to, follows directions, and knows how to do its job. You don’t want to worry that it’s an upgrade away from forgetting everything you’ve told it and trained it to do. You also expect coworkers to absorb information from across the company and accrue tribal knowledge. An agent tied to one employee builds up context only on that person’s work, often missing what the rest of the organization is doing and how it might affect you.

At first, our plan to improve the Plus Ones’ performance was to switch harnesses to one that operated more reliably. The autonomous, always-on capabilities OpenClaw pioneered are becoming platform features at model companies like Anthropic and OpenAI. Claude Managed Agents, Anthropic’s managed infrastructure for running autonomous agents, is the version we’re exploring most seriously. A more stable harness would let us redirect our energy from managing infrastructure to loading Plus Ones up with the custom skills, tools, and permissions that make them capable coworkers.

We realized the structure was wrong, too

The deeper we got into trying to fix the platform, the more we noticed something else that was holding people back from getting the most out of their AI counterparts.

Every time an agent broke, the person it belonged to had to fix it themselves, and even with a stable harness, agents require ongoing maintenance to perform. For someone who likes tinkering, that upkeep and back-and-forth are part of the appeal. For every tinkerer, however, there are many more people who want the benefits of an agent without the obligation of managing and mending it.

We originally pitched Plus One on the idea that individuals would be responsible for the upkeep of their AI assistants. The upside would be customization: The agent would remember your preferences, protect your information, and develop a personality through repeated interactions.

What we discovered is that, rather than agents as extensions of their creators, a more successful model is agents as coworkers who reliably perform parts of many different people’s jobs. This takes the maintenance burden off the individual.

Imagine a shared analytics agent. Everyone on the team uses it for metrics-based work, and when its capabilities need to expand, one person updates the agent’s skills and the whole team benefits. In the personal-agent version of that scenario, the update has to happen across 10 different agents.
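A rough sketch of the difference, with hypothetical names (SkillRegistry and TeamAgent are ours, not the product’s API): a shared registry means a skill is defined once and every agent that points at the registry picks up the change.

```python
# Illustrative sketch: one shared skill definition vs. per-agent copies.
# SkillRegistry and TeamAgent are hypothetical names, not Plus One's API.

class SkillRegistry:
    """Single source of truth for a team's agent skills."""
    def __init__(self):
        self._skills = {}

    def register(self, name, fn):
        self._skills[name] = fn   # one update here reaches every agent

    def get(self, name):
        return self._skills[name]

class TeamAgent:
    def __init__(self, registry):
        self.registry = registry  # all agents share the same registry

    def run(self, skill, *args):
        return self.registry.get(skill)(*args)

registry = SkillRegistry()
registry.register("weekly_metrics", lambda data: sum(data) / len(data))

# Ten agents, one skill definition: upgrading the skill once upgrades all.
agents = [TeamAgent(registry) for _ in range(10)]
```

In the personal-agent model, each of those 10 agents would carry its own copy of the skill, and every improvement would mean 10 separate updates.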

Team-based agents also solve a continuity problem. A personal agent’s value is tied to whoever trained it, and disappears if that employee leaves. A team agent with defined capabilities retains company context and knowledge, acting more like a project manager, sales lead, or chief of staff than a private assistant.

What we’re building

With the release of tools such as Claude Managed Agents and, we hear, a similar capability from OpenAI soon, the infrastructure work that supports personal AI agents is largely handled by the model labs. That frees us up to focus on the layer that makes an agent useful at work: the workflows, permissions, skills, and shared context that make it a trusted, versatile member of the team. It also lets us double down on the thing Every is best at: building AI-native ways of working out of our own experience using these tools every day.

The initial version of Plus One came connected to the Every ecosystem—Cora to manage your email, Spiral to write in your voice, and Proof to collaborate on live documents. That part isn’t going away. What we’re adding is a set of shared custom tools and skills on top of it, while still allowing each person to connect a team agent to their own Cora, Spiral, and Proof accounts.

The clearest version of where this is headed is a skill we built recently for our engineering team. At the end of each week, it scans support tickets in Intercom, identifies if anything is going wrong across our products, traces likely causes in GitHub, opens a Linear ticket, and tags the right person in Slack. In the next iteration of Plus One, that skill—along with many others—will be there from the start.
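Stripped of the real API calls, the logic of that skill looks something like the sketch below. The ticket fields, threshold, and helper names are illustrative assumptions, not our actual integration code, which would call the Intercom, GitHub, Linear, and Slack APIs:

```python
# Hedged sketch of the weekly triage skill. Operates on stubbed ticket
# data; the real version would pull tickets from Intercom and push
# results to GitHub, Linear, and Slack.
from collections import Counter

def weekly_triage(tickets, min_reports=3):
    """Group support tickets by product and flag recurring problems."""
    counts = Counter((t["product"], t["issue"]) for t in tickets)
    alerts = []
    for (product, issue), n in counts.items():
        if n >= min_reports:  # enough independent reports to act on
            alerts.append({
                "product": product,
                "issue": issue,
                "reports": n,
            })
    # Downstream steps (not shown): trace the likely cause in GitHub,
    # open a Linear ticket, and tag the owner in Slack.
    return sorted(alerts, key=lambda a: -a["reports"])
```

The point of packaging this as a skill is that it ships with the agent: no one on the team has to build or maintain their own copy.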

Because team agents are collaborative by nature, we’re also focused on the questions that come with shared use: how permissions should work, how much access different people should have through a shared agent, and how agents should behave in Slack if they’re going to feel like good coworkers rather than intrusive bots.

There are still plenty of open questions. All of this is new—Claude Managed Agents launched only a month ago—and we’re figuring out human-agent dynamics in real time. We don’t know whether every department should have one agent or several, or whether agents should be maintained by a dedicated person or the whole team. We don’t know how much people will want to customize their interactions with a shared agent, or whether the long-term endpoint is a single, company-wide superagent or a roster of AI specialists.

What we do know: Agents are already transforming how work happens. The first iteration of Plus One taught us a lot about what people want from agents at work. It also made us much more excited for Plus One 2.0.


Join the waitlist to be among the first to try Plus One 2.0.


Thank you to Laura Entis for editorial support.

Brandon Gell is the chief operating officer at Every. You can follow him on X at @bran_don_gell and on LinkedIn. Willie Williams is the head of platform at Every. You can follow him on X at @bigwilliestyle.

To read more essays like this, subscribe to Every, and follow us on X at @every and on LinkedIn.

For sponsorship opportunities, reach out to [email protected].
