The Knowledge Economy Is Over. Welcome to the Allocation Economy
In the age of AI, every maker becomes a manager
Time isn’t as linear as you think. It has ripples and folds like smooth silk. It doubles back on itself, and if you know where to look, you can catch the future shimmering in the present.
(This is what people don’t understand about visionaries: They don’t need to predict the future. They learn to snatch it out of the folds of time and wear it around their bodies like a flowing cloak.)
I think I caught a tiny piece of the future recently, and I want to tell you about it.
Last week I wrote about how ChatGPT changed my conception of intelligence and the way I see the world. I’ve started to see ChatGPT as a summarizer of human knowledge, and once I made that connection, I started to see summarizing everywhere: in the code I write (summaries of what’s on StackOverflow), and the emails I send (summaries of meetings I had), and the articles I write (summaries of books I read).
Summarizing used to be a skill I needed to have, and a valuable one at that. Before, it was mostly invisible, bundled into an amorphous set of tasks that I’d called “intelligence”—things that only I and other humans could do. But now that I can use ChatGPT for summarizing, I’ve carved that task out of my skill set and handed it over to AI. My intelligence is now the thing that directs or edits the summarizing, rather than doing it myself.
As Every’s Evan Armstrong argued several months ago, “AI is an abstraction layer over lower-level thinking.” That lower-level thinking is, largely, summarizing.
If I’m using ChatGPT in this way today, there’s a good chance this behavior—handing off summarizing to AI—is going to become widespread in the future. That could have a significant impact on the economy.
This is what I mean by catching the future in the present and the non-linearity of time. If we extrapolate my experience with ChatGPT, we can glean what the next few years of our work lives might look like.
The end of the knowledge economy
We live in a knowledge economy. What you know—and your ability to bring it to bear in any given circumstance—is what creates economic value for you. This economy was primarily driven by the advent of personal computers and the internet, starting in the 1970s and accelerating through today.
But what happens when that very skill—knowing and utilizing the right knowledge at the right time—becomes something that computers can do faster and sometimes just as well as we can?
We’ll go from makers to managers, from doing the work to learning how to allocate resources—choosing what work should be done, deciding whether work is good enough, and editing it when it’s not.
It means a transition from a knowledge economy to an allocation economy. You won’t be judged on how much you know, but instead on how well you can allocate and manage the resources to get work done.
There’s already a class of people who are engaged in this kind of work every day: managers. But managers make up only a small fraction of the U.S. workforce. They need to know things like how to evaluate talent, manage without micromanaging, and estimate how long a project will take. Individual contributors—the people in the rest of the economy, who do the actual work—don’t need those skills today.
But in this new economy, the allocation economy, they will. Even junior employees will be expected to use AI, which will force them into the role of manager—model manager. Instead of managing humans, they’ll be allocating work to AI models and making sure the work gets done well. They’ll need many of the same skills that today’s human managers have (though in slightly modified form).
From maker to manager
Here are a few of the qualities that today’s managers need, and that tomorrow’s individual contributors—model managers—will need in the allocation economy.
A coherent vision
Today's managers need to have a coherent vision of the work they want to accomplish. Managers of humans need to craft a vision that is articulate, specific, concise, and rooted in a clear purpose. Model managers will need that same ability.
The better articulated your vision is, the more likely the model is to carry it out appropriately. The more specific and concise the prompt, the better the work that comes back. Language models might not, themselves, need a clear purpose, but model managers will likely have to identify one for the sake of their own motivation and engagement with the work.
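To make that concrete, here is a minimal sketch of the same task delegated with two levels of specificity. It uses the OpenAI Python SDK; the model name, prompts, and placeholder notes are my own illustrative assumptions, not anything prescribed in this piece.

```python
# A minimal sketch: the same task, delegated with two levels of specificity.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
# The model name and prompt wording are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

MEETING_NOTES = "(paste raw meeting notes here)"  # placeholder input

vague_prompt = f"Summarize this meeting:\n{MEETING_NOTES}"

specific_prompt = (
    "Summarize the meeting notes below as a follow-up email to the team. "
    "Keep it under 150 words, lead with the three key decisions, list action "
    "items with owners and due dates, and use a friendly but direct tone.\n\n"
    f"{MEETING_NOTES}"
)

for label, prompt in [("vague", vague_prompt), ("specific", specific_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---\n{response.choices[0].message.content}\n")
```

The code itself is trivial; the point is that the second prompt encodes a vision (audience, length, structure, tone), and the output will only be as coherent as that vision.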
Articulating a concise, specific, and coherent vision is difficult. It’s a skill acquired over years of work. Much of it comes down to developing a taste for ideas and language. Luckily, that’s another place where language models can help.
A clear sense of taste
The best managers know what they want and how to talk about it. The worst managers are the ones who say, “It’s not right,” but when asked, “Why?” can’t express the problem.
Model managers will face the same issue. The better defined their taste, the better language models will be able to create something coherent for them. Luckily, language models are quite good at helping humans articulate and refine their taste. So it’s a skill that will probably become significantly more widely distributed in the future.
If you have clear taste and a coherent vision, the next thing you need to do is be able to evaluate who (or what) is capable of executing it.
The ability to evaluate talent
Every manager knows that hiring is everything. If employees are doing the work, the quality of the output is going to be a direct reflection of their skills and abilities. Being able to adequately judge employees’ skills and delegate tasks to people who can carry them out is a significant part of what makes a good manager.
Model managers of tomorrow will need to learn the same things. They’ll need to know which AI models to use for which tasks. They’ll need to be able to quickly evaluate new models they’ve never used before to determine whether they’re good enough. And they’ll need to know how to break a complex task up between different models, each suited to its piece of the work, in order to produce a single finished product of the highest quality.
Evaluating models will be a skill in its own right. But there’s reason to believe it will be easier to evaluate models than to evaluate humans, if only because models are easier to test. A model is accessible day or night, it’s usually cheap, it never gets bored or complains, and it returns results instantly. So model managers of tomorrow will have an advantage in learning these skills, because the management skills of today are gatekept by the relative expense of giving someone a team of people to work with.
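As a toy illustration of why models are easier to test than people, here is a minimal sketch of scoring two candidate models against a handful of expected answers. The model names, test cases, and crude substring grading rule are all assumptions made up for this example, not a real benchmark.

```python
# A minimal sketch of trying out candidate models on a small task suite.
# Model names, test cases, and the pass/fail rule are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

TEST_CASES = [
    # (prompt, a substring we expect to appear in a good answer)
    ("What year did Apollo 11 land on the Moon?", "1969"),
    ("Who wrote Pride and Prejudice?", "Austen"),
]

def score(model: str) -> float:
    """Return the fraction of test cases the model passes."""
    passed = 0
    for prompt, expected in TEST_CASES:
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        if expected.lower() in reply.lower():
            passed += 1
    return passed / len(TEST_CASES)

for candidate in ["gpt-4o", "gpt-4o-mini"]:  # candidate models to compare
    print(candidate, score(candidate))
```

The grading rule is deliberately crude; what matters is that the whole loop runs in seconds and costs pennies, which is exactly the kind of trial run you can’t give a prospective human hire.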
Once they’ve assembled the resources they need to get work done, they’ll face the next challenge: making sure the work is good.
Knowing when to get into the details
The best managers know when and how to get into the details. Inexperienced managers make one of two mistakes. Some micromanage tasks to the point that they are doing the work for their employees, which doesn’t scale. Others delegate tasks to such a degree that they aren’t performed well, or are not done in a way that aligns with the organization’s goals.
Good managers know when to get into the details, and when to let their reports take the ball and run. They know which questions to ask, when to check in, and when to let things be. They understand that just because something isn’t done how they would do it doesn’t mean it hasn’t been done well.
These are not problems that individual contributors in the knowledge economy have to deal with. But they are the exact kind of problems that model managers in the allocation economy will face.
Knowing when and how to get into the details is a learnable skill—and luckily, language models will be built to intelligently check in during crucial periods where oversight is needed. So it won’t be completely on model managers to do this.
The big question is: Is all of this a good thing?
Is the allocation economy good for humanity?
A transition from a knowledge economy to an allocation economy is not likely to happen overnight. For a while at least, “model management” is going to look like handing off micro-skills—like summarizing meetings into emails—rather than entire tasks end to end. Even if the capability to replace whole tasks is there, many parts of the economy won’t catch up for a long time, if ever.
I recently got my pants tailored in Cobble Hill, Brooklyn. When I pulled out my credit card to pay, the lady behind the counter pointed at a paper sign taped to the wall: “No credit cards.” I think we’ll see a similar pace of adoption for language models: There will be many places where they could be used to augment or replace human labor but won’t be, for reasons ranging from inertia and regulation to risk and brand.
This, I think, is a good thing. When it comes to change, the dose makes the poison. The economy is big and complex, and I think we’ll have time to adapt to these changes. And the slow handoff of human thinking to machine thinking is not new. Generative AI models are part of a long-running process.
In his 2013 book Average Is Over, economist Tyler Cowen wrote about a stratification in the economy driven by intelligent machines. He argued that a small, elite group of highly skilled workers who are able to work with computers will reap large rewards—and that the rest of the economy may be left behind:
“If you and your skills are a complement to the computer, your wage and labor market prospects are likely to be cheery. If your skills do not complement the computer, you may want to address that mismatch. Ever more people are starting to fall on one side of the divide or the other. That’s why average is over.”
At the time, he wasn’t writing about generative AI models. He was writing about iPhones and the internet. But generative AI models extend the same trend.
People who are better equipped to use language models in their day-to-day lives will be at a significant advantage in the economy. There will be tremendous rewards for knowing how to allocate intelligence.
Today, management is a skill that only a select few know because it is expensive to train managers: You need to give them a team of humans to practice on. But AI is cheap enough that tomorrow, everyone will have the chance to be a manager—and that will significantly increase the creative potential of every human being.
It will be on our society as a whole to make sure that, with the incredible new tools at our disposal, we bring the rest of the economy along for the ride.
Comments
This is a product manager. The skills that product managers excel in – like crafting a vision, evaluating talent, and understanding when to delve into the details – become increasingly valuable.
@hollywoodsign +1 and fellow PM here :)
Interesting analysis Dan. I do agree that even people in junior positions will have to develop manager skills fairly soon on their career path. But I see this as an enhancement of the Knowledge Economy.
Effective managers have a good understanding of the work even if they aren't the ones doing the execution. Like you mentioned, vision and taste are key things a manager brings to the table. And these abilities are the result of knowledge acquired through consumption and experience. Further, I feel you can read a summary of a book or a report using AI, yes. But it is the unrelated anecdotes, the random footnotes, that often spark the connections and insights that move your skills and abilities forward.
AI helps us do focused and directed work much better. But intuition and inspiration can often strike when you read something unrelated but make a connection to what you're working on. I think Steven Johnson has expanded on this in his book, 'Where Good Ideas Come From.'
So far (some) knowledge workers could get by without being decent managers. But the advent of AI will mean that those who have knowledge+manager skills are likely to thrive compared to those who just have knowledge.
Great analysis. We are only at the beginning; the major changes are probably not apparent yet. Think: mobile phones brought text messages, the internet brought the attention economy, and with AI, who knows? This is powerful technology already, more powerful models are just around the corner, and we haven’t really begun deployment yet.
My son taught me part of this last year, when he used ChatGPT to create all the summaries to study from, asked the bot to divide up the topics, and asked it to propose the best strategy to tackle his exams, and he was only 12 at the time. I got furious because I thought, "Gosh, he is not using his brain and learning how to summarize." Then I realized that he was the future and I was the past, and quickly came to appreciate his candor. Now we compete to see who writes the best prompts. Great article.
Thought-provoking article. “Allocation economy” may be a bit of a stretch when you consider that most jobs (e.g., production, manufacturing, service, etc.) won’t allow that much discretion. Back to your point on the bifurcation of humans in the economy: “no average.”
The problem with this is: where do we get the breakthrough thinking and new ideas? Many years ago I read The Structure of Scientific Revolutions by Thomas Kuhn. Highly recommend it. It argues that major scientific breakthroughs come from outside existing structures, because they threaten them, yet slowly become the new status quo. How does this happen with AI when innovative breakthroughs aren't in the establishment? Do AI and managers slow this process or help it? Big questions, and a reason why the development of AI can't be put solely in the hands of computer science people, tech startups, and companies. You referenced at the beginning of the article that our concept of time has really changed. A manager or AI aggregator that summarizes will never get this to you until we figure out how AI processes and delivers genuinely new material. And we don't have that yet.
One more thought... Anyone watch Jimmy Johnson's halftime speech on how to get the Cowboys back in the game? There is a leader, not a manager: emotionally involved and wanting others to be the same. How does that figure into the allocation economy, which by its name seems to be lacking in leadership?
Amazingly creative piece. I learned so much from this. For me, it's a wake-up call to join the ranks of those who are making strides.
Interesting to look at the required skills and training for a future where everyone has AI assistants. Work could potentially change quite a bit. I have personally been focusing on the future of human-AI interaction, as this new work mode that you describe so well will most likely require new interfaces for interacting with AI assistants on a daily basis... for most employees, not only techies. If interested: gotohuman.com
This piece nailed it! We are just starting to scratch the surface; current AI UX is not adapted to this paradigm.
Tools that help people manage their projects end to end will become central. This will give rise to many solopreneurs and small but impactful companies.
My startup Scarlet AI is all about becoming this allocation platform.
“Deciding whether work is good enough”: I think this is key, and to make that decision you need to know at least a bit. I always think the best managers are the ones who have done the thing themselves at some point. They might not know everything, but because they've done it before, they can better direct. How do you think this fits into your theory?
**This is an interesting idea—non-participation.**
Now, there are reasons why you might have gone to that specific tailor. Perhaps the absence of credit card facilities wasn't enough to deter you; maybe it's the people you connect with or the quality of service that draws you in. But when it comes to straightforward product-for-money transactions, would going to a place where you don't need to carry cash make a difference?
Credit card facilities aren't mandatory, and neither is the use of AI in the workplace. However, when adopting them allows for a competitive edge, better service, higher-quality products, faster lead times, and lower costs—who would you choose?
Purchasing a subscription to an AI model is just acquiring knowledge, but **applying** it is where the real impact is felt. I'd argue that when applied effectively, AI can bring significant, non-obvious improvements to all these areas.
Take, for example, the educational field. Training products can be white-labeled—you buy a course off the shelf with all the learning material and assessments included, and indeed many colleges do. But these typically lack depth and investment in learner skill acquisition, because the primary audience isn't the student but the college purchasing it.
Now, if we choose not to buy these resources, we have to create them ourselves. Crafting these takes considerable effort: understanding the mechanics of what's being taught, knowing what's important to convey, keeping the reader engaged, and designing effective, easy-to-navigate assessments. These tasks require a significant amount of time to execute. Enter AI. With AI, moving from an idea to crafting and refining content becomes a much more streamlined process.
At each step, we're able to save time, improve quality, and reduce the cost of our offerings. Sure, others can choose not to participate, but they are at a significant disadvantage compared to a product we created with relative ease. And being an online course, it's not obvious to the customer that AI was utilized.
Perhaps we'll see some companies start to pull ahead by allocating tasks to leverage AI, silently outpacing the competition with no obvious indicators.