What the Media Is Getting Wrong About AI

The future must be built on truth

AI may be the most significant invention since electricity. It is a marvelous, spectacular evolution in our relationship with computers.

It is also terrifying. 

Some people, maybe many people, will lose their jobs if AI continues to progress at its current rate. This disruption is scary. When you mix AI fears with the crash in public opinion about the tech sector, it is not surprising that misinformation is rampant. What is surprising is how much of this incorrect analysis comes from mainstream writers with large followings. 

I strongly believe we should be rigorously examining AI. Technology should be critiqued, written about, and, most importantly, utilized. It can only make our world better when examined with truth, not hyperbole. AI is important enough that society should be considering the implications of its deployment. But progress can’t be made when everything you believe about a sector is incorrect. 

The piece that most clearly symbolizes this trend was published by Ed Zitron with the wonderfully cranky title of “Subprime Intelligence.” Zitron is the founder and CEO of tech PR company EZPR. He’s also a firebrand of a writer who has a particular talent for the incensed outrage that does so well on X, where his post went viral. His new podcast, on which he analyzes tech news, hit number one in the tech category on Apple Podcasts. I genuinely think he has the gift of the gab and congratulate him on his success. However, his argument against AI is of an all-too-common variety among mainstream tech critics: It is weak, ungenerous, and riddled with mistakes.

Normally, I wouldn’t focus on a single person for a rebuttal. But his reasoning is emblematic of popular writing on AI today, which makes it a useful stand-in: by engaging with a real argument instead of a straw man, I hope to push back against current coverage more broadly and move our discourse in a more accurate direction.

Zitron’s argument can be summarized as follows:

  1. Artificial intelligence is limited by hallucinations (i.e., the model making stuff up). These problems can never be fixed.
  2. Generative AI’s creative products are unusable because what it generates isn’t perfect.
  3. Companies are not actually using AI. They’re just going along with the fad.
  4. AI companies have lower gross margins than software companies and are therefore bad businesses.
  5. There are no essential use cases or killer apps for AI.

Unfortunately, each of his points is incorrect.

Hallucinated problems

“Despite what fantasists may tell you, [hallucinations] are not ‘kinks’ to work out of artificial intelligence models—these are the hard limits, the restraints that come when you try to mimic knowledge with mathematics. You cannot ‘fix’ hallucinations (the times when a model authoritatively tells you something that isn't true, or creates a picture of something that isn't right), because these models are predicting things based off of tags in a dataset, which it might be able to do well but can never do so flawlessly or reliably.”

It is remarkable to claim that hallucinations present hard limits to AI’s possibilities, or that they can never be fixed. Four days before Zitron published his piece, Google released its new model, Gemini 1.5, which makes huge strides toward fixing them. Gemini 1.5 can take a context window of 1 million tokens and recall text data with 99 percent accuracy. That is like being able to recall any sentence from the entire Harry Potter series with a photographic memory. Google said that it had been able to test Gemini all the way up to 10 million tokens. OpenAI’s most powerful model currently tops out at 128,000 tokens. With Gemini 1.5, you can drop your entire dataset, documentation, script, or codebase into the AI, and it will be able to summarize, improve, or fix it with near-perfect accuracy.
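
To make that concrete, here is a minimal sketch of long-context prompting, assuming Google’s google-generativeai Python SDK and the gemini-1.5-pro-latest model name; the file name, key, and question are hypothetical placeholders, not a tested recipe.

    # Minimal long-context sketch, assuming the google-generativeai SDK.
    # The file name, API key, and question are hypothetical placeholders.
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")
    model = genai.GenerativeModel("gemini-1.5-pro-latest")

    # Load an entire corpus (a codebase dump, documentation, a book) as one string.
    with open("entire_codebase.txt") as f:
        corpus = f.read()

    print(model.count_tokens(corpus))  # long-context models accept on the order of 1M tokens

    # The whole corpus rides along in the prompt; no fine-tuning involved.
    response = model.generate_content([corpus, "Summarize this codebase and flag likely bugs."])
    print(response.text)

The point is architectural: the model doesn’t need to have memorized your data, because you hand it the data at question time.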

Technically, this model does not completely solve hallucinations for LLMs. Longer context windows won’t stop these models from hallucinating when they don't have access to datasets. But it does allow an LLM to recall, with superhuman accuracy, useful data and insights—a performance that contradicts Zitron’s claim that LLMs can never “predict things…flawlessly or reliably.” Additionally, Google isn’t the only company that has achieved this milestone. The Information reports that a startup called Magic has an LLM that can handle 3.5 million words of text.

A good tech critic would be aware that longer context windows have already solved many hallucination problems, and that techniques like retrieval-augmented generation (RAG) will keep driving error rates down until they beat human levels of accuracy.
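
To show the shape of the technique, here is a toy sketch of RAG’s retrieval step in Python. The embed() function below is a crude stand-in I wrote so the example runs on its own; a real pipeline would use a learned embedding model and a vector store, but the idea is identical: fetch the most relevant documents, then hand them to the model alongside the question.

    # Toy sketch of RAG's retrieval step. embed() is a deliberately crude
    # stand-in for a real embedding model, used only to keep this self-contained.
    import math
    from collections import Counter

    def embed(text):
        # Bag-of-words "embedding"; production systems use learned models.
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        norm_a = math.sqrt(sum(v * v for v in a.values()))
        norm_b = math.sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    def retrieve(query, docs, k=2):
        # Rank documents by similarity to the query and keep the top k.
        q = embed(query)
        return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

    docs = [
        "Gemini 1.5 supports context windows of up to 1 million tokens.",
        "Notion AI retrieves a user's notes and calendar entries on demand.",
        "Nvidia's data center revenue has grown rapidly.",
    ]
    question = "How long is Gemini 1.5's context window?"
    context = "\n".join(retrieve(question, docs))

    # The retrieved passages ground the model's answer in real data
    # instead of its parametric memory.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    print(prompt)

Grounding answers in retrieved text turns a hallucination into a retrieval bug: something you can measure, debug, and drive down over time.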

A good tech critic would realize that the problem isn’t the hallucinations; it is how and when LLMs gain access to data, and who controls that data pipeline. Understanding how ChatGPT gets current information is a more accurate and long-term view of this technology than worrying about hallucinations. 

Even if you assume that Zitron missed Google’s Gemini launch, his assertion is nonsensical. We don’t need these systems to be flawless; they just need to make fewer mistakes than humans do. RAG is already used in products like Notion AI, which has over 4 million users. You drop your data, calendar, and notes into Notion, and the AI can retrieve what you’re looking for. As context windows get longer, models get smarter, and techniques get refined, it is reasonable to assume that hallucinations will become a problem we learn to work around (just like human error). We’re already making large strides, and ChatGPT only came out a year and a half ago.

No, the product doesn’t have to be perfect

A theme of Zitron’s argument is that generative AI’s output isn’t good enough. He writes: “One cannot make a watchable movie, TV show, or even commercial out of outputs that aren't consistent from clip to clip, as even the smallest errors are outright repulsive to viewers.”

Again, this is incorrect and misses the point. The point of generative AI is not to make clips from scratch that are immediately watchable. Zitron trips up because he believes that the sci-fi version of AI that’s been marketed to the public (beautiful movies, on demand, that anyone can make) is what founders are aiming for. Rather, we are shooting for the moon so that if we fall, we land on a cloud: immense gains in productivity.

The goal of these tools is to help creators make better things faster. And they’ve already done that! All of our graphics at Every are made by one mega-talented guy in Milan who simultaneously handles our ads, partnerships, brands, podcast editing, and course operations while also going viral on X. All of this was previously beyond the scope of any one person, but he has taste and vision, and uses generative AI to do more, faster. These tools are also used to make people at top-tier creative shops, like The Late Show with Stephen Colbert, Vox, and Pentagram, more productive.

Zitron’s critique about perfection is particularly wrong because perfection has never been the main requirement of art. Popular and heralded movies aren’t immune to errors, like a visible cameraman in Harry Potter and the Chamber of Secrets and a plastic baby in (Best Picture-nominated) American Sniper. Humans make mistakes all the time! We tolerate them because the end product is good enough that we don’t care.

A good critique would consider AI tooling in the same way. If we do get to the point where AI can create entire movies, errors will not be “outright repulsive”; they’ll be a normal fact of creative work. And expanding access to creativity is always a good thing. The camera, the printing press, and other creative tools have only made the world better. A good critic would know that we shouldn’t worry about more people being able to make images or movies—it is the increase in competition for distribution that matters. More people making more entertainment products means that there will be less profit to go around, as I argued in September of 2022, when DALL-E was released.

A good critic would know that “flaws” end up creating a new language for a new form of art. Right now, AI video is capable of creating montages, a form that is becoming recognizable and distinct in its own right, in the same way YouTube videos are distinct from Hollywood movies. Getting upset about AI making videos easier to create is like getting upset about typewriters making new book formats possible.

In practice, the internet already has infinite content. AI just makes it possible to create higher-quality products. The concern should be focused on attention aggregation platforms that control which AI content gets shown (like Meta, X, or Google).

Companies are actually using AI

Next up, Zitron questions whether anyone is making money from generative AI. 

“A McKinsey report from August 2023 says that 55% of respondents' organizations have adopted AI, yet only 23% of said respondents said that more than 5% of their Earnings Before Interest (EBIT) was attributable to their use of AI—a similar number to their 2022 report, one which was published before generative AI was widely available. In plain English, this means that while generative AI is being shoved into plenty of places, it doesn't seem to be generating organizations money.”

Ah, finally, data! I looked up the study he mentioned. Literally the next sentence after the data point he pulled was this: 

“Organizations continue to see returns in the business areas in which they are using AI, and they plan to increase investment in the years ahead. We see a majority of respondents reporting AI-related revenue increases within each business function using AI. And looking ahead, more than two-thirds expect their organizations to increase their AI investment over the next three years.”

This is so bad-faith that I can’t help but feel that Zitron cherry-picked data points hoping no one would read his sources. The same report he cites as his data-driven coup de grâce turns out to show that survey respondents 1) plan to spend more money on AI (implying they are seeing a positive return) and 2) report that a majority of firms see AI-related revenue increases in every single part of the business using the tech. In contrast to Zitron’s claim that “it doesn’t seem to be generating organizations money,” the same study says that most companies are making money!

But wait, Zitron sees a deeper conspiracy afoot. 

“While Microsoft, Amazon, and Google have technically ‘invested’ in these [AI] companies, they've really created guaranteed revenue streams, investing money to create customers that are effectively obliged to spend their investment dollars on their own services. As the use of artificial intelligence grows, so do these revenue streams, forcing almost every single dollar spent on AI into the hands of a few trillion-dollar tech firms.”

One popular conspiracy theory is that cloud providers like Microsoft don’t have real AI revenue because foundation model companies like OpenAI raised money from them. The cloud providers didn’t invest cash; instead, they gave the AI companies compute credits so that the latter could build their products. Microsoft “invests” and gets to immediately recognize that investment as revenue. This is correct! However, for this criticism to have merit, OpenAI would need to be Microsoft’s only customer.

This is, again, wrong. In its latest earnings call, Microsoft CEO Satya Nadella told analysts that the Azure AI division had 53,000 customers. So it has OpenAI and 52,999 other companies. Plus, the Azure division does over $25 billion in revenue a quarter. The $10 billion OpenAI has raised in total is a pittance in comparison. And keep in mind that that $10 billion is spread out over the lifetime of the deal, while Microsoft is doing roughly $6 billion a year in Azure AI revenue.
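
Back-of-envelope math on the figures above (rounded, and my framing rather than Microsoft’s disclosures) makes the mismatch plain:

    # Rough comparison using the figures cited above (rounded).
    azure_quarterly_revenue = 25e9   # "over $25 billion in revenue a quarter"
    azure_ai_annual_revenue = 6e9    # "roughly $6 billion a year in Azure AI revenue"
    openai_lifetime_raise = 10e9     # "$10 billion ... spread out over the lifetime of the deal"

    # Even if every OpenAI credit were spent in a single year, it would be
    # a tenth of Azure's annual revenue.
    print(openai_lifetime_raise / (azure_quarterly_revenue * 4))  # 0.1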

Furthermore, AI clouds do not spring ex nihilo. The GPUs powering them cost cold, hard cash. Even if you ignored all previous evidence, Nvidia’s revenue is up 265 percent year over year, to $22.1 billion. While the credits Microsoft gave OpenAI may be more accounting trickery than actual dollars, the cash it’s paying for the GPUs to power those AI workloads is not. Companies don’t spend $22 billion in cash to purchase a few billion in fake revenue. 

A good critic would know all this and realize that what matters isn’t the investment dollars; it is IP ownership and competitive dynamics. OpenAI is simultaneously deeply partnered with Microsoft and building products (like search) that compete directly with it. Who owns the IP? And to return to the hallucination question: where does my data go when I use ChatGPT? Who owns it? That is what matters, not Zitron’s misunderstanding of income statements.

OK, well, people are buying AI, but AI companies are bad businesses

Here is Zitron again: 

“While it's hard to tell precisely how much it’s losing, The Information reported in mid-2023 that OpenAI's losses ‘doubled’ in 2022 to $540 million as it developed ChatGPT, at a time when it wasn’t quite so demanding of cloud computing resources. Reports suggest that artificial intelligence companies have worse margins than most software startups due to the vast cost of building and maintaining their models, with gross margins in the 50-55% range—meaning the money that it actually makes after incurring direct costs like power and cloud compute. This figure is way below the 75-90% that modern software companies have. In practical terms, this means that the raw infrastructure firms—the companies that allow startups to integrate AI in the first place—are not particularly healthy businesses, and they're taking home far less of their money as actual revenue.”

Yes, a SaaS business with 50 percent gross margins would be bad. Also, if my grandma had wheels, she would’ve been a bike. AI companies are not SaaS companies! The business model is entirely different. OpenAI and its ilk charge on a usage basis, so of course the margin profile is not the same as that of a SaaS company. Zitron completely misses the competitive set. A better comparison would be other software infrastructure companies that charge on a usage basis, like Snowflake (roughly 67 percent gross margin) or Twilio (49 percent). 
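
For anyone who wants the arithmetic behind those percentages, here is an illustrative sketch; the revenue and cost numbers are made up to match the ranges cited, not any company’s actual financials:

    # Gross margin = (revenue - direct costs) / revenue. Numbers are illustrative.
    def gross_margin(revenue, cost_of_revenue):
        return (revenue - cost_of_revenue) / revenue

    # Usage-billed AI infrastructure pays for GPU time on every request;
    # seat-billed SaaS serves each marginal user almost for free.
    print(f"AI infra: {gross_margin(100.0, 45.0):.0%}")  # 55%, top of the reported AI range
    print(f"SaaS:     {gross_margin(100.0, 20.0):.0%}")  # 80%, typical modern software

The margin gap isn’t mismanagement; it is the cost structure of selling compute-intensive usage rather than seats.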

A good tech critic would know that almost all startups lose a lot of money in the beginning. Yes, many startups fail, but many also succeed! Since OpenAI is one of the fastest-growing companies of all time in terms of both revenue and users, I think the losses are, relatively, not that big of a deal. There is a lot we don’t know about gross margins because these companies aren’t public (and much will depend on how the channel partnerships with Microsoft’s sales force work, as I mentioned earlier), but give me the problem of $2 billion in annualized revenue any day.

Money doesn’t grow on trees

The reason I wrote this piece was this passage:

“As it stands, generative AI (and AI in general) may have some use. Yet even with thousands of headlines, billions of dollars of investment, and trillions of tokens run through various large language models, there are no essential artificial intelligence use cases, and no killer apps outside of non-generative assistants like Alexa that are now having generative AI forced into them for no apparent reason. I consider myself relatively tuned into the tech ecosystem, and I read every single tech publication regularly, yet I'm struggling to point to anything that generative AI has done other than reignite the flames of venture capital. There are cool little app integrations, interesting things like live translation in Samsung devices, but these are features, not applications. And if there are true industry-changing possibilities waiting for us on the other side, I am yet to hear them outside of the fan fiction of Silicon Valley hucksters.”

I called a bubble on AI pricing last February, so hopefully I do not get lumped into Zitron’s “Silicon Valley hucksters” group. But there are many successful generative AI companies and products used by millions.

It would be reasonable to critique how these companies are utilizing AI. However, calling them “fan fiction” is inaccurate. In contrast to Zitron’s claim that there are “no killer apps,” there are dozens of startups scaling to millions in revenue and users.

There are many legitimate aspects of AI to analyze. We should be examining the impact it will have on jobs. We should be concerned about the concentration of knowledge and power these systems create. However, we should not be worried about whether or not the technology is real.


Evan Armstrong is the lead writer for Every, where he writes the Napkin Math column. You can follow him on X at @itsurboyevan and on LinkedIn, and Every on X at @every and on LinkedIn.
