
I found myself shaking my head in disbelief and surprisingly emotional throughout the entire two days of Google I/O, the company’s annual developer conference that was held this week. The recording artist Toro y Moi kicked off the event with an AI-powered live music set, but it was his closing words that captured what I'd feel for the next 48 hours: "We're here today to see each other in person, and it's great to remember that people matter."
Since 2015 I’ve known AI could fundamentally change the world, but this was the first time I felt so clearly and viscerally that it’s actually happening—right now. Surprisingly, it might make our world more human, not less.
At every table I sat down at, strangers would look up with wonder in their eyes, desperate to talk about what they'd just seen. In just two days, Google announced more new products and technical advances than most companies do in their entire existence: Video generation tools Veo 3 and Flow took the internet by storm with generative media miles better than anything seen before. Gemini Diffusion, a new type of architecture for language models, teased a not-so-distant future where these models are 20 times faster and can create full apps in the blink of an eye. Unreleased mixed-reality glasses with a built-in screen translated languages and streamed video live.
In all, there were literally 100 announcements. It was overwhelming. And it’s not like Google was saving up for this: The company casually released an updated Gemini 2.5 Pro—a model that was comfortably topping nearly every benchmark (at least until Claude 4 Opus’s release yesterday)—on a random Tuesday a few weeks ago.
There are plenty of great write-ups covering each release in detail. This is not one of them.
Instead, I want to talk about what the tsunami of releases adds up to. Google calls it its “Gemini Era” after its flagship AI models. This is what exponential growth looks like: We're hitting the knee of the curve where compounding gains suddenly become undeniably visible. Google has managed to align its immense resources, people, and vision toward a single clear goal—artificial general intelligence (AGI).
What does Google mean when it says AGI? Demis Hassabis, co-founder and CEO of Google's AI laboratory DeepMind, explained they're building toward AI that deeply understands our physical environment and the context we live in, beyond just symbols like math and language as current models do. AGI of this form would enable a universal AI assistant to “crack some areas of science, maybe one day some fundamental physics,” he said. Crucially, he believes this intelligence can amplify what makes us human, not replace it.
If 90%+ of your team isn’t using AI every day, you’re already behind
You’re not going to get good at AI by nodding through another slide deck. Every Consulting helps teams level up—fast. We’ve trained private equity firms, leading hedge funds, and Fortune 500 companies. Now it’s your turn. Customized training. Hand-held development. A rollout strategy your team will actually use. Let’s make your organization AI-native before your competitors do.
Google's unique advantage
Google has brilliant AI researchers, but that’s not why they're winning...
Become a paid subscriber to Every to unlock the rest of this piece and learn about:
- How Google's Search product provides creative constraints that propel the company forward
- How the company plans to build its AI ecosystem toward a more human future
- Why AI will act as leverage for human experts, rather than replace them
Comments
While I share an interest in and some hope for a techno-utopian future, I am not sure that having Google lead it is particularly encouraging to me (a decade ago when "Don't be evil" was still a thing I might feel differently 😄). I have a hard time swallowing the vision of a remade Google barely a few years out from some of its biggest (and some still ongoing) blunders and "inhumane" choices. Killing well-loved and not terribly expensive (to Google) products is just one example of that. It was a year ago, less from some perspectives, that Google seemed deeply behind in the AI race, and I'm confident that wasn't *just* because of what they hadn't shown yet. They clearly accomplished great things in righting the ship on their AI research since then, but they were definitely caught by surprise by OpenAI and others for a while and were struggling to catch up. It's great that they're doing well now, but it will take some consistency of such execution and output for me to have faith they can sustain it.
The "human" element of the whole show is an interesting angle. I don't want to trivialize the feelings you and perhaps others had (including Hassabis on stage), those feelings are real and there *is* something truly significant going on that is worth having strong feelings about. The potential is tremendous. But the environment of a conference itself, that very social, in-person nature of it, and the excitement of all this wondrous new technology, all of that is a multiplier for such feelings. And a skilled company - and its leaders - can make you feel powerfully positive and hopeful in that moment, yet still be speaking on behalf of and leading a company that ultimately regresses to the Capitalistic baseline: exploit opportunities (humans are at the root of most/all of them), make money, maintain that by whatever legal and sometimes pseudo-legal means possible. How do we profit off of the human, the creative, the personal, and what are the downsides of that? I don't think we can be genuinely hopeful about a future led by any company without understanding the latter part of that question much more fully.
@Oshyan Your point is a good one and I'm really glad you brought it up. It's also not lost on me, which is why I tried to hedge a bit by calling out explicitly that they are an extremely profitable capitalistic organization and some of their shortcomings from the past.
With that said, I think the reality of the situation is that there are probably only a handful of companies that realistically have the chance to create AGI, by their definition of the term. And it was really reassuring to me to see that many of the people involved on the technical side seem to be doing it for the right reasons AND currently have enough leadership backing to do it in a way they believe is right.
I think this is really important because even if they (Google leadership) stop doing that in the future, the technical knowledge of how it was accomplished will reside within these individuals, who are clearly idealistically motivated and increasingly less financially motivated as they've accrued significant wealth and will continue to do so.
I really welcome the open-sourcing of models like Gemma, the willingness to share experiments in their labs and invite experts to the table, and just how cheap and accessible their technology generally is at the moment. This is a pretty stark difference when you compare them to, say, Apple, one of the other few companies with the resources and platforms needed to accomplish this goal, which has, by and large, stuck to its historical precedent of secrecy and closed-offness.
I'll continue to keep a close eye on this and share what I learn! Please keep the feedback coming.
Agreed, and it made me buy more shares
Thank you for this emotional yet enlightening report, Alex!
That moment with Hassabis sharing about his grandmother hit me too. There's something happening here that goes beyond the usual tech demos - Google seems to be figuring out that the human relationship with AI matters as much as the capabilities themselves.
Oshyan's point about the "human element" gets at something I've been thinking about: we're not just watching AI get more powerful, we're watching the first hints of what beneficial AI alignment might actually look like in practice. Hassabis saying AI should "amplify what makes us human, not replace it" isn't corporate speak - it's a design philosophy that could make or break how this all turns out. And he keeps repeating these deeply rooted convictions in all public statements and interviews - making him a role model and beacon among the "AI power holders / elite" imho.
What especially got to me was your description of people at tables needing to process what they'd seen out loud. Because that is exactly my "current mode of coping". That's not just excitement about new tech - that's humans recognizing we're in the middle of something unprecedented and trying to figure out what it means for us. The very idea of my children growing up inside synthetic agency loops, of truth fracturing, of attention dissolving into dreamworlds overwhelms me. It is like witnessing my own "assumptions of what is coming" come true faster than I had hoped deep inside, like seeing the expected storm touch down on my own roof. Even if you’re wearing armor (here: you are "up-to-speed" and have figured out the new modus operandi), your loved ones might not be. And no model prepares you for that realization.
The vulnerability, the artist collaborations, the emphasis on domain experts - it all points toward AI development as relationship-building rather than just capability-scaling. It's making me think about approaches like the Parent-Child Model ( https://bit.ly/PCM_short ) for AI alignment - where instead of trying to control superintelligence through constraints, you raise it through trust, reverence, resonance and developmental scaffolding. Fortunately, Google's approach feels like steps in exactly that direction.
Your closing thought about AI helping us "think bigger" captures what I hope we're moving toward. Not just smarter tools, but technology that actually enhances rather than diminishes what makes us human. Question: Do you think this human-centered approach is Google's competitive differentiator? Are we seeing the early signs of what all AI development will have to become in general - or will this (at least for now humanity-centered-looking) approach be a burden in the race for AI supremacy, slowing them down?