What I Learned Teaching 100 People To Code with AI

I guess I’m a programming teacher now



Last month I launched a cohort-based course through Every: How to Build an AI Chatbot. The idea was to teach people how to make a GPT-4 chatbot by programming with ChatGPT over the course of a month. 

AI is a hot topic, and my articles on it have done well. I felt like I had something genuinely important to teach. So, I had high expectations for the course. But I was completely unprepared for the level of response we got.

We got a trickle of signups in the first hour. But over the next few days, the trickle kept coming, and coming, and coming. I watched the number of registered students go from 5 to 10 to 25.

By the 80th student, we shut off signups. We gave out 20 scholarships to students who couldn’t otherwise afford a seat, and I started to prepare to teach. My terror that no one would show up was replaced by something else: Will anyone like it? More importantly, will they truly learn something? Will it be valuable?

Over the last month, I’ve had the absolute honor of watching almost 100 people get their feet wet building with AI. There were big WOW moments—like when students got their first chatbot running in the initial lecture, or when they suddenly understood an AI term they’d been hearing about forever, like “embeddings.” There were also some bumps and bruises along the way. It turns out that AI is still unreliable at programming, and that I personally have some ways I can improve as an instructor.

But the incredible thing about courses like this is that if you’re doing it right, you learn just as much as your students do. That was certainly the case for me this month. It was a huge pleasure to help so many people start building with AI—and it was a crash course on being a programming teacher in this new world.

Here are a few things I learned.


AI is a cheat-code-level powerup for creativity

I went into this course believing that AI is a cheat-code-level powerup for creativity, one that can take people who are only marginally technical and turn them into builders who can suddenly bring their ideas to life. All they need is a little push.

I believe that even more fervently now. I watched people in this course go in with only basic technical skills and have immediate WOW moments that stoked their creativity and sense of agency. In the first class, everyone made a basic conversational chatbot.
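In spirit, the whole bot is a loop around a single API call. Here’s a minimal sketch of the kind of script students ended up with, assuming the pre-1.0 `openai` Python package that was current at the time (newer versions of the library expose a different interface):

```python
# A minimal conversational chatbot, assuming the pre-1.0 `openai` package
# and an OPENAI_API_KEY environment variable. Newer library versions differ.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# The running conversation; the system message sets the bot's behavior.
messages = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_input = input("You: ")
    messages.append({"role": "user", "content": user_input})

    # Send the whole conversation each time so the model has context.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    reply = response["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": reply})
    print("Bot:", reply)
```

That’s it: no framework, no server, just a terminal window that talks back.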

At the beginning of class, I demoed the bot and stepped through sample code to explain how it worked. Then I live-coded the bot from scratch using ChatGPT so students could see how to properly prompt it to get the code they needed. Then I set them loose to use ChatGPT themselves to code their own bot. It was risky—the class included people from all different programming skill levels. But it turned into a huge moment for a fair number of them. Here are a few things people said in the chat as they were working through the exercise:

“It worked!!! I'm in shock right now ;)”

“It's working! It feels like cheating—I barely did anything.”

Programming with AI for the first time is like that. It sort of feels like that moment in The Matrix where Morpheus downloads martial arts skills into Neo’s brain. Neo wakes up and suddenly knows kung fu. 

Getting people to that initial feeling of creative agency with AI is great because it is real. They suddenly realize that they can do much more than they thought they could if they use AI to power their workflow. 

But it’s also good for a deeper reason: it helps motivate understanding.

AI makes learning to code way less intimidating and abstract

The problem with programming, in my opinion, is that you have to climb a steep hill in order to do anything interesting with it. People come to learn programming so they can build apps, websites, video games, and other fun creative stuff. But instead of learning how to do those things, they spend their first few months learning about variables, arrays, functions, and loops. 

The initial building blocks of programming seem so abstract and disconnected from students’ actual goals that many drop out quickly. It’s sort of like taking guitar lessons so you can learn to shred like John Mayer, only to find that you need to practice scales for 6 months before you can play a single song.

And this is not just limited to general programming concepts. AI has its own language, too. Large language models, embeddings, vector databases, attention…the list goes on. 

It’s overwhelming! And it selects for people who have a natural aptitude for abstraction or who are persistent and self-motivated enough to get through it.

AI lets people build a Day One project that looks a lot like the app, website, or game they’ve been dreaming about. These projects are deeply connected to their reasons for learning to program in the first place. And once they’ve built one, they have the motivation to go deeper: technical topics like client-server interactions become more appealing to learn because they now connect to goals that matter.

It also makes it a lot easier to understand AI-specific vocabulary like embeddings when you have hands-on experience you can refer to. I think a lot of people came into the course hungry for an opportunity to finally understand AI’s terminology—and I think some of the most effective moments in the course were when one of those concepts clicked.
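To make that concrete: an embedding is just a long list of numbers representing a piece of text, where texts with similar meanings get numerically similar lists. Here’s a sketch in the same pre-1.0 `openai` style (the model name and sample strings are illustrative choices of mine):

```python
# Embed two related sentences and measure how similar they are.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

result = openai.Embedding.create(
    model="text-embedding-ada-002",
    input=["How do I reset my password?", "I forgot my login credentials"],
)
vec_a = result["data"][0]["embedding"]
vec_b = result["data"][1]["embedding"]

# Cosine similarity: values near 1.0 mean the texts are about the same thing.
dot = sum(a * b for a, b in zip(vec_a, vec_b))
norm_a = sum(a * a for a in vec_a) ** 0.5
norm_b = sum(b * b for b in vec_b) ** 0.5
print(dot / (norm_a * norm_b))
```

Once you’ve printed one of those vectors yourself, phrases like “store the embeddings in a vector database” stop sounding like incantations.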

All of this has an important impact: it expands the pool of people who can count themselves as “technical.”

There’s an expanding frontier of who counts as “technical”

There are about 132,000 professional programmers in the United States, according to the Bureau of Labor Statistics. About 100,000 students majored in computer science in 2017 alone—so there’s a huge population of people who have taken a few comp sci classes but didn’t end up becoming professional programmers.

These are people who understand some basic concepts but don’t really consider themselves “technical”. Some of them are product managers who work with engineers every day but aren’t in the code themselves. Some of them ended up at banks working on complicated Excel formulas. Some of them dropped programming entirely and ended up doing something totally different.

All of them have dreams, and AI means all of them are 10x more capable of building those dreams today than they were a year ago. They just don’t realize it yet. 

That’s who I was trying to reach with this course. And after teaching it this month, I feel confident that the basic programming knowledge you pick up from a few college courses is enough to get a web app running with ChatGPT in a few hours if you prompt it correctly. It doesn’t bring you to the level of a professional programmer—but it does make you productive enough to be dangerous. And that’s just what’s possible today.

In the future, I think the frontier of people who can use AI to become builders will expand even further. For example, there are many more people who know how to use no-code tools like Notion, Squarespace, or Bubble than there are people who understand underlying programming concepts. Those people will suddenly be able to build interesting things with AI, too.

But it’s not all sunshine and rainbows. There are problems here too.

Even if you’re coding with AI, you still need basic programming knowledge

AI isn’t good enough yet to reliably avoid or fix its own programming mistakes. There are certain areas where it’s quite good and others where it falls flat on its face. 

A programmer with a little bit of experience can learn to fill in the gaps for ChatGPT, but it’s much harder for people with less technical experience to manage this. You can easily get into a frustrating rabbit hole of prompting the model to do something for you, finding that it fails, and prompting it to fix things with little success.

We definitely ran into this during the course, and I think it was partially driven by my own blindness to what counts as “basic.” I’ve been programming for about 20 years at this point, so at the beginning I underestimated how complicated seemingly “basic” tasks would be. For example, I knew that I’d need to explain what embeddings were, or why GPT models tend to hallucinate. But I sometimes neglected to explain fundamental terms like “API key” or “environment variable”—and these understandably confused people without a programming background.
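For the record, here’s the short version I should have led with, as a sketch (the key itself is elided):

```python
# An API key is a secret string that proves to OpenAI's servers that a
# request came from you. An environment variable is just a named value the
# operating system hands to your program. Set one in your shell first:
#
#   export OPENAI_API_KEY="sk-..."
#
import os

import openai

# Read the key from the environment instead of pasting it into the code,
# so the secret never ends up in a shared file or a git repository.
openai.api_key = os.environ["OPENAI_API_KEY"]
```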

Writing that now, I feel embarrassed that I didn’t realize that concepts like that needed to be explained thoroughly. It made the first lecture very up and down. As I mentioned above, some students were having WOW moments getting their bots to work with just a few prompts, and some students ended up feeling left behind because they got stuck on steps of the process that I hadn’t fully explained. 

I was able to dial in what was truly “basic” a bit better over the ensuing lectures, but I kept having moments like this throughout the class. Twenty years of built-up intuition and knowledge makes a significant difference in how much leverage you can get out of these models for programming tasks, and it’s easy to get lost if you don’t have that to fall back on.

What this means is that effective modern programming education weaves together two threads: 1) teaching students to prompt models to do programming tasks for them, and 2) teaching them to understand the code the model is generating.

If you don’t have number two, it’s too easy to get lost. This is especially true if you don’t have access to the latest models.

The haves and have-nots of AI programming

The simple truth of the matter is that GPT-4 is significantly better at programming than GPT-3.5. 

GPT-3.5 is like a caterpillar and GPT-4 is like a butterfly. GPT-4 with web browsing is like you took your cute little butterfly and fed it some growth hormone. 

There are some simple reasons for this. GPT-3.5 is much more likely to write buggy code and to make stuff up; GPT-4 is much better on both counts. However, GPT-4’s knowledge cutoff (it doesn’t know about anything that happened after September 2021) means that if you’re working with programming libraries released after that date, you’re going to have a hard time. For example, GPT-4 doesn’t know that GPT-4 exists. So if you ask it to code a bot that uses the GPT-4 API, it won’t be able to do it out of the box.
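There’s a workaround, though it’s my own suggestion rather than anything built into the model: paste the newer documentation into the prompt yourself, so GPT-4 can write code against an API it has never seen. A sketch, again in the pre-1.0 `openai` style (the doc snippet below is illustrative, not official text):

```python
# Teach the model about a post-cutoff API by putting the docs in the prompt.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Illustrative notes standing in for real, current documentation.
NEW_API_NOTES = """
OpenAI's chat completions endpoint accepts model="gpt-4" and a `messages`
list of {"role": ..., "content": ...} dicts. The reply text is at
response["choices"][0]["message"]["content"].
"""

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "You write Python. Use only the API described here:\n"
            + NEW_API_NOTES,
        },
        {
            "role": "user",
            "content": "Write a command-line chatbot that uses the GPT-4 API.",
        },
    ],
)
print(response["choices"][0]["message"]["content"])
```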

This made the course somewhat hard to teach because the set of tools that were available to each student was different—and the tools they could use significantly augmented (or detracted from) their ability level.

This created a sort of haves-and-have-nots atmosphere in class, where you could be significantly more productive if you were able to pay for access, or if you were one of the lucky few with access to a new alpha feature like web browsing. I tried to ameliorate this by making lessons doable no matter what your access level was, and by giving out GPT-4 API keys to students, but that didn’t totally solve the problem.

Watching this unfold underscored how different the world is for the creative ambitions of builders who have access to the latest models. And it also made me excited for a time when this technology is more generally available. 

In a year, I think everyone will be able to use a GPT-4 quality model that has access to the web and to the code they’re writing. This will eliminate the have and have-not dynamic and also dramatically broaden who is able to build things with AI. 

I think it’s crucially important that that happens.

What the future looks like

I can imagine a skeptic reading this article and saying something like, “Sure you can get people to make toy projects with AI, but it won’t make them a real programmer.” 

This is a familiar sentiment—C nerds used to sniff about how Python programmers aren’t real programmers because they don’t have to do memory management. C nerds were definitely right in certain ways. It is true that learning to do memory management makes you a better programmer. But do today’s Python programmers truly need it? Probably not.

I think the same is true in the case of using AI to help you program. It makes a world of difference to have a deep understanding of underlying programming concepts when you’re prompting the model to help you build things. 

But AI can be an effective tool to help you motivate yourself to get that understanding. And, as models improve, you may not need nearly as much understanding of the fundamental implementation details as you did 20 years ago. 

Here’s the crux of all of this, though:

When I learned to program, I did it with a book. I used Sams Teach Yourself C++ in 21 Days, and let me tell you—that was a frickin’ lie.

I had to sit in my bedroom and manually type instructions from the book into my computer. When something didn’t work, I had to pray that the answer was somewhere in the text. If it wasn’t, I’d have to hunt through janky vBulletin forum posts and hope to pick up the scent of someone who’d solved a similar problem. It was very frustrating, and very time-consuming. It probably took me 6 months to fully understand what was in it.

The dream that my Sams book was selling was that I could learn to code in 3 weeks. That was clearly impossible 20 years ago. It’s still impossible today.

But it is 100% possible to build something awesome with code in 3 weeks today using AI. And that is a gigantic opportunity for anyone with creative ambition.


I’ll be teaching future cohorts of the course starting in the fall. If you’re interested, sign up here.
