
Thank you to everyone who is watching or listening to my podcast, How Do You Use ChatGPT? If you want to see a collection of all of the prompts and responses in one place, Every contributor Rhea Purohit is breaking them down for you to replicate. Let us know in the comments if you find these guides useful. —Dan Shipper
Was this newsletter forwarded to you? Sign up to get it in your inbox.
Movie-making has historically been prohibitively expensive. If you wanted to make a Hollywood movie in 2012, you’d likely need high-end cameras, a sound system with too many buttons, and maybe even a friend who knew Christopher Nolan personally. Even the cheapest films to make are still pricey for the average person: The Blair Witch Project and Paranormal Activity cost about $200,000 each after post-production.
But AI is dramatically altering the cost of filmmaking. And AI tools have become a flywheel, broadening our ability to bring our own ideas to life.
Film and commercial director Dave Clark broke into Hollywood earlier this year with the sci-fi short film Borrowing Time, which he made on his own on his laptop, with AI tools like Midjourney, the text-to-video model Runway, and the generative voice platform ElevenLabs.
Clark talks about how he managed this in a recent episode of How Do You Use ChatGPT?, where he makes a movie live on the show with Dan Shipper. He goes so far as to say that he couldn’t have made the three-minute short without AI—it would have been too expensive, and Hollywood would never have funded it. But after the film went viral on X, major production houses approached him with offers to turn it into a full-length movie. Clark had his proof of concept.
In this interview, Dan and Dave delve into the realm of AI tools that generate images and videos, exploring how budding filmmakers can leverage them to come up with ideas, test these concepts, and perhaps even gain enough traction to secure funding.
I’m not aspiring to break into Hollywood—but I came away from this episode amazed at what AI is capable of, and inspired to use it to put my words into action.
Read on to see how Dan and Dave use AI tools to make a short film featuring Nicolas Cage using a haunted roulette ball to resurrect his dead movie career. They use a custom GPT that Clark has trained on his own creative style. The GPT—called BlasianGPT to represent Clark’s Black and Asian heritage—is a text-to-image generator.
First, we’ll give you Dan and Dave’s prompts, followed by screenshots from BlasianGPT and other AI tools employed to make the movie. My comments are peppered in using italics.
Dan and Dave: Create an image of Nicolas Cage getting sworn in as president. The image should be cinematic and gritty in the style of the David Fincher film Se7en.
All screenshots courtesy of Dave Clark and How Do You Use ChatGPT.
Clark comments that BlasianGPT understood the vibe they were going for, pointing out that the image has “perfume lighting,” a technique used by the director of Se7en. A little trivia for the movie buffs out there: Clark explains that the term “perfume lighting” has its roots in 1980s perfume advertisements, which typically used high-contrast visuals to create a sense of drama.
Dan thinks the image looks like a Nicolas Cage-Harry Potter crossover. Even though BlasianGPT didn’t effectively capture the setting of a presidential swearing-in ceremony, he thinks the picture is interesting. Intrigued, Dan wonders what book Nicolas Cage is holding in his hand.
Dan and Dave: What is the book Nicolas Cage is holding in his hand?
BlasianGPT plays it safe, drawing from the original prompt to answer that the book is either a constitution or the Bible. However, Dan and Dave want to pull on a more exciting thread.
Dan and Dave: Nicolas Cage is holding the Book of the Dead, and he opened it and made a deal with ghosts to resurrect his career. Show me the next scene in aspect ratio 16:9.
Dan and Dave are impressed by the cinematic quality of the image BlasianGPT generates, and are already looking forward to animating it with other AI tools. For the moment, however, they continue generating images for the movie. Clark notes that working with a text-to-image GPT is like having an active collaborator as you tell a story, an advantage it has over Midjourney, which doesn’t have a chatbot interface.
Dan and Dave: After this scene, show how Nicolas Cage has to complete his final mission: The ghosts demand that Nicolas Cage go to the Luxor and steal a cursed relic, which is actually a roulette ball. It is currently in the hands of a top boss who is [Ocean’s Eleven antagonist] Terry Benedict. Show this image in 16:9 aspect ratio.
Dan and Dave appreciate the rich detail in BlasianGPT’s image, like the sculpted pharaoh faces staring down on Nicolas Cage as he paces through the casino halls (even though they aren’t sure if the Luxor Hotel & Casino actually has these decorations!). Clark thinks they should generate one final shot before jumping into Midjourney to create different versions of the same images. He takes a moment to remind BlasianGPT to create all subsequent images in a 16:9 ratio, and they get back to making their movie.
Dan and Dave: Show an image of the cursed roulette ball spinning around the roulette wheel on a table in the crowded Luxor Casino.
Dan is keen to visualize the cursed roulette ball and proposes asking BlasianGPT for a closeup. While agreeing that additional camera angles are a good idea, Clark recommends keeping instructions like this one short when prompting a text-to-image GPT.
Dan and Dave: Show us an extreme closeup angle of the cursed roulette ball.
As they examine the intricate symbols on the roulette ball, Dan and Dave wonder how exactly the ball is cursed. They decide to pick BlasianGPT’s brain about the lore behind it.
Dan and Dave: Tell us in which way the roulette ball is cursed and how it relates to the ghosts.
BlasianGPT doesn’t disappoint! It generates a detailed story tying together Nicolas Cage, the supernatural entities from the first few images, the cursed roulette ball (apparently bearing an ancient Egyptian curse), and the Luxor.
Now that BlasianGPT has generated a story and a few shots for their movie, Clark thinks it’s time to prompt Midjourney and see how its images compare with BlasianGPT’s. Acknowledging that Midjourney requires more detailed instructions, Clark uses the GPT to come up with an effective Midjourney prompt.
Dan and Dave: Give me a detailed prompt to use in Midjourney that will get us an image similar to the closeup of the cursed roulette ball.
Clark pastes this detailed prompt into Midjourney as is, only adding “--ar 16:9” to make sure the tool generates images in that aspect ratio.
Dave and Dan think the images generated by Midjourney are more cinematic than the ones BlasianGPT produced. They upscale the image to make it bigger and clearer, and use the “vary strong” option to create noticeably different variations. Here’s one of the variations Midjourney generated.
These images are interesting, but Dan and Dave decide to stick with the original version.
As a quick recap, Dan and Dave have generated a story and images using BlasianGPT, and made a few more shots in Midjourney. The next part of their movie-making process is animating a few of these images in Runway to create a movie clip. Clark remarks that Runway is especially useful if you’re operating under a time crunch because it lets you go from text to image to video on a single platform.
On Runway, Clark likes using Motion Brush, a tool that allows you to choose a brush and paint over areas of an image you want to bring to life, while also specifying the kind of movement you want it to have. This is the image he decides to start with.
Clark uses four brushes to bring the picture to life—one each for Nicolas Cage, the supernatural entities surrounding him, the white light at the top of the image, and the candles. He also adjusts the parameters to set the direction and type of movement he prefers. This is how the image looks after Clark has used the Motion Brush.
With that squared away, Clark generates five variations of the image so he has more options to choose from. He also uses another parameter—camera motion—to create the effect of “zooming out” on one of the images.
Dan and Dave like the animations created with Runway’s Motion Brush, especially the very first version it generated. Clark mentions that he typically selects the elements he likes best from each of the variations and blends them afterward to create the clip he wants.
After landing on an animation of Nicolas Cage, Dan and Dave turn their attention to animating the roulette ball.
Dan has a question about how to animate the roulette ball, since it doesn’t have an obvious motion element. Clark explains that he would play with camera control and motion control, some of Runway’s more traditional features, to bring the roulette ball to life. He adds a text prompt in the image description box instructing the tool to make the center of the roulette ball glow red. Clark also adjusts the camera controls to add a zoom-in and a camera tilt.
Runway generates incredible images, visualizing the roulette ball as a haunted orb that glows red.
Clark explains that the next step in his movie-making process would be to take the footage into an upscaling tool like Topaz Labs to improve the video quality. According to him, getting great results from AI tools boils down to generating multiple variations and consistently experimenting with the technology.
All this to say that when the initial output from an AI tool—whether it’s an image, the first draft of an essay, or an animated clip—doesn’t meet your expectations, don’t let it discourage you from pursuing the project. Instead, click on the re-generate button, and give it another go.
Rhea Purohit is a contributing writer for Every focused on research-driven storytelling in tech. You can follow her on X at @RheaPurohit1 and on LinkedIn, and Every on X at @every and on LinkedIn.