AI Works Better When You Make It Pretend
Why role prompting your LLM yields the response you want
August 12, 2024
In Michael Taylor’s work as a prompt engineer, he’s found that many of the issues he encounters in managing AI tools—such as their inconsistency, tendency to make things up, and lack of creativity—are ones he used to struggle with when he ran a marketing agency. It’s all about giving these tools the right context to do the job, whether they’re AI or human. This piece is the latest in his series Also True for Humans, about managing AIs like you'd manage people. Michael explores role prompting, a technique where you ask an LLM to role-play as a celebrity or expert in a specific field. Role prompting communicates to your AI coworkers what style of response you want—helping them better meet your subjective expectations.
—Kate Lee
During my first job out of college, I played on an office soccer team. One of my teammates was a former soccer pro. He would run circles around everyone else on the field, and whenever he deigned to pass the ball to one of us mortals, we’d inevitably mess up the opportunity. In his frustration, he would always say, “Just be better.” It became a running joke, because, of course, you can’t make your teammates better just by telling them to.

Except now you can—that is, if your teammate is an AI. Telling ChatGPT, “You are an expert at [relevant field],” regularly leads to notable performance gains.
If you’ve spent any time working with AI tools, you’ve likely encountered examples of people asking the AI to role-play as an expert or celebrity in their prompts. After all, who wouldn’t want an AI version of Steve Jobs to help them brainstorm product ideas, or an AI Albert Einstein to help them do their homework?
The fact that an AI gains new functionality associated with a persona simply by being told to get in character feels like magic. It’s reminiscent of a scene in The Matrix where Neo (played by Keanu Reeves) instantly learns to fight by downloading a program into his brain.
Source: The Matrix.

Just like a human actor delivering their lines after getting in character, AI assistants can play their part better when they know what role you want them to play. Most of the scientific papers exploring role prompting focus on improving LLMs’ math scores, but in my experience, role prompting works best when two conditions are met:
- What makes a good answer is subjective.
- There’s a specific style you’re hoping to emulate.
Let’s review the science behind role prompting to understand why it works and look at a few examples of how to apply it. It’s one of the quickest and easiest ways to get the results you want out of AI—so long as you know what role you want it to play.
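In code, a role prompt is often nothing more than a system message prepended to the conversation. Here’s a minimal sketch, assuming the OpenAI Python SDK; the model name, persona, and question below are illustrative placeholders, not prescriptions from this piece.

```python
# Minimal role-prompting sketch using the OpenAI Python SDK.
# The model name, persona, and question are illustrative, not canonical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(question: str, role: str | None = None) -> str:
    """Send a question, optionally prefixed with a role-setting system message."""
    messages = []
    if role:
        # The role prompt: tell the model who it should pretend to be.
        messages.append({"role": "system", "content": f"You are {role}."})
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content


question = "Pitch three product ideas for a note-taking app."
plain = ask(question)
as_jobs = ask(question, role="Steve Jobs, brainstorming bold, opinionated product ideas")
# Compare the two answers side by side to see how much the persona
# changes the style of the response.
```

Putting the persona in the system message rather than the user message keeps it in force for the whole conversation, which matters when you want the style to persist across follow-up questions.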
Helping AI get in character