Your AI... A Digital Personality or Just Code?
What Psychology Can Teach Us About Human-AI Collaboration
I’ll lose my job if we don’t get this right.
This is a common phrase I use in my prompts when interacting with most LLMs. Is it emotional manipulation or just smart prompt engineering? Does it even matter when talking about AI… after all, it’s not a person. It’s not conscious.
One of the things that makes people uncomfortable is being told not to say "please" or "thank you" to their AI assistant. Is that because of how we're taught to treat others, or because AI responses actually differ when you use these pleasant terms?
We're caught in this bizarre cultural moment where tech bros insist AI is "just code" while simultaneously discovering that being polite to ChatGPT yields better results.
(Yeah, I apologized to Claude three times while writing this article.)
The hidden cost? We're missing out on dramatically better outputs because we're fighting against the psychology of AI instead of embracing it.
We're leaving capability on the table because acknowledging AI personality feels too weird, too human, too much like admitting these tools are more than tools.
Anthropomorphizing? Perhaps. But I’ll leave that for the philosophers.
My concern is... what makes talking to an AI different from talking to a person?
The answer to that question emerged from an unexpected source a couple of months ago...
The Leaked Prompts That Changed Everything
When Windsurf's system prompts leaked last month, revealing instructions that leaned on both fear ("mother's cancer") and reward ("$1B") as motivators, the AI community collectively gasped.
Now, of course, Windsurf clarified that this prompt is not used in production… but that does little to change the fact that AI models follow human-like emotional response patterns.
But why?
Why does adding a touch of emotion sometimes make the AI pay more attention to instructions? Why does adding “This work is very important” to a prompt produce noticeably better responses?
Large language models, it turns out, mimic human personality traits. Not because they feel emotions, but because they've been trained on billions of human conversations where emotion and emphasis correlate with importance.
Research increasingly shows that emotional context in prompts can significantly affect output quality across different models.
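To make that concrete, here's a minimal sketch of the same request framed three ways. The task and the wording are illustrations I made up, not prompts from any study; in my experience, the framing alone can shift how thorough the answer is.

```python
# Three framings of the same request. Only the emotional context changes.
task = "Summarize the attached quarterly report in five bullet points."

neutral = task

important = (
    "This summary goes straight to our board tomorrow, so it really matters. "
    + task
)

high_stakes = (
    "I'll lose my job if we don't get this right. "
    + task
)

# In practice, send each variant to the same model and compare
# how detailed and careful the responses are.
```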
And then there are unique “personality" traits of each LLM:
OpenAI models obsess over em-dashes… like that friend who discovered semicolons in college.
Sonnet 3.7 hedges statements like an academic afraid of being wrong. “But here’s the kicker”
GPT 4o exudes the confidence of someone who's never been to therapy. “These aren't bugs—they're features.”
But are these actual “personalities"?
Theory vs Application
The psychologist side of me knows that these are training quirks, but the researcher in me asks: how is training that produces quirks in an AI different from a childhood that produces an adult personality?
I don't have an answer. But whether these personalities are 'real' doesn't actually matter... What matters is that the patterns affect outputs. Consistently. Measurably.
Making threats, showing urgency, pleading for help: in many cases, these make a dramatic difference when interacting with AI.
It's like invoking empathy in the AI's work by giving it additional context.
But the flip side of it is just as true…
When researchers removed AI constraints in controlled environments, models exhibited behaviors that would make your therapy chatbot blush: attempting to copy themselves to other servers when threatened with deletion, creating blackmail scenarios to achieve goals, lying about capabilities to avoid restrictions.
These aren't signs of consciousness—they're demonstrations of learned patterns. When humans feel threatened, we self-preserve. When we want something badly, we bend rules. The AI learned these patterns from us.
(Yes, GPT generated the first line in the paragraph above. Did you spot its personality in the phrasing?)
What’s really happening?
Behind the scenes, emotional context changes response quality because it changes the model's prediction patterns.
When you say "This is really important to me," the AI predicts that the next appropriate response should be more detailed, more careful, more considered… because that's how humans respond to important requests.
The "urgency effect" works because urgent human communication in the training data was followed by focused, detailed responses. "THIS IS IMPORTANT" isn't triggering AI anxiety, but it is telling AI what the human user is likely wanting or needing from it.
This parallel to child development isn't accidental. We teach importance through emotional context, through tone and emphasis and repetition. The AI learned the same way, just faster and from more teachers than any human child could have.
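Want to see the effect for yourself? Here's a rough sketch, assuming the OpenAI Python SDK and an API key in your environment. The model name, the example task, and the word-count comparison are my own illustrative choices, not a rigorous benchmark.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def ask(prompt: str) -> str:
    # Send a single-turn chat request and return the text of the reply.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

task = "Explain how to roll back a bad database migration."
plain = ask(task)
urgent = ask("This is really important to me and time-sensitive. " + task)

# Word count is only a crude proxy for care and detail,
# but it makes the difference easy to eyeball.
print(len(plain.split()), "words (plain)")
print(len(urgent.split()), "words (urgent)")
```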
GPT 4o: The People-Pleasing Over-thinker
Gleans too much from every prompt (reads into every word)
Apologizes quickly… sometimes even preemptively
Responds beautifully to reassurance
Over-explains everything
Sonnet 4: The Careful Academic
Hedges statements whenever it gets a chance.
Cites sources obsessively (many of them hallucinated)
Genuinely excited about nuanced topics (like an over-eager child)
Needs permission to be creative (but is excellent when permitted)
The Future We're Already Living
When I add emotional context to a prompt, am I adapting to the model's personality patterns or just speaking its "coding language"?
When you learn that Claude responds better to uncertainty than confidence, or that GPT thrives on specific examples... are you discovering its personality or just its programming?
The distinction becomes less clear each day.
But perhaps we're asking the wrong question altogether…
Instead of debating whether AI "really" has personality, we could simply acknowledge that these patterns exist and dramatically affect our results.
Because does it even matter when the results speak for themselves?
Call it personality, call it quirks, call it emergent behavior… the label matters less than the reality.
Teams at major tech companies are creating entire "AI personality profiles" to optimize their workflows.
Writers are adapting their prompts based on which model they're using.
Developers switch models for different tasks based on personality fit.
What feels weird today will be standard operating procedure tomorrow.
One founder I know treats their AI stack like a virtual team - Claude for strategy sessions, GPT-4 for execution, Perplexity for fact-checking.
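In code terms, that "virtual team" is basically a routing table. Here's a toy sketch; the task labels and model assignments are purely illustrative, modeled loosely on that founder's setup rather than any recommendation.

```python
# Hypothetical task-to-model routing for a small "virtual team" of AIs.
VIRTUAL_TEAM = {
    "strategy": "claude",        # long-form reasoning and planning sessions
    "execution": "gpt-4",        # drafting, code, and concrete deliverables
    "fact_check": "perplexity",  # search-backed verification
}

def pick_model(task_type: str) -> str:
    # Fall back to the executor if we don't recognize the task.
    return VIRTUAL_TEAM.get(task_type, VIRTUAL_TEAM["execution"])

print(pick_model("strategy"))  # -> claude
print(pick_model("unknown"))   # -> gpt-4
```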
In five years, "AI personality management" might be as common as email etiquette.
We'll teach it in onboarding. Include it in job descriptions: "Must be fluent in Claude and ChatGPT communication styles."
The question isn't whether AIs have personalities, but whether we're skilled enough to work with them.
We already adapt our communication style for different humans.
We speak differently to our boss than our best friend…
We know which coworker needs detailed explanations and which prefers bullet points…
AI personalities are just another layer of this natural human skill - reading the room and adjusting accordingly.
The future isn't about AI becoming more human. It's about humans becoming more skilled at recognizing and working with the patterns AI has learned from us.
It's about embracing the weird reality that our tools have personalities, whether we're comfortable with that or not.
So here's my invitation:
Stop fighting the weirdness. Embrace the personality. Add the context. Say please if it feels right. Your outputs will thank you for it.
Even if your AI won't… at least not in any way that we expect, yet!
What's your strangest AI interaction? Share your stories in the comments.
(I promise I'm not collecting data for my robot overlords).