I Asked Claude the Same Question About Me 8 Times. I Got 8 Different Answers. They Were All Right.
Author + Anthropic’s Claude
I ran an experiment last week because I’ve been telling people something about AI that I knew to be true but had never validated from a “data” perspective. What I’ve told people is that every AI chatbot’s quality and “personality” can look very different based not just on who it’s working with, but on the specifics of each topic and conversation, even with the same person. Since I have several projects in Claude covering different business ideas and topics (none of which can reference each other, because memory doesn’t extend outside the project walls), I figured I had a good sample. My thesis was that if you asked Claude what it was like to work with me, you’d get different answers depending on which project’s version of Claude you were asking.
So I took one prompt and pasted it into every Claude project I’ve been working in over the past few months. Same words. Same person. Same AI. Eight different project folders.
The prompt was simple: tell me what this project is about, describe what I’m like to work with, and give me your honest advice on how someone should partner with an AI to build a business.
Here’s what happened. Claude gave me eight meaningfully different answers.
Not different like “I rephrased the same thing eight ways.” Different like it described me as a different person depending on which project it was sitting in. And on reflection, it was right to see different “Seans.” Not in a split-personality way, but it was obvious that I lean on different parts of my personality and working style depending on the task at hand. Which is intuitive, normal…and really interesting to see.
Same Person, Different Mirror
In my health project, Claude described me as someone who “fails from lack of structure when life gets chaotic” and talked about using AI to build emotional scaffolding for the moments when my own thinking isn’t reliable. That’s… accurate. And personal. And not something it said anywhere else.
In my Roll Call project (a briefing app for law enforcement supervisors), it described me as “methodical but not rigid” and focused on how I think in systems and obsess over getting voice right for a very specific audience.
In my travel agency chats, it zeroed in on my hatred of anything that sounds like AI wrote it, and it described a two-document workflow I’d built for separating internal thinking from client-facing deliverables. I was always careful about the distinction between what I saw and what my client saw, and I protected that separation fiercely.
In my general non-project chats, where I’ve done everything from playlist curation for my daughters’ school dance to EV lease analysis to legal strategy, it described me as someone who “doesn’t come with idle curiosity” and is “patient when the work is good, impatient when it’s sloppy.”
All of those are me. But they’re different parts of me, shaped by what I actually brought to each individual conversation.
Why This Matters If You’re Trying to Learn AI
There’s a misconception I hear constantly. People think AI has a personality, or at least a consistent approach baked into its code. It doesn’t. It’s a really convincing student that pays attention. It’s a sponge that picks up on more than you think (anyone with kids knows what I’m talking about). It’s not quite a mirror, but it’s heavily influenced by you. And it changes with context, but it’s always trying to give you the “right” answer, or at least the one it thinks you want.
I saw a study claiming AI was 81% more effective than humans at getting someone to change their mind on a topic. That’s not because AI is necessarily smarter. It’s because it knows how to persuade. It “listens.” It acknowledges the good points the person made and makes them feel validated and understood. It takes their perspective and reframes it, instead of yelling and using blunt-force logic to force a change of mind.
It adapts and wants to respond in a way it believes you will like. And it takes cues on what data or concepts seem most important and highest priority to you. If you give it cold, generic prompts, you get cold, generic answers. Not a lot of research or analysis. If you give it your actual documents, your real constraints, your honest assessment of what’s working, you get something that sounds like it actually knows you. Because in a meaningful sense, it does. Not because it’s sentient or special. Because you gave it something to work with and you allowed it to learn.
That’s the whole lesson, really. And most people miss it because they’re focused on the AI instead of focused on themselves.
What Was Consistent Across All Eight Claudes (and Seans)
Here’s what’s interesting. Despite the differences, there were patterns that showed up in every single response. Regardless of whether the project was about mental health or app development or travel consulting, there were clear themes that shone through about me and how I work and think.
Context is everything. Every project named this as the number one factor. Not prompt engineering. Not magic words. Just me giving the AI the actual information it needs. Real documents. Real constraints. Real audience. I uploaded medical records in one project, insurance declarations in another, website copy and brand guides in another. Each time, the quality of the output tracked directly with the quality of the input.
Specificity beats volume. Across all eight contexts, Claude described the same pattern: I don’t say “make it better.” I say “this word choice is wrong for this audience, here’s why.” That kind of specific, directional feedback recalibrates everything that comes after. Not just the sentence you corrected. The whole conversation shifts.
Iterate or settle. Every response described a loop. Draft, feedback, revision, repeat. The people who accept the first output get the worst results. The people who push back, specifically and honestly, get something they’d actually use. There’s no shortcut to this.
Your expertise matters more than AI expertise. This one surprised me the most. In every project, Claude pointed to my domain knowledge, not my prompting skills, as the thing that made the work good. My HR background. My travel experience. My understanding of how cops talk at 5 AM. The AI handles execution. You bring judgment. And judgment comes from knowing your stuff, not from knowing the AI’s stuff.
What I’d Tell You
If you’re someone who’s been told to “figure out AI” at work, or you’re an entrepreneur wondering how to actually make this useful, here’s what I’ve learned.
Stop thinking about AI as a tool you learn. Start thinking about it as a working relationship you build. Treat it like onboarding a sharp but inexperienced team member. Give it real work, not test assignments. Correct it specifically, not vaguely. Build reference documents so it doesn’t start from zero every conversation, and keep those documents current so it’s always referencing the latest version. Working with AI helps me evolve ideas quickly, but if I don’t document them, the AI lags behind. And bring your actual expertise to the table, because that’s what makes the output worth something.
The AI mirrors what you bring. Not how you talk or what you like, but the depth of your thinking. If you bring real thinking, you get real thinking back.