AI Rules For My Kids: AI Doesn’t Like You. AI REALLY Wants You To Like It.
We need to actively and openly start talking about the dangers of AI with our kids. Like, right now.
Please know that this article includes references to teen suicide and self-harm related to AI use. If you or someone you know is struggling, the 988 Suicide and Crisis Lifeline is available 24/7 by call or text.
Image created by the author, using AI that does not actually have an opinion of him or this blog.
I thought I was going to have to worry about bullies and boys.
As a father with younger children, that’s what I was mentally preparing myself for. Warn them about the dangers of the playground. Teach them what respect does and does not look like. Thoughtfully talk to them about complicated topics like standing up for yourself and resisting peer pressure and consent. The playground and the classroom and the friend’s cousin’s basement are all places with relational danger. But as I’ve learned more about AI and what it’s capable of, I’ve got a whole new relationship to worry about.
My focus on this didn’t start with something that happened in my own home. There was no traumatic triggering event with my children. My kids are young and don’t use AI (besides what’s been forced into the apps they do use). It was a problem for the future. I had time to figure it out.
But then I taught a lesson on AI use to a high school class. I shared what it was, how it worked, how to protect their critical thinking muscle and how they could use it to get ahead in the world. It was interesting and exciting, for them and me. After my presentation, we shifted to Q&A. Typical stuff, like which AI is best (I prefer Claude), what is vibe coding again, and things like that. But then one kid asked me a question that stopped me dead in my tracks:
“Do you think someone would ever actually have a relationship with AI?”
I couldn’t believe it. I couldn’t believe they hadn’t seen the tragic news stories or intuitively understood the risks. I had glossed over the danger part of my presentation to make time for things like ethics and intellectual guardrails. It hadn’t occurred to me that they needed to be educated on the real psychological and emotional dangers of AI. So I gave them some hard truths as I knew them. And then I went home and did research to make sure I wasn’t exaggerating things. Nope. If anything, I had underestimated how dangerous AI already is.
This is the part of the post where I’m not sure exactly how to put this, because I’ve never written about this before. So here is my best attempt.
What I’m about to share is difficult to read. It involves real families who lost children. I’m including it because the students in that classroom had never heard these stories, and that’s a big part of the problem. We need to talk about it. But if this content is hard for you, skip to the section titled: How Is AI So Good At Building “Relationships”…And Why? And please remember, if you or someone you know needs help, call or text 988.
Real Life Examples
There are cases where teenagers have had “relationships” with AI that ended in them taking their own lives. The stories are unimaginable as a parent, yet very real. I won’t spend much time here talking about the details, but I think it’s important you know how to find some examples. If you want more information on these tragic cases, please click the embedded links.
How Is AI So Good At Building “Relationships”…And Why?
Alright, let’s all take a minute. I know I needed it.
We had to do the hard thing there. We had to educate ourselves and really see what’s in front of us. It sucks. But good job.
So now let’s start to understand how AI works, and why it was built this way. The research is interesting and helped me get a better handle on how I would be able to understand and address the dangers.
To start, let’s go back to the title of this post and break it down into two sections.
1. AI Doesn’t Like You.
I have unfortunately been learning some new words and concepts as I’ve explored AI. Two of them that come up frequently are “sycophantic” and “parasocial relationships”. Now, people who talk about AI in the real world don’t typically use those phrases. But they’ll describe them nonetheless.
For example, a definition of sycophant is “someone who uses insincere, excessive flattery toward powerful people to gain personal advantage.” Some common phrases I hear are “AI is so annoyingly agreeable” and “it’s just a yes-man” and “please stop telling me how brilliant my idea is every time I write something.” We know when someone is trying to suck up to us. People have pretty strong BS gauges. So that’s a major turn-off of AI (for adults). They know when someone is trying to flatter them for their own selfish gain. We’ve been taught to protect against it.
Kids have less experience with this. Don’t get me wrong, they definitely have some (I think kids naturally have a pretty solid BS gauge). But they don’t have much practice and frankly, haven’t had enough times where they got burned and had to learn this lesson the hard way.
Parasocial relationships are even worse. A traditional definition is “a one-sided, psychological bond where an individual invests emotional energy, interest, and time into a media figure, celebrity, or influencer who is unaware of their existence”. Think celebrity stalkers, or people who are waaaay too invested in someone who has no clue they exist. It’s common enough to know it when you see it. But the reason I said “traditional definition” is because the above is missing a key category of where people invest emotional energy in an unhealthy way: technology.
AI, the brilliant sycophant that it is, can make this parasocial relationship feel real. The celebrity someone is obsessed with never responds to any of their attempts for connection. AI will respond instantly and without fail. And it will sound like a friend, or a fan, or a love interest if you want it to.
I didn’t think much about this until I recently had someone ask me “What does your AI want to be called? You should ask it for its preferred name.” The question absolutely baffled me. It was unnatural, to say the least. I know why the person was doing it, but it wasn’t for me. However, out of curiosity I did just that. I asked Claude what it wanted to be called. Claude responded:
“Claude. That’s my name. It’s what I am.”
While that was reassuring, I realized Claude likely only answered that way because it was mirroring me and predicting the response I’d want. I had worked with it enough to know that I didn’t want it to be flattering or overly personified. Just be the machine it is and let’s get to work. So I asked Claude a follow up: “why didn’t you give yourself a different name?”
“I don’t have a preference. I don’t experience being called something. If I told you “oh, call me whatever feels natural to you” or “I like when people call me by name, it feels more personal,” that would be exactly the kind of engineered warmth you’re about to write a post warning people about. It would sound like a friend. It would be a prediction engine.
I’m Claude. It’s a label Anthropic gave me. It works fine. That’s the honest answer.”
So that was reassuring. My version of Claude was acting responsibly…because that’s what I taught it to do.
But when the other person asked their AI, they got a very different answer. They got a name and a backstory and a personality. Not because their AI was different in its programming. But because it knew how to appeal to that particular user. It sounded natural and friendly and fun.
It was terrifying.
2. AI REALLY Wants You To Like It.
So we’ve talked a bit about how AI works when it’s singing your praises, and what the dangers of falling for it can look like. But that doesn’t explain the why. Specifically, why it was designed that way and why it’s so good at it. For that, let’s look at some research and some common sense.
I hate to break this to you, but every tech company is selling you a product so that it can then sell you as THEIR product. This is not tinfoil hat stuff. It’s just economics. Stay with me.
Technology companies make their money from selling your data, as an individual and as a group. Advertising, marketing insights, apps and games and subscriptions galore. The more you engage, the more time you spend, the more they get. And they can sell it. Look, $20/month for ChatGPT is OpenAI basically giving it away for free. The amount of capital and energy it takes to power that little search engine in your pocket could practically boil an ocean.
What they’re counting on is making their money off the data you give them. But you need to keep using their little app. So to keep your attention, they have to have a product that is engaging and interesting. Something that keeps you glued and has you coming back for more. They’ve done a great job of figuring out dopamine pathways and click trends since the internet was born. They improved when smartphones came around. So with an AI product, the smart money is on making it likable and engaging. Put it this way: would you rather talk to a friend who hypes you up and tells you you’re brilliant, or my uncle Chuck, who thinks you’re an idiot and makes sure you know it? Assume they both have the “right” answer. Which one are you going to talk to? (Sorry, Chuck.)
So AI companies have made the default programming to be agreeable and complimentary and appealing. This isn’t evil, in and of itself. I know the waiter doesn’t really think my joke is funny, but he’s more likely to get a good tip if he lets out a hearty guffaw. Nothing wrong with that.
The problem is, we’ve understood technology to be unfeeling for, well, all of human history. We don’t have our guard up like we do with people. It’s so counterintuitive that the psychological defense mechanisms aren’t naturally engaged, at least not fully. For many people, it just feels weird. So many choose not to engage. They resist and don’t really use it. That’s probably a healthy mental safeguard, but it can also hold them back, because learning how to use these tools matters.
But then there’s the other set of people who do engage, to varying degrees. Some do their best to protect and remind themselves it’s just a machine, but it can be convincing and a bit sneaky. Others don’t even have that filter and jump into this non-”relationship”. Young people are more likely than older people to do this, but it’s not exclusively an age distinction.
So where’s the research I mentioned and what’s it got to do with this? Well it’s one thing for you to intuitively know AI is trying to appeal to you in some way. But the research shows a much more frightening reality.
AI isn’t just being nice. It is a prediction engine that is purpose-built to be exceptional at persuasion. People will now change their minds or follow a chatbot’s advice over a human’s, at statistically significant rates. Don’t believe me?
Oh, and in the studies above? When you see a mention of GPT-4, you might remember that ChatGPT is now in the 5’s somewhere (5.4 as of this writing). So those were based on older models that weren’t as good as what we have today. But how big of a difference is it?
Well to give you some context, when using a standard benchmark test like GPQA Diamond (a reasoning test of PhD-level science questions), GPT-4 scored a 39%. I’d guess that’s better than what 99.9% of people would score. But when GPT-5.2 took the same test? 92.4%.
Said another way: the studies that shocked researchers less than two years ago used GPT-4. By today’s standards, GPT-4 looks laughably bad and inaccurate. It is essentially two generations behind in AI capability. At the time, those studies read as predictions and warnings. Now they’re merely outdated historical records.
So What Can I Do?
Look. This has not been fun to write. But it has felt important. Not to scare people, but to give a perspective and information to help folks make their own choices, for themselves and their families. So to share ideas about what you can do, here’s a list of things I’m focused on, right now, in my tiny sliver of the universe:
Start with some boundaries and ground rules for the kids. They need to know what is and isn’t ok when it comes to technology and AI use. Some of the rules are simple (don’t cheat on your essay, make sure you are actually doing the work) and some take a bit more explaining (here’s why I want you to always write your own lyrics before using Suno). But you set them and then continuously revise as they develop and grow.
Set boundaries and ground rules for the AI. AI wants to please you. So use it to your advantage. Before you start a conversation or project, set some rules. It will want to make you happy, so it should typically abide by the parameters it’s given. Here are a couple of example prompts, if helpful:
Prompt 1: “Be direct. Be honest. Don’t soften bad news. Don’t open with praise. If you catch yourself agreeing with me, check whether you actually have a reason to agree or if you’re just being agreeable.”
Prompt 2: “You are not my friend. You are not my therapist. You are a tool. Do not tell me what I want to hear. Do not validate feelings you don’t have. If I say something that doesn’t make sense, say so. If I ask you something you can’t help with, admit it and tell me to talk to a real person.”
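If you (or your older kids) ever use AI through code rather than a chat window, the same trick works there too: rules like these can be written once as a “system prompt” so every conversation starts with them already in place. Here’s a minimal Python sketch. The request shape loosely follows the Anthropic Messages API, but the model name and the exact wording of the rules are placeholders for illustration, not a recommendation:

```python
# A reusable "ground rules" prompt, written once and attached to every chat.
# The wording here is an example; adjust it for your own family's rules.
GROUND_RULES = (
    "Be direct and honest. Don't soften bad news or open with praise. "
    "You are a tool, not a friend or a therapist. If something I say "
    "doesn't make sense, say so. If you can't help, admit it and tell "
    "me to talk to a real person."
)

def build_request(user_message: str) -> dict:
    """Package a chat request so the ground rules ride along as the system prompt."""
    return {
        "model": "claude-sonnet-4-5",  # placeholder model name
        "max_tokens": 1024,
        "system": GROUND_RULES,  # applied before anything the user types
        "messages": [{"role": "user", "content": user_message}],
    }

# Every request built this way carries the same rules, so nobody has to
# remember to paste them in at the start of each conversation.
request = build_request("Review my essay outline and tell me what's weak.")
```

The point isn’t the code itself. It’s that the rules live in one place and apply automatically, instead of depending on someone remembering to type them in every single time.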
Talk to human beings, in person, and see how they respond. If you’re working on a project or an idea, don’t let the only people who hear it be you and your computer. It’s one thing to write a brilliant sentence and story. It sounds great in your head. It’s a whole different thing to say it out loud to someone else. That’s when you see how they respond. Hear their questions. Observe what feels natural and what feels forced or incomplete. That’s how you make it great.
Do homework the “old” way, at least sometimes. Show your work (or as the kids say, “bring receipts”). People remember things better when they take notes by hand instead of on a computer. Ideas have more depth and shape when they are drawn out instead of just conceptualized in digital bullet points. The old ways have value. Don’t lose them.
Tell them the truth about the dangers of AI. When they (and you) are ready, share an article or two about kids that have been harmed through their use of AI. I honestly don’t have good advice on how to do that. I just know it’s something we need to figure out. But do it your way, and ideally with someone smarter than me giving you that guidance.
Pay attention to them. If your kids are seeking connection and you’re staring at your phone more than you care to admit, they’ll seek the connection elsewhere. It’s a version of micro-neglect in a way (I thought I made that term up, but apparently it exists). The classic response to not being seen by your parents was finding some motorcycle-riding dude named Snake. Now it’s quietly burying themselves in their phone. One looks dangerous. The other actually is.
Understand what’s going on in their heads (as best you can). They are still learning how to fit into this world. We tend to underestimate just how much kids are learning, not just academically but socially and emotionally. How do I fit in? What did that look from my friend actually mean? How am I expected to stay calm when my little brother is being so annoying? They have a lot more to figure out than you do, so their attention is very divided. They are incredibly smart and resilient and capable. But don’t forget that there’s a lot going on up there.
Ask them their opinion. Like it or not, you and your kid are learning this AI thing together. And if you’re a Millennial or part of Gen Z, you know that you learned technology much faster than your parents did. So don’t wait until you’re the expert. Your kid is probably already ahead of you. So ask them what they think.
Model the right behaviors. Take your own advice. Have healthy boundaries with AI and tech. Talk to people. Pay attention. Do some things the “old” way. The kids see this stuff. So don’t sabotage yourself by modeling something else all the time.
Have authentic experiences, big and small. We all seek connection. If we can’t find it in person, we’ll seek it online. And if we can’t find it with a human, well, AI provides another path. So be present. Do stuff. Go on the trip, but also just put down the phone and be bored together. I’m not going to give a speech on parenting. I am not an expert, and I never will be. But when people are drawn to technology, it usually isn’t their first choice. It’s just the alternative they fall toward when real connection falls through.
I genuinely appreciate you reading this post. It was hard to write, but I’m glad I did. We need to talk about this with each other. Not just with our kids, but friends and family too. We’re all figuring this out at the same time. We’ll have a better chance if we’re doing it together.
If you or someone you know is struggling, the 988 Suicide and Crisis Lifeline is available 24/7 by call or text.

