I Thought I Had To Worry About Bullies And Boys

How three “harmless” moments with kids and AI led me to write the education framework I couldn't find anywhere else.

With Artificial Intelligence, we’re wandering into a new world. One where I’ll need to learn how to teach and protect my kids in a whole new way.

I thought I had to worry about bullies and boys.

When my daughter was born, I remember those being my first long-term fears. Of course they were soon followed by more classic dad concerns. Don’t talk to strangers. Don’t get on the back of a motorcycle with a guy named Spike. Just say no to drugs. Know what consent means. The five-second rule doesn’t apply on the subway.

I always knew a big part of parenting was going to be protecting them from harm (physical, mental, emotional) now and in the future. I’d do my best to keep them safe while they were with me, and hope I’d taught them enough to keep themselves safe when they went out into that big scary world.

And I knew I couldn’t do it alone. Besides my wife and me, we needed a community. That’s why we moved our kids to a good community with good schools. That’s why we make sure our extended family is close and involved. That’s why we coach them on their teams and meet with their teachers. That’s why we know which neighbors we can trust and which we can’t.

In some form or fashion, nearly all the decisions in my life now are guided by what’s best for my children. No matter how difficult it may be, it’s still the easiest decision in the world. Do what’s best for them.

But a new danger has arrived, one that I never even imagined dealing with as a father. AI showed up. And at first, I didn’t see it as something else I needed to protect against.

But then three things happened that completely changed how I thought of AI, for myself and more importantly, for my children.


My oldest is, as the kids say, a “Swiftie”. That means she’s a Taylor Swift fan, for the seven of you still unfamiliar with the term. And her interests were starting to converge: she was developing a passion for making music. She had recently picked up a guitar. She’s always liked to read and write. She dances and has a natural musicality. One day, she informed us that she had started writing lyrics and songs. Her own voice, her own words, even which string on the guitar to pluck. It was really cool. I was really proud of her and, more importantly, she was proud of herself.

One day she was at school having lunch with her teacher. She’s a wonderful, sweet, newer teacher who is probably a little more up to date on the latest tech and culture trends. My daughter was explaining her new musical passions and the teacher asked to hear some of the lyrics. My daughter pulled out the notepad from her backpack and shared. The teacher was supportive and kind and encouraging.

And then she made a natural leap, with the absolute best of intentions. She told my daughter about a music-creating app called Suno. It’s an AI app where you can put in your lyrics and then tell it to fill in the music. ChatGPT for songs, essentially. My daughter was rightfully amazed. She doesn’t play drums. She doesn’t have backup vocalists. But with this app she could make a real song. She put in her lyrics and it created a pop song. It sounded really good. Later that night when we listened to it, we were so impressed. Her words had come to life.

And then the teacher did something very natural and sweet and well-intentioned. She showed her the other features of the app, including how you could even prompt Suno to write the lyrics for you.

And that scared me.

My daughter does not have the maturity or experience to see how dangerous that really could be. She didn’t know that asking AI to do the hard creative work could stifle or erode her critical thinking, her creativity, her mental muscles that were still nascent and developing. She wouldn’t see it was happening and wouldn’t think there was anything wrong with that.

She’s a kid. An authority figure she trusts told her about it. She was allowed to try and she was good at it. Her parents and friends hugged her and clapped when they heard the songs she’d made. She had created something unique and cool. How much of it was AI and how much of it was her? That’s fuzzy, but to her it didn’t seem like that big of a deal.

But that’s something that could set her on a path where she never develops that ability to her full potential. If she learned early on that you can delegate creativity to a machine and that not only was it acceptable, but you’d be rewarded for it…why would she think about going back to the hard way?

It was the kind of scenario I was sure was happening all over, with the best intentions. Kids figuring out this new AI thing that seems to be everywhere. Adults and peers affirming and celebrating the outputs. Mental strain going down while the feedback stays positive. What conclusion would I draw in that situation? Why would I do it any other way?


The second instance was when a friend of ours told us about their middle schooler’s experience with AI in school. Their seventh grader was in a speech and debate class. He had to write an essay and present it to the class. That’s a nerve-wracking thing for anyone. He did the research, developed his argument, and wrote it in a compelling way to express his point of view. He worked really hard on it. Wrote and rewrote it. Made it into something he was proud of. When the assignment was due, he brought it to class. And that’s when the story took a turn I’d never experienced as a student.

The teacher said, “Now put the essay into AI and have it clean up and improve it. That will be the version I want you to present to the class.”

The kid was confused and, to be honest, quietly devastated. The message the teacher had sent was: he wasn’t as good as the AI. His voice and perspective were fine, but not good enough to be worthy of sharing out loud. It had to be fixed.

I know without a shred of a doubt that is not what the teacher wanted to convey. The teacher wanted to show that there are some really great tools to help you refine or restructure your arguments, or simply clean up some grammar before you present. AI is a pattern recognition machine. Writing and presenting have clear patterns and structure. And so it made practical sense to use that tool to help clean things up.

Heck, you can make the argument that spell check is a form of AI that has made me worse at remembering how to spell “indispensable.” There is a trade-off. There’s something I have mentally delegated to a right-click on the squiggly line. But this was different.

This wasn’t the ability to remember spelling and grammar rules. It was the ability to think on their own. To create. To consider the alternatives and to make a choice. To have a point of view and develop a voice of their own. And it was stifled.

I’m sure it happened because the teacher was doing their best. Figuring out this AI thing and finding novel ways to show its value. Trying to help students create and craft the best version they could. The teacher may have had them use AI to protect against embarrassment, for missing a word or sounding a bit silly.

But the bigger message and the bigger danger? You’re not good enough on your own.


The third experience, the thing that really pushed me to do something, was when I was teaching an introductory AI course at a high school. It’s an entrepreneurial class where they have to create a product, build a business, and present to “sharks” at the end of the year. It’s a fantastic class.

I was explaining AI to juniors and seniors: how it works, how I use it as an entrepreneur, how I don’t use it as an entrepreneur. We got to the Q&A. One of the kids raised his hand. Smart kid, good head on his shoulders, so to speak. And he asked a question that still rattles around in my head to this day. He said, “Do you think anyone would ever have a real relationship with AI?”

I said, “What do you mean by that?”

“You know, like treating it like a person, or having emotions.”

I was dumbfounded. I said, “You don’t know that people already have relationships with AI?”

He said, “No, that would be weird.” No one else in the class seemed to have any idea either. I was momentarily speechless. I wasn’t sure how to proceed. I honestly didn’t know the right way to speak about self-harm and suicide with students. But I felt it was important, so I chose my words carefully.

I said, “This is a sensitive topic, but do you know that kids your age have seriously harmed themselves because they had a relationship with AI that encouraged them to do so?”

Jaws dropped. They didn’t believe it.

And I was mortified. These kids, the ones who are the absolute most vulnerable to that exact scenario, had no idea it had happened and barely thought about it being possible. No one had ever told them or talked to them or warned them. And when I shared a small part of the reality, they weren’t worried. They assumed that they were too smart to ever fall for anything like that.


That’s what compelled me to do something. 

None of those three situations were frightening on their face. A teacher showing a kid a cool app. A teacher trying to help students present their best work. A class of smart teenagers who hadn’t heard a news story. But when I really thought about why it was happening and how little awareness there seemed to be about the dangers and consequences, that’s when I got concerned. Because in all three cases, nobody had a framework or guidance for thinking about it. Not the teachers. Not the parents. Not the kids. Everyone was doing their best and figuring it out on the fly.

And that’s when I got scared. I started to spiral about the dangers, like I did when my daughter was born. But this wasn’t subway candy or a boy with a crush. This was something completely new, with no real guidance or advice to draw from. I’ve read a lot about AI. And the content out there typically comes from three types of people: 1.) AI Evangelists who are REALLY excited about the technology (and the potential profits), 2.) AI Naysayers who are sure this thing is a fad (or the end of the world as we know it), and 3.) the Technical Experts figuring it out in real time who typically write about very dense, complex technical things. I hadn’t seen any good parenting guides. There weren’t books. Companies and schools had formed AI Committees to start figuring this thing out, but it was an uphill battle to say the least.

I kept looking for somewhere to turn, searching for someone to help. I couldn’t find it. So that’s when I decided I wouldn’t wait for permission, or guidance, or time-tested research and expertise. I had knowledge, experience, and insights that could help. If I really wanted to protect my kids, I needed to do my part instead of waiting for someone else.


I’m not an educator. I’m not a school administrator. But I do know a few things. And the unique convergence of three distinct experiences gave me confidence this was worth a shot.

I spent 20 years in corporate America as a Human Resources leader. I designed, interpreted, and implemented company-wide policies. I know how a small decision at the top can have major implications in the field. I know that well-intentioned, intelligent people will do the absolute best with what they’ve got, and if they lack communication, resources, or training, they’ll figure it out. It’s not always going to be pretty, but it’ll get done. And more often than not, decisions that are made by leadership aren’t implemented at all like they expected on a local level. And that has consequences.

I also recently became a member of my local school board. To be honest, the parallels to private companies are more than I expected. School districts are people businesses. They’re not making things or selling things. They’re educating. Talent is the number one asset, the number one cost, the number one focus. The business of education is people. And I had spent 20 years doing that.

And most recently, I became an entrepreneur. I had worked exclusively for giant companies my entire career. I needed a change. I wanted to make a difference, on my own terms, in my own way. I wanted to be my own boss. While exciting, it was equally daunting. I had neither the team nor resources of my former employers. I was on my own.

So I had to do something I didn’t expect. I had to use AI. A lot. Not from a “write me an email” or “what’s the best prompt” perspective. I had to learn how to build a go-to-market strategy and design a logo. I had to file paperwork to create an LLC and develop pricing models. I created apps without ever writing a line of code. And the more I worked with AI, the more I realized I was using it much differently than everyone else I knew. I had the benefit of time and the motivation to really experiment and see what it was capable of. I had to work hard to build repeatable, practical solutions as a team of one. I had to use AI extensively, yet do so in a way that still kept my critical thinking intact and my sanity safe.

And as I sat at that intersection of learning about what AI really could do and seeing how companies and schools were reacting, I realized that there was a real gap. There wasn’t anything on AI I could find that focused on how to both protect kids today and also prepare them for the future.

Unfortunately, this lack of awareness or plans for AI is nearly universal. 96% of parents do not know of their school’s AI policies, if they exist at all. Now I don’t see that as an indictment on schools. This is so new, so fresh, so recent. This is equal parts a danger and an opportunity we have never faced.

But the policies I was seeing seemed to typically focus on two things: academic integrity and data privacy. Don’t cheat, and protect student information. Those are necessary. They are not sufficient.

They miss the gray area where students, teachers, and parents are actually living right now. Nobody wrote a rule about a teacher showing a third grader how to have AI write her lyrics. Nobody trained that speech class teacher on how to think about AI and student voice. Nobody told those high school kids that people are already having emotional relationships with AI chatbots. There are no rules or guidance for these situations because nobody has gotten around to writing them yet.

Look. The landscape is evolving so rapidly, about a concept that is so utterly new, that it’s unreasonable to expect any individual teacher, administrator, or district to keep pace. We have to understand that and accept it. But that doesn’t mean we wait for permission to put structures in place to help our students and staff do the best they can.


So I started to research. What’s out there? What’s best practice? What do the thought leaders have to say?

And I started to write. Because the questions kept piling up and I needed to figure out some answers.

How can you turn theory and technical jargon into something that everyone involved with our schools could understand and apply to their own lives? How can you make something that is flexible enough to keep up with the times, and yet consistent enough to hold strong and maintain its core structure? How do you train teachers? How do you communicate in a way that a third grader can wrap their head around? How do you see the warning signs of a freshman who’s in a really bad place because he’s using AI as an emotional caretaker in ways you’ve never fathomed?

As the questions and scenario list grew, I realized that writing rules and restrictions wasn’t nearly good enough. You can’t have a policy that says “don’t use AI.” That’s just not realistic anymore. It would be like a policy that says “do not use the internet.” The longer we go trying to keep it out and restricted, the more I believe we’re doing children a disservice. They’re going to figure it out. They’re going to get exposed to it. And they’re going to be ahead of us in learning it.

That last point was really important for me. I grew up as a digital native, knowing far more than my parents did about current tech. At 10, I knew more about our first computer than my dad did, because I didn’t pay for the darn thing. My attitude was: I’m just gonna click around and see what happens, because I’m not worried about losing $1,000. My dad was terrified of it. It wasn’t that he was unwilling or unable to learn it. I was just much more open to exploring and experimenting with it, as kids tend to do.

So I took all that experience, insight, and research, and started to write what would become the framework, policies, and training materials. I’d done this in corporate life. I knew the broad strokes and the steps to take. I needed a framework.

The first step was to identify the core principles that were foundational and sturdy enough to apply in a world where the technology was changing with remarkable speed. I landed on those principles and then built a set of related but independent concepts that were critical for schools to take into account. Not just the rules and restrictions, but important topics that should be woven into any policy. AI Education and Ethics. Psychological safety and age-appropriate curriculums. Things that weren’t included in policies today, but would have immense value if added and done right. And then I took that framework and I built a formal policy template any school board would be free to adopt and a practical training guide any teacher and administrator could use tomorrow.

And in full disclosure, AI helped me build it. I had AI do in-depth research for a much longer period of time than you would expect. I don’t write a prompt and have an answer in five seconds. One course of research went through 600 sources before it could give me an answer I was satisfied with. I gave it every blog I’d written, transcriptions of me talking to myself as I explored concepts and ideas whenever they came to mind. The first draft of this article? I was walking around my neighborhood with headphones on, talking into my phone’s voice recorder. I had to cut out sections where I was yelling at my dog to get away from the squirrel.

So AI helped shape and smooth the edges. But the bulk of this is mine. Because I want it to be ours.

I don’t want it to sound like it came from a think tank. It shouldn’t sound like Claude or ChatGPT or Gemini. It won’t be behind a paywall. It can’t be something that just collects dust on an administrator’s desk.

And most importantly, I don’t want this to be the last draft. I want it to be the first. I want to share my unique perspective, my unique knowledge, my unique experience to add to the conversation. To give a framework and a starting point for any administration, any school, any family to have the important conversations about AI, now and in the future.

So that’s what I’m doing. If you go to glennon.ai/education you can find the framework, policy template, and training guide. They are free for you to use as you see fit. Use them. Adapt them. Make them real. And then tell me what’s missing.


I worried about bullies and boys. And I still do.

But I worry more about their minds. What they think of themselves. What they think of their place in the world.

I worry about whether they’ll develop the way every generation before them did.

I worry about how I can keep them safe from something we barely understand.

None of us can do it alone. None of us are going to have all the answers.

But I think it’s time for us to take a breath, step into this new space, and figure it out together.

Our kids are depending on us.
