Pascal’s Wager in the Age of AI
A portrait of Blaise Pascal on the left. On the right, stock image from Canva.com.
I remember having a conversation with a few friends in college about religion and God and all things concerning the universe and the meaning of human existence. Deep philosophical stuff. And yes, this was in a dorm room while imbibing various illicit things. You know the vibe.
A friend of mine who had never struck me as particularly religious had a strong take on the value of faith. He said he thought Pascal’s Wager was a compelling argument.
I was surprised by his support of organized religion and I’d never heard of this Pascal guy or the bet he made. So I asked him to explain.
(Note: For a more thorough analysis beyond Wikipedia and the drunken thoughts of my buddy in the dorm room, check out this Medium post.)
My friend said that it’s not really a question of right and wrong as much as it is a way to make an educated decision based on potential outcomes. Basically, what’s the harm in believing in God? The worst case scenario (you being wrong) is that you have a little less fun in this mortal world. The best case scenario is eternal happiness and bliss. When comparing that to the inverse (not believing in God), the consequences flip dramatically. A lot of fun in this world against the possibility of eternal damnation in the fires of hell.
Which brings me to AI.
Ok, so looking back at that transition I realize it may have been a pretty aggressive hard cut. “Eternal damnation in the fires of hell. Which brings me to AI.” That’s a strong choice.
I’m not getting into topics of faith, or how the parallels between technology and religion are stronger than you’d expect. Not going there.
Instead, I want to look at the logic of Pascal’s Wager and explore how I think it applies to learning AI.
More specifically, I want to ask myself “Is it worth the effort to learn AI or not?”
To do that, I need to define the two broadest potential outcomes and apply two different scenarios to those outcomes.
To me, the potential outcomes for this exercise are either A.) AI is truly transformational or B.) AI is another technological fad that fizzles out or has some modest impacts on society. For the scenarios, it’s really just imagining two general types of people (when it comes to opinions on AI): The Skeptic (“I’m not bothering to learn this AI stuff”) and The Optimist (“I really think AI is going to change the game, I better jump on board”).
I think a lot of people are getting distracted and don’t see this exact question. The binary views being presented in the media (which don’t reflect the actual range of views and potential outcomes) are either doomers/naysayers or evangelicals/believers.
They ask if AI is good or bad. If it’s quality or crap. If it’s safe or scary. These are all good questions. But none of them is the important one for my life and career, at least not right now.
The premise Pascal had was essentially one of risk vs reward. To me, learning AI is very similar. Let’s walk through the potential outcomes and scenarios, and why I think learning these AI tools is absolutely worth the effort.
Outcome A: AI is truly transformational.
Skeptic’s Scenario: I don’t believe it’s transformational. I choose NOT to learn it. I choose to think it’s a fad and I don’t waste my time and energy trying to keep up.
Skeptic’s Result: I’m left behind. The world embraces AI (and when I say “world”, I’m talking about businesses and leaders) and I do not have the requisite skills to do well in the workforce and maintain the income and lifestyle I currently enjoy.
Optimist’s Scenario: I believe it’s transformational. I choose to learn it. I choose to think it’s here to stay and I better get ahead and stay ahead of the curve.
Optimist’s Result: I’m more skilled and knowledgeable than most, and I can leverage AI to my advantage professionally and personally. I don’t get left behind. I don’t get replaced. I’m the one helping build the economy and jobs of the future, and I’m safe.
Outcome B: AI proves to be a fad and fizzles out.
Skeptic’s Scenario: I don’t believe it’s transformational. I choose NOT to learn it. I choose to think it’s a fad and I don’t waste my time and energy trying to keep up.
Skeptic’s Result: I’m fine. I didn’t spend energy and time and money learning AI and so I have more of it to spend on things I enjoy or find valuable. My life is not much different. My way of thinking and my skillset haven’t changed. I haven’t spent a lot of time learning, so my abilities are what they’ve always been.
Optimist’s Scenario: I believe it’s transformational. I choose to learn it. I choose to think it’s here to stay and I better get ahead and stay ahead of the curve.
Optimist’s Result: I spent more energy and time and money learning AI than my Skeptic peers. I didn’t get to spend it on other things I enjoy or find valuable. But I learned. And as a result, I built real skills that help me in ways far beyond just AI.
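The two outcomes and two scenarios above form a simple 2×2 decision matrix, and the wager logic is just a comparison of worst cases. Here's a minimal sketch in Python; the payoff numbers are invented purely for illustration (only their relative ordering matters to the argument):

```python
# A toy model of the wager as a 2x2 decision matrix.
# payoffs[scenario][outcome] = rough "value" of that combination.
# The numbers are illustrative assumptions, not measurements.
payoffs = {
    "skeptic": {
        "transformational": -10,  # left behind in the workforce
        "fad": 0,                 # life goes on unchanged
    },
    "optimist": {
        "transformational": 10,   # ahead of the curve
        "fad": 2,                 # time spent, but transferable skills gained
    },
}

def worst_case(scenario: str) -> int:
    """The minimum payoff a scenario can produce across both outcomes."""
    return min(payoffs[scenario].values())

# The wager picks the scenario with the better floor, regardless of
# which outcome you predict.
best = max(payoffs, key=worst_case)
print(best)  # the optimist's worst case beats the skeptic's
```

The point the code makes explicit: you don't need to estimate probabilities for Outcome A vs. Outcome B. As long as the optimist's worst case dominates the skeptic's, the choice is the same under either prediction.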
Needless to say, I fall closer to the Optimist’s view. And I can absolutely say, firsthand, that whether it’s Outcome A or Outcome B, I see tremendous value for the Optimist either way.
The more I work with AI, the more I see that I am treating it like a group of employees that work for me. Some are really smart and I can wind them up and let them go. Some need far more onboarding and coaching. Some make things way too complicated and it’s better if I just do the thing myself.
But I’ve enjoyed taking the time to really step back and think about what I want. If I have 100 “employees” at my disposal, what’s the most effective way to use them? Asking that question and figuring out the answer has nothing to do with AI skills. It’s management. It’s leadership. It’s organizational design.
If I am vibe coding an app with Replit, I’m not really learning how to write code. I’m learning to work with a product engineer and software developers, figuring out how exactly to articulate my vision, how to give feedback, and how to check their work. If you gave me a team of 10 people or one AI tool, I’m doing the same thing.
I’m trying to start a couple of businesses, but I’m not worried about some of the research necessary to build a strong foundation. I’ve got a researcher to do that. What I worry about is the quality of their results. I need to look at it thoroughly. Ask them to cite their sources and tell me their approach. I need to be the decision maker.
I’m also the one who sets the tone on style and voice and taste. It doesn’t matter what AI thinks my writing style should be or my marketing voice. The AI works for me. It will do what I say. It will have great ideas and inputs and alternatives. But it starts with me. So what am I learning? What is my voice? I can’t complain that I don’t like how AI writes (although it’s getting better) if I can’t actually articulate what my voice, my style is. So it’s forcing me to define it. To explore it. That’s a reason I write. Being frustrated with the outputs of AI has led me to do the hard thing and figure out how I write and communicate. I’ve written more in a month than I did in a decade (PowerPoints and emails not included). That’s because of AI, in a weird way.
And I’m learning about all kinds of tools. Vibe coding apps. Voice to text apps. API Keys and image generation and all of that. But I’m not spending my time trying to become an expert at them. Instead, I’m being a creative and being a leader. If I had all these tools at my disposal, what would I do with them? It’s such a shift for me. I have to zoom out and focus on what I imagine and what I want it to look like. Then I need to be creative and find tools to get me there.
In both Outcomes, I don’t see a significant downside to being the Optimist, and there’s real risk to being the Skeptic. This isn’t about predicting whether AI will transform or fade. This isn’t about whether it’s “good” or “bad”. This isn’t about any of that.
Because even if AI fizzles out, I gained a tremendous amount of value out of the journey. I now have new skills. I have learned how to be more creative. How to build things and teams. I have developed leadership abilities and learned how to manage others to get the results I desire. I have better defined my voice, my point of view, my approach. I am a better thinker and leader.
To me, it’s a question of do you take this opportunity to learn and improve your core skills as a thinker and leader? Because regardless of outcome, learning AI will help develop skills that lead to success in the future. Those skills transfer. Those skills are universal.
Stop worrying about which outcome is going to prove true. Instead, focus on which approach gives you the best odds of success in the future, regardless of the outcome.
Pascal said that it was worth it to embrace the thing that had much higher upside and much less risk. What do you have to lose? For me, AI has similar themes. If I lean in and learn, I’m ahead of the pack if AI is here to stay, and I’m more skilled if it fades away. That seems like a pretty easy choice for me. What about you?

