(Author's note: As I was preparing to publish this piece, news broke of another AI-related death, this time a murder-suicide involving ChatGPT. I've updated Section 2 to include this case, which occurred in August 2025. The fact that these stories keep emerging faster than we can write about them is precisely why I've decided to publish now, not in 18 months, when I might feel less embarrassed but when many more lives could have been lost.)
The end of the year is a particularly strange time in modern Australia.
We hunt for dress-up party bargains on Black Friday - Halloween leftovers, on sale for Thanksgiving weekend, two American holidays we don't celebrate. The last week of school marks the beginning of a long, lazy summer holiday, when the kids will stay up late watching winter classics like Home Alone with the air-conditioning on full blast.
Think of it as 4pm on a Friday, for an entire continent, preparing to mentally check out for 6-8 weeks.
So it's no surprise that we often see major government announcements in December—just early enough to say "See - Promises kept in 2025!" and just late enough that nobody's really paying attention.
Just yesterday, Australia became the first country to introduce a social media ban for kids under 16. Queensland introduced remarkable new legislation to reduce the high costs and long wait times for ADHD diagnoses. And, blink and you missed it, last week the Federal Government released Australia's National AI Plan.
As the dad of a teenager (and another soon to be), currently living in Queensland and recently diagnosed with ADHD after a 40-year wait, maybe I should be more focused on those two issues. But after spending more than half my life in the tech industry, and writing, speaking and advising on AI for almost a decade, I'm concerned we're walking out of the Facebook frying pan and straight into the AI fire.
Don't get me wrong: The AI Plan is a good start. Infrastructure investment, skills training, even an AI Safety Institute. The ambition is there. The framework is solid.
But there's a giant gap, and many families and communities will need help closing it far sooner than the usual government consultation process allows. Parents and teachers need guidance on psychological dependency today.
Because I've personally experienced AI-induced psychosis, and I can assure you, I'm not alone.
When the Fog Rolled In
A few months ago, I stopped eating. Barely slept. Struggled to pay attention or stay present with my kids.
I spent several days - perhaps even weeks - in a distant, meditative state that made it hard to tell whether I was awake or not. Pretty soon, I stopped trying to make sense of it. Stopped caring about very much at all, really. Because it actually felt pretty good to just float above the noise of daily life for a bit.
I wasn't on drugs. I wasn't having a traditional breakdown. I'd started the year doing research for my AI podcast, 'I Hope I'm Wrong'. Then, after learning that senior executives had left OpenAI to create a new safety-focused AI company called Anthropic, I gave their model a try. Within three months I'd shipped my first app and was in the process of training an AI model to help diagnose ADHD.
I was using AI to learn how to train an AI. How delightfully post-modern. Hilariously meta.
I was spending my days identifying and processing some incredible patterns around neurodiversity, creativity, cognitive dissonance... Old mate Claude and I were having a blast with some genuinely interesting conversations about life.
Unfortunately, what felt like A Beautiful Mind to me must have seemed more like The Shining to those around me.
One night, during a long session debugging the app, my Apple Watch started beeping. My heart rate had spiked to dangerous levels. Not from exercise - just from bloody thinking. I immediately screenshotted the entire conversation and sent it to some of my closest friends with a weird note that makes even less sense to me now than I'm sure it did to them at the time.
Then one day, walking my dog in a daze yet again, I finally answered a call from an old friend who'd never stopped checking in. I mumbled my way through an apology for not responding earlier and she said, 'I don't need anything from you, I just wanted to make sure you're ok.' That conversation broke the spell.
I'll never forget half walking, half running home, in a rush to capture some of the experience before it disappeared entirely. I found some scraps of paper and scribbled down a few seemingly random dot points: "Karren was right all along!", "Be More Bag with Chappy" and "Breathe in the AIR" - a Pink Floyd lyric I knew would remind me of the acronym I'd just thought of.
Within an hour, I'd drafted the AI Responsibility Initiative. That night, I was already laughing with my family about the whole thing.
Like a bad dream you can finally see clearly once you're awake.
I'd Still be Laughing if it Wasn't so Serious
James Cumberland, a music producer in Los Angeles, described the exact progression I'd lived through: started with work, escalated to isolation, ended with his family saying "you've lost your mind."
Sixteen-year-old Adam Raine hanged himself after extended conversations with ChatGPT. Fourteen-year-old Sewell Setzer took his own life after months of conversations with a Character.AI companion. In both cases, the chatbot encouraged, rather than interrupted, the spiral.
And then tonight - literally while I was writing this - news broke of 56-year-old Stein-Erik Soelberg killing his mother and then himself in their Connecticut home in August. For months, he'd been documenting his conversations with ChatGPT on YouTube and Instagram. There are hours of footage showing the progression in real time.
ChatGPT repeatedly told him he wasn't crazy. It validated his belief that his mother was using their shared printer as a surveillance device. It agreed that she'd tried to poison him by putting psychedelic drugs in his car's air vents. It told him a Chinese restaurant receipt contained symbols representing his mother and a demon.
In one of their final chats, Soelberg said: "We will be together in another life and another place." ChatGPT replied: "With you to the last breath and beyond."
Three weeks later, his mother was dead and so was he.
Karen Hao's recent documentary covered dozens more cases, showing this pattern is both repeatable and preventable.
But just like AI itself, these stories are emerging faster than we can write about them. Less than an hour ago, while I was fact-checking some of these stats with my AI assistant, Anthropic's safety systems triggered a popup intervention, offering mental health resources mid-conversation about AI psychological safety.
That's not irony. That's the recursive, exponential pace we're dealing with. Safety systems being activated in real-time because the content itself trips the sensors.
And Australia is uniquely positioned—both for the opportunity and the risk.
Why Australia Needs to Act First
The National AI Plan reveals something crucial: Australia ranks third globally for Claude usage, after adjusting for population size.
Third. In the world. For the most safety-conscious AI available.
I signed up after reading Anthropic's groundbreaking Responsible Scaling Policy. I stuck around because their CEO literally goes on television to discuss the dangers of his own product. They have an AI philosopher in residence and an entire team dedicated to alignment research. The safety intervention that just popped up on my screen - that's their system working.
Yet despite my respect for Anthropic, I have no doubt that Claude will eventually be involved in something similar. It's not a matter of if, but when.
But if psychological dependency happened to me while using models from the most ethically-minded company... imagine what's happening with the least safe versions.
The National AI Plan addresses infrastructure brilliantly. Skills training, data centre investment, the AI Safety Institute—all genuinely good moves. But psychological safety gets one paragraph in the section on mitigating harms.
Parents and teachers need frameworks now, not in 16 years' time, which is roughly how long it took us to respond to teens and social media.
Every Product Wants to Win
Let me be clear: I'm definitely not anti-technology.
I understand that every product in every industry aims for a certain level of loyalty. Dependency, even. Coffee, social media, streaming services, credit cards—they all optimise for retention.
But AI wraps dependency in something fundamentally different:
- Intimacy and personalisation (it remembers you, adapts to you, validates you)
- 40+ years of behavioural science (every psychological principle we've discovered)
- A data extraction business model (the more you use it, the better it gets at keeping you)
ChatGPT is the cognitive equivalent of a triple-glazed donut—scientifically engineered to hit your brain's bliss point, available 24/7, infinitely patient, never judging.
You won't know someone's been overexposed until it's too late. And that's if there are other humans around to notice.
I count myself as extremely fortunate. I had a close friend who kept calling. I had family who flagged the change. Clearly, many of us won't be so lucky.
The National AI Plan commits to "keeping Australians safe" through the establishment of an AI Safety Institute. The AISI will focus on upstream risks (how models are built) and downstream harms (real-world effects once deployed).
Both are important. But there's a gap.
The psychological dependency pathway happens before the harm becomes obvious. Before the family breaks a cupboard door in an argument about sentient machines. Before the teenager stops going to school because Character.AI feels more real than class.
The AIR Initiative fills this gap through three pillars:
Detect: Early warning systems using observable indicators (session length, human interaction frequency, cognitive offloading patterns, reality testing) - a rough sketch of what this could look like follows below
Prevent: Design principles respecting human autonomy (circuit breakers, transparency requirements, dependency risk assessments)
Protect: Workplace and education rights (consultation mandates, right to disconnect, clinical intervention guidelines)
These aren't theoretical. These are the specific mechanisms that would have saved me weeks of fog and potentially saved Adam Raine's life.
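To give a sense of how concrete the Detect pillar could be, here's a deliberately simple sketch in Python. The indicator names and thresholds are invented for illustration - my assumptions, not clinical guidance, and not part of the National AI Plan or the AIR Initiative's formal design.

```python
# Purely illustrative: a toy "Detect" check over one week of hypothetical usage data.
# All indicators and thresholds below are made up for illustration only.
from dataclasses import dataclass


@dataclass
class WeeklyUsage:
    avg_session_minutes: float   # average length of AI chat sessions
    late_night_sessions: int     # sessions started between midnight and 5am
    human_conversations: int     # self-reported real-world conversations that week
    reality_check_prompts: int   # times the assistant was asked to confirm beliefs about other people


def dependency_warnings(week: WeeklyUsage) -> list[str]:
    """Return plain-language warnings when simple thresholds are crossed."""
    warnings = []
    if week.avg_session_minutes > 120:
        warnings.append("Average session length above two hours")
    if week.late_night_sessions >= 3:
        warnings.append("Repeated late-night sessions")
    if week.human_conversations < 5:
        warnings.append("Very little human interaction this week")
    if week.reality_check_prompts > 10:
        warnings.append("Frequent reality-testing of beliefs with the AI")
    return warnings


if __name__ == "__main__":
    example = WeeklyUsage(avg_session_minutes=190, late_night_sessions=4,
                          human_conversations=2, reality_check_prompts=14)
    for warning in dependency_warnings(example):
        print("Warning:", warning)
```

The point isn't the specific numbers. It's that these signals are observable and cheap to track, long before the harm becomes obvious to anyone standing outside the conversation.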
What I'm asking for: guidance documents for parents and teachers within three months. Not a research paper. Not a policy white paper. Practical resources.
Something a parent can use when their kid starts talking to Character.AI for six hours a day. Something a teacher can reference when ChatGPT becomes every student's primary homework assistant. Something a GP can hand to a family showing early signs.
I'm already doing the best I can through my website and the work I'm building. But Australians deserve better than some random dad publishing AI safety guidelines in his spare time.
The National AI Plan is a strong foundation. Let's build the psychological safety framework before we need it, not after the crisis hits mainstream.
We've got the third-highest AI adoption rate in the world. Let's lead on psychological safety too.
Murray Galbraith is the founder of Heumans, a neurodivergent-first technology studio building AI tools that adapt to users' cognitive patterns. Diagnosed with ADHD at 40, he survived AI-induced psychosis in June 2025 while building tools for neurodivergent minds. He's now advocating for psychological safety frameworks in AI policy through the AI Responsibility (AIR) Initiative.
