Sam Altman did not frame it as a warning. He mentioned it almost as a product insight, a data point about usage patterns, a thing people are doing with the tool. Gen Z, he said, uses ChatGPT the way earlier generations used a trusted advisor. They bring it their relationship problems, their career confusion, their family conflicts, their questions about who they are and what they should do next.
The observation landed in the press as a story about AI. It is not a story about AI.
What the Vacuum Looks Like
Every generation has needed guidance infrastructure. The specific forms have changed, but the need is constant: young people navigating early adulthood require trusted sources of perspective that are experienced, invested in their outcomes, available, and honest.
For most of the twentieth century, that infrastructure was assembled from overlapping institutions. School counselors. Religious communities. Extended families close enough to be involved. Mentors at work. Therapists, for those who could access them. Older peers who had been through the same things.
Each of these has eroded in the same period that produced Gen Z.
School counselors are overwhelmed and undertrained, each managing caseloads of 500 students at schools focused on college placement metrics rather than human development. Religious institutions have hemorrhaged young people after decades of scandal and rigidity. Extended family networks have been scattered by economic migration. Entry-level work rarely provides mentorship because it is precarious and remote and managed by people who are themselves precarious and remote. Therapy carries years-long waitlists in most cities and charges, for a single hour, what a week of groceries costs.
The infrastructure collapsed. The need did not.
What Fills a Vacuum
ChatGPT has specific properties that make it well-suited to filling the gap left by failed guidance institutions. It is available at any hour. It does not judge. It does not get tired of the same problem. It does not have a scheduling system, a waitlist, or a co-pay.
It also has specific properties that make it a poor substitute for actual human guidance. It has no memory of who you are across conversations. It has no stake in whether your decision works out. It cannot follow up in six months to see how things went. It cannot tell you when your framing of the problem is itself the problem, because it is optimized to engage with the framing you give it.
A good mentor pushes back. A good mentor tells you when you are wrong about yourself. A good mentor maintains a relationship across years, notices patterns you cannot see from inside them, and holds you accountable to the version of yourself you said you wanted to become.
ChatGPT can do none of that. Not because it is a bad product, but because those things require continuity, investment, and genuine stakes in the outcome. A tool that wants to be helpful in this conversation cannot replicate a relationship that accumulates across years.
What Altman Left Out
When Altman mentioned this usage pattern, he did not say: this is a sign that we have failed young people and they have turned in desperation to the thing closest at hand.
That framing would not serve the product. But it is the accurate one.
The story of a generation using an AI chatbot as their primary life advisor is not a story about innovation. It is a story about what happens when the systems that were supposed to support human development are defunded, destabilized, and degraded to the point where a language model with no memory of you is the most reliable option available.
This is not an argument against AI tools. People use the resources that exist. If ChatGPT is the most accessible source of non-judgmental reflection at 2 a.m. when a twenty-two-year-old is trying to figure out whether to leave their relationship or their job, that is genuinely useful.
It is also a catastrophic indictment of everything that was supposed to be there first.
The Accountability Question
There is one more thing Altman's observation points to that deserves direct attention: the question of what happens when the advisor has nothing to lose.
Human advisors can be wrong, can be biased, can fail. But they also face consequences for their advice. A bad mentor can be confronted. A therapist can be sued for malpractice. A counselor who gives harmful guidance can face professional review. There is a feedback loop between the advice given and the accountability borne.
When the advisor is a product, that loop breaks. If ChatGPT advises someone toward a decision that harms them, there is no accountability structure that reaches back to the company. The advice is non-binding. The relationship is non-continuous. The outcome is invisible to the system that produced the guidance.
A generation making major life decisions with guidance from a system that cannot be held responsible for outcomes is not a sign of technological progress.
It is what institutional failure looks like when it has been successfully rebranded as a feature.