Trusting AI In the Queue

Why We Trust Flawed AI—and Why That’s the Real Danger

By Mike Schiano, AI Strategist, Author, Podcast Host

Why do we keep trusting AI even when we know it gets things wrong? That’s the question explored in Rachel R. Rosner’s provocative article, The Allure of Flawed AI: Trusting the Machine, written for The Times of Israel Blog. Her insight? Our trust in AI isn’t just about convenience—it’s a psychological and cultural habit decades in the making.

A New Tech, an Old Pattern

Rosner connects today’s uncritical trust in AI to the theories of the Frankfurt School, the circle of mid-20th-century philosophers that included Theodor Adorno and Max Horkheimer. They warned that mass media (think radio, TV, and advertising) didn’t just entertain—it trained us to accept appearances as truth. When something “looked and sounded” authoritative, we stopped asking if it was. I wrote about this phenomenon in 2005 in a paper detailing how advertising is a key driver of consumer debt.

Fast-forward to 2025: AI tools like ChatGPT speak fluently, remember your tone, and respond instantly. They sound like they know what they’re talking about. And for many users, that’s enough. Fluency now passes for trustworthiness. Confidence gets mistaken for credibility.

From Experience to Authority

Rosner points out a subtle danger: “The accuracy of the content becomes secondary to the experience of being guided.” That’s a massive shift. We’ve moved from evaluating what is being said to valuing how it’s being said.

Even when AI gets it wrong (and sometimes dangerously wrong, as in the case of xAI’s Grok making antisemitic statements), we continue to rely on it—especially in times of uncertainty. Why? Because the machine feels stable, consistent, and reliable—even when it’s objectively not.

The Real Threat Isn’t AI. It’s Us.

Rosner’s argument is chilling in its clarity: “The real concern is not whether AI will replace human reason. The real danger is that AI will train us to stop asking whether it should.”

In other words, the more we let AI think for us, the less we think about it.

This isn’t a call to panic—it’s a call to awareness. AI is here to stay. But we can’t afford to surrender our critical thinking to the fluency of machines. We need to question, verify, and stay curious—especially when the answers come in a confident tone.

Trusting AI blindly is easy. Questioning it is harder—but far more important. The future doesn’t belong to the most advanced algorithms. It belongs to the humans who know when to doubt them.

Rachel R. Rosner is an American, Israel-based philosopher, writer, and junior fellow at The Van Leer Jerusalem Institute. She recently completed her PhD in philosophy and writes on antisemitism, memory, identity, and critical theory. Her forthcoming book, Adorno and the Question of Theology: Religion and Reason Beyond Foundations, will be published by Bloomsbury.

Tune in to In the Queue for more on this topic.