Why AI Thinks It’s a Know-It-All But Actually Knows Nothing

Every month, more than 500 million people turn to Gemini and ChatGPT for wisdom on everything from cooking pasta to awkward relationship advice to your kid's homework. But let's be real: if AI is suggesting you boil your pasta in petrol, maybe it's time to rethink taking its advice on real-world problems or algebra equations.

At the World Economic Forum in January, Sam Altman, CEO of OpenAI, reassured us with something that sounded promising: “I can’t read your mind and understand why you’re thinking what you’re thinking. But I can ask you to explain it, and decide if that reasoning makes sense to me.” He added, “I think our AI systems will eventually be able to do the same thing.” Basically, AI will give us reasons for its decisions, and we’ll get to decide if those reasons hold water or are just more computer-generated nonsense.

 

Knowledge Isn’t Just Spitting Facts

Look, Altman wants us to trust large language models (LLMs) like ChatGPT. He wants us to believe they can give transparent explanations for everything they say. But here's the kicker: without a good reason behind it, a belief isn't real knowledge. Think about when you truly feel like you know something. You're confident, right? Maybe because you've got solid evidence or a sound argument, or because someone you trust told you it's true.

The problem is, today's AI can't explain why it gives the answers it does, because, plot twist: it doesn't actually "know" anything. LLMs aren't designed to reason. They're trained on oceans of text to predict what comes next in a sentence, based on the data they've seen. So when you ask them something, they're just continuing the pattern in the most statistically likely way. Sure, it sounds legit, but there's zero actual reasoning happening under the hood.
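If you're curious what "continuing the pattern" looks like in practice, here's a minimal sketch using the small, open GPT-2 model as a stand-in (my choice for illustration; the models behind ChatGPT and Gemini are far bigger, and this isn't their actual serving code). All it does is ask the model to rank the most likely next words after a prompt:

```python
# Toy peek at next-word prediction with GPT-2 (an illustrative stand-in,
# not how OpenAI or Google actually generate answers).
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "To cook pasta, bring a pot of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Scores for every possible next token, given the prompt so far.
    logits = model(**inputs).logits[0, -1]

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)

# The model ranks continuations by how likely they are to follow the prompt,
# not by whether they are true, safe, or justified.
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}  ({prob.item():.1%})")
```

Run it and you get a handful of plausible next words, ranked by probability. Notice what's missing: nowhere in that loop is there a check for whether the continuation is true. Sounding likely is the only criterion.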

Hicks, Humphries, and Slater nailed it in their article “ChatGPT is Bullshit.” They argue that LLMs are built to churn out sentences that look like they’re aiming for truth—without actually giving a damn whether they’re right.

 

When AI Gets Lucky and Sounds Right

So, if AI isn’t giving us real knowledge, what is it doing? Honestly, a lot of times, it’s just getting lucky. Take what philosophers call Gettier cases (thank you, Edmund Gettier). These are cases where you stumble onto the truth without any real justification. Like, you’re right, but for the wrong reasons. AI does this all the time.

Let me give you an example from way back in the 8th century, courtesy of Indian Buddhist philosopher Dharmottara: You’re trekking across the desert, and you think you see water. Spoiler: it’s a mirage. But when you get to the spot, bam, there’s actually water under a rock. Did you know there was water? Nope. You got lucky.

That’s AI in a nutshell. When it spits out something true, it’s like finding water where you saw a mirage. The data it was trained on might contain the evidence to back up its claims, but those justifications weren’t part of the AI’s thought process—because it has no thought process. Just like the mirage didn’t create the water, AI’s answer didn’t come from reasoning.

 

Sam Altman’s Sweet Little Lie

Here’s where Sam Altman’s reassurance falls flat. When you ask an AI to explain its reasoning, it’s not giving you a real justification. It’s serving you a Gettier justification—an answer that looks like reasoning but isn’t. In short, it’s pure bull. And we all know bull when we smell it.

As AI systems improve, their fake explanations are only going to get more convincing. Eventually, we’re going to have two groups of people: those who realize AI is just playing a high-stakes guessing game and those who don’t. The latter? They’ll be living in some Matrix-like reality, oblivious to the fact that they’ve been duped.

 

Do AI Tools Have to Change?

There’s nothing inherently wrong with LLMs doing their prediction magic. They’re insane tools, no doubt. And those in the know (programmers, professors, etc.) already use AI with that in mind. They know it’s not perfect, so they fact-check, adjust, and refine the outputs. AI becomes more of a draft generator, and the human experts mold it into something accurate.

But, here’s the thing: most people aren’t experts. Think about teens trying to learn algebra, or seniors seeking financial advice. They don’t have the experience to tweak AI’s output. They need reliable information, not mirages. And if LLMs are the gatekeepers to that kind of critical info, we have to know when we can trust them. Unfortunately, AI can’t tell us how it justifies anything because it doesn’t. And that’s a problem.

Final Thought: Taste Before You Trust

We can all agree olive oil is better than gasoline for pasta. But AI might just have you second-guessing even that. And if we’re not careful, we might end up swallowing a lot of dangerous recipes for reality, never even knowing we should’ve questioned the justification in the first place.
