We Finally Have a Way to Kill Bad Consciousness Theories (And It's About Time)


April 29, 2026

The Consciousness Theory Mess We've Been Ignoring

Let me paint you a picture: Scientists right now have over 325 different theories explaining what consciousness actually is. Three. Hundred. Twenty. Five. That's not a field of science—that's a field that's completely lost the plot.

Think about that for a second. In physics, we have maybe a handful of major competing frameworks. In biology, we've settled on evolution as the organizing principle. But consciousness? It's basically a free-for-all where literally anyone can propose an explanation and call themselves a researcher.

The problem isn't that we're thinking hard about consciousness. The problem is that we're thinking in a thousand different directions at once with no referee, no scoreboard, and no way to actually determine who's right.

Why This Matters (And Why It's Frustrating)

Erik Hoel, a neuroscientist at Bicameral Labs, is genuinely frustrated by this situation—and honestly, he has every right to be. He describes the current state of consciousness research as a "pre-paradigmatic" era, which is academic speak for "we haven't figured out the basic ground rules yet."

Here's what bugs him the most: it's easy to invent theories, nearly impossible to disprove them.

A new AI chatbot can generate consciousness theories faster than you can read them. But once someone invents a theory, there's no mechanism to test whether it's actually correct. So what happens? Researchers end up promoting their favorite ideas like they're arguing about pizza toppings, not building knowledge.

The "Theory-Killing Machine" Explained (It's Simpler Than It Sounds)

Hoel's solution is brilliant precisely because it's straightforward. He's building what he calls a "consciousness-theory-killing machine"—though honestly, it's more of a conceptual framework than an actual machine (sorry to disappoint anyone imagining futuristic lab equipment).

Here's how it works:

The Core Idea: Substitution Arguments

Imagine two systems. System A processes information and reports "I see green." System B does the exact same thing—same input, same output, same behavior. The only difference? System B has completely different internal wiring.

Now here's the question Hoel loves asking: If a theory says System A is conscious but System B isn't, why? What's the difference if they behave identically?

This isn't philosophy—it's a stress test. Any theory that can't answer that question without contradicting itself... well, that theory just failed the crash test.
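To make the substitution argument concrete, here's a toy sketch in Python (all names are hypothetical illustrations, not Hoel's actual framework): two systems with identical input-output behavior but completely different internals, and a check that a theory claiming to judge consciousness can't give them different verdicts without contradicting itself.

```python
# Toy substitution-argument sketch (hypothetical names, a simplification,
# not Hoel's actual framework). Two systems behave identically but differ
# in internal wiring; any theory that assigns consciousness based on
# behavior must give them the same verdict, or it fails the stress test.

def system_a(wavelength_nm: int) -> str:
    # Wiring A: a direct threshold check on the input.
    return "I see green" if 495 <= wavelength_nm <= 570 else "I see something else"

def system_b(wavelength_nm: int) -> str:
    # Wiring B: completely different internals (a precomputed lookup table),
    # yet identical input/output behavior to system_a.
    table = {w: ("I see green" if 495 <= w <= 570 else "I see something else")
             for w in range(380, 751)}
    return table[wavelength_nm]

def behaviorally_identical(f, g, inputs) -> bool:
    """Same output for every input: the 'identical behavior' premise."""
    return all(f(x) == g(x) for x in inputs)

def stress_test(theory, f, g, inputs) -> bool:
    """If two systems behave identically, a coherent theory must agree
    on both. Returning False means the theory contradicted itself."""
    if behaviorally_identical(f, g, inputs):
        return theory(f) == theory(g)
    return True  # no constraint when the behaviors actually differ

# A naive "theory" that secretly keys on identity rather than behavior --
# exactly the kind of hidden assumption the substitution argument exposes.
def naive_theory(system) -> bool:
    return system.__name__ == "system_a"

inputs = range(380, 751)
print(behaviorally_identical(system_a, system_b, inputs))   # True
print(stress_test(naive_theory, system_a, system_b, inputs))  # False: eliminated
```

The point of the sketch is the shape of the argument, not the toy "theory": any criterion that distinguishes System A from System B must point to something other than behavior, and it has to say what that something is.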

Testing at Scale Using AI as a Crash Dummy

This is where it gets really interesting. Hoel wants to run these tests across biological brains, animal brains, neural networks, and AI systems. But here's the clever part: he's not using AI to prove machines are conscious. He's using them as test subjects because they're infinitely flexible.

You can't rewire a human brain to test a theory. But you can completely restructure an AI system. You can add feedback loops, remove them, flatten the architecture, make it biologically weird—whatever you need to test whether the theory still makes sense.

If Theory X says a system is conscious when it has certain features, but the theory changes its verdict when those same features are rearranged into a different configuration, Hoel sees a red flag. That's a contradiction. That theory just got eliminated.
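The rearrangement test can be sketched the same way (again, hypothetical names and a deliberate toy, not Hoel's actual procedure): model a theory as a verdict over a system's architectural features, then check that the verdict doesn't flip when the same features are wired in a different order.

```python
# Toy rearrangement test (hypothetical names; a simplification, not
# Hoel's actual procedure). If a theory's verdict flips when the same
# features appear in a different configuration, that's the contradiction
# the framework is hunting for.

from itertools import permutations

def theory_x(features: tuple) -> bool:
    # A brittle theory: it requires "feedback_loops" to sit earlier in
    # the wiring order than "global_workspace" -- an ordering claim that
    # shouldn't matter if only the presence of the features matters.
    fs = list(features)
    return ("feedback_loops" in fs and "global_workspace" in fs
            and fs.index("feedback_loops") < fs.index("global_workspace"))

def survives_rearrangement(theory, features: tuple) -> bool:
    """A theory whose verdict depends only on which features are present
    must return the same answer under every rearrangement of them."""
    verdicts = {theory(p) for p in permutations(features)}
    return len(verdicts) == 1  # one consistent verdict, or the theory fails

system = ("feedback_loops", "recurrence", "global_workspace")
print(survives_rearrangement(theory_x, system))  # False: verdict flips, red flag
```

In the real project the "features" would be actual architectural changes to a neural network, not strings, but the logic of the elimination step is the same: hold the ingredients fixed, vary the configuration, and watch for a verdict that contradicts itself.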

The Brutal Beauty of This Approach

What I love about this method is how ruthless it is. Theories don't get to hide behind metaphors or philosophical hand-waving. They have to make predictions. They have to survive tests. They have to actually risk failure.

Hoel calls this "logical judo"—he's building mathematically precise scenarios, exposing logical contradictions, and then using those contradictions to eliminate theories that don't hold up. It's like a game of chess where nonsensical moves simply aren't allowed.

The goal isn't to crown a winner tomorrow. It's to thin out the 325 theories down to the ones that actually survive scrutiny. Fewer flowers, but stronger ones.

There's Still a Catch

Now, here's the honest part: Even Hoel admits this framework might not solve what philosophers call the "hard problem" of consciousness—the question of why internal experience feels like something at all. Why is seeing red different from processing data labeled "red"?

His machine can help eliminate bad theories, can force remaining theories to be more rigorous, can separate the serious ideas from the nonsense. But whether it ultimately answers the deepest mystery of consciousness? That's still an open question.

Which, honestly, makes this approach even more valuable. By narrowing the field and forcing theories to be testable, we're at least making real progress instead of just accumulating more books on the library shelf.

Why a Bookstore Kid Ended Up Here

Hoel's background is kind of perfect for this job. He grew up working in his mother's independent bookstore, surrounded by stories and ideas, which taught him to think deeply. Then he discovered science and realized you could actually test ideas instead of just debating them forever.

That's exactly the energy we need in consciousness research right now: someone willing to say "enough philosophical hand-waving, let's build a system that actually works."

The Bottom Line

We're living in an era where consciousness research is either exploding with potential or drowning in its own theories, depending on who you ask. Hoel is trying to be the bridge between those two states—taking the explosion seriously while actually sorting through it.

Will his framework be perfect? Probably not. Will it completely solve consciousness? Almost certainly not. But will it finally give us a way to separate real theories from elaborate guesses?

That's exactly what we need.

#consciousness #neuroscience #ai #scientific-method #cognitive-science #erik-hoel #philosophy-of-mind