11 Comments

You did it again, Suzi.

I hadn't even heard of the homunculus fallacy before, and now I have at least a grasp of the concept after an enjoyable read.

I look forward to the dualism chapter.

In the meantime, I'm off to have nightmares about humans-within-humans-within-humans like trippy Matryoshka dolls. So thanks for that!

Thanks so much, Daniel! Yeah, those homunculi are pretty freaky!

Brilliant piece. Further to it, I would humbly point towards several essays by Raymond Tallis. I am looking forward to the next essay. Thank you, John.

Thank you so much, John! I'm glad you enjoyed it. And thank you for the recommended reading. I'll do some digging :)

You help me understand why John Searle rejects Daniel Dennett’s views on conscious AI so vigorously. It may be true that more basic neural configurations exist, doing what they do just because that’s what they do, and can be identified in the human brain. They form a mechanical layer that helps produce consciousness. Check. Such a view may lessen the homunculus fallacy in that there is no need to explain the infinite regress. But how does that view explain consciousness? All it says is that we need a brain to be conscious. It answers the fallacy but explains nothing. It’s convenient, isn’t it, that this is exactly how AI works. A brain with basic and more complex parts is necessary but insufficient to explain consciousness.

We agree that we don’t really know what human consciousness is or why we have it. What is the urgency of drawing the conclusion that AI has human consciousness when we can’t agree that human consciousness is a mechanical production? Mind is the consequence of brain, but it isn’t brain. Clearly, AI is a tool, not a being. We have lots of convergent and divergent ways we could make arguments for consciousness in AI. But there is no firm basis for stating that AI experiences consciousness as humans do. AI can mimic aspects of consciousness, but there is more to consciousness than language and logic.

The danger in ascribing consciousness to AI is visible in the fear and frustration educators are experiencing because they believe AI does all the learning and hollows out the human. Much depends upon how the concept of “learning” is construed.

"What is the urgency of drawing the conclusion that AI has human consciousness when we can’t agree that human consciousness is a mechanical production?"

Well put!

Really enjoyed this, Suzi!

Thanks so much, Jem!

I have to say, I laughed out loud when I clicked on the homunculus sperm picture (awesome!) and found a tiny baby inside with a head that looks a lot like a doggie butthole. So funny.

Thanks for this article. I'd heard of the homunculus fallacy, but I didn't know about the sperm aspect of it. It seems silly at first, but when you think about it, in a way it makes sense. We see birds lay eggs and out pops a baby bird. Why shouldn't our eggs or sperm have the same thing going on? (Leaving aside the doggie-butthole homunculus head, of course.)

🤣 I never noticed the doggie butthole before! I can't unsee it now. And I will forever think of a homunculus as a little man with a doggie butthole head. I am forever grateful!

You’re welcome! 😆
