Discussion about this post

Mike Smith

This has long been my concern with the word "representation": it seems to imply something being presented to an inner observer (re-presentation). If we use words like "schema", "model", or even "reaction cluster" or "early dispositional pattern", it becomes more evident that this is actually part of the processing of a system, something we can imagine happening in a computer or dynamical system.

It doesn't surprise me that everyone is using "representation" to mean different things, since everyone is using words like "consciousness", "mind", and "emotion" to mean different things as well, often even the same person in the same conversation. This language ambiguity, I think, gives the impression of deep mysteries. When we use more precise language, mysteries remain, but they seem a lot less intractable, more conducive to scientific investigation.

Interesting post, as always, Suzi!

Wyrd Smythe

One challenge is that "meaning" is a high-level concept, restricted, I think, to humans: an abstraction. We create meaning; it's not something out there that we find. Meaning to you may not be meaning to me. It's a rabbit-hole concept like "consciousness" or "representation" or "real". I'm not sure it's possible to define such nebulous concepts effectively. (Endlessly palatable for philosophers, though.)

Maybe a problem with representation is its pigeonholes: "vehicle", "target", and "consumer". Those work okay for external symbols, but, as you say, "And the consumer is… who, exactly? [...] There’s something off here." The notion of natural signs seems more on target.

I thought about the way we train LLMs. The encoding that results from their training seems more aligned with natural signs — THIS experience causes THAT encoding — than with representations (symbols standing for experiences). In part that's because it's impossible to say exactly *where* facts are stored in an LLM. There are no concrete symbols, just a unified set of parameters, like a unified set of trained neurons in a brain.
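
To make that concrete, here's a toy sketch (my own illustration in Python; the names are made up, and no real LLM works this simply) of a fact living as a direction spread across a shared parameter vector rather than sitting at any single address:

```python
# Toy illustration of distributed encoding: a "fact" as a direction
# spread across one undifferentiated parameter vector, not stored at
# any single location. (Not how any actual LLM is implemented.)
import numpy as np

rng = np.random.default_rng(0)
dim = 512

params = rng.normal(size=dim)           # one unified set of parameters
fact_direction = rng.normal(size=dim)   # the encoding "training" instils
fact_direction /= np.linalg.norm(fact_direction)

params += 2.0 * fact_direction          # "training" nudges every parameter a little

def readout(p):
    # How strongly the parameters express the fact (a dot-product probe)
    return float(p @ fact_direction)

print(readout(params))        # fact clearly present

damaged = params.copy()
damaged[123] = 0.0            # wipe out any single "address"
print(readout(damaged))       # readout barely changes
```

Zeroing any one parameter barely moves the readout, which is roughly what I mean by there being no concrete symbol to point at.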

In software, an old decomposition approach is IPO — Input-Process-Output. As you point out, it's a general framing that applies to many processes, including many aspects of humans. I do think it applies to brains, although, as with software, it's recursive. Each Input, Process, and Output is itself made of IPOs, which also decompose to IPOs, and so on until you get to the most basic functionality. In brains, even neurons can be decomposed — synapses are their own IPOs (composed of biochemical IPOs). FWIW, I see the brain as more like an old analogue radio, a signal processor, than a numerical information processor.
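
Here's a minimal sketch of that recursion (Python; the names and numbers are my own invention, not a model of real neurons): a composite IPO is just its sub-IPOs chained input to output, bottoming out at leaves with basic behaviour:

```python
# Minimal sketch of recursive IPO (Input-Process-Output) decomposition.
# Composite IPOs chain their parts; leaves apply their own process.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class IPO:
    name: str
    process: Callable[[float], float] = lambda x: x    # leaf behaviour
    parts: List["IPO"] = field(default_factory=list)   # sub-IPOs, if any

    def run(self, signal: float) -> float:
        if self.parts:                    # composite: run each part in turn
            for part in self.parts:
                signal = part.run(signal)
            return signal
        return self.process(signal)       # leaf: most basic functionality

# A "neuron" decomposed into synapse- and soma-level IPOs (toy numbers):
synapse = IPO("synapse", process=lambda x: x * 0.9)      # attenuate
soma = IPO("soma", process=lambda x: max(0.0, x - 0.5))  # threshold
neuron = IPO("neuron", parts=[synapse, soma])
circuit = IPO("circuit", parts=[neuron, neuron])         # IPOs made of IPOs

print(circuit.run(2.0))   # 2.0 -> 1.3 -> ~0.67
```

The point of the design is that run() is the only interface: from the outside, a composite looks exactly like a leaf, which is what makes the decomposition recursive.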

I wonder if the question of the homunculus is another version of the Hard Problem. How can clay have opinions? Why is this IPO system self-aware? I think to the extent a "homunculus" exists, it's the whole brain having that self-awareness.

Very interesting series. Looking forward to whatever is next. Have fun in Sydney!
