Discussion about this post

Mike Smith

Very interesting, Suzi!

It seems like we could say that LLMs have an alternate grounding to the one we do. Their grounding is actually in us, in our social dynamics. So I guess in a way that does mean we could say they have indirect grounding in the world. Although, like old tape recordings copied and passed around, it seems like something gets lost in each layer of indirection.

I'm not clear why sensorimotor processing isn't part of the answer for more direct grounding. In us, referential grounding seems like a result of sensorimotor grounding. If we put an LLM in a robot and made it an agent in the world (which I realize is trivial to say but very non-trivial to implement), then through its sensorimotor mechanisms it would gain more direct grounding. It might even make sense to say it has a world model. This is why I find self-driving cars and other autonomous robots much more interesting than LLMs as steps toward intelligence.

But as I noted on your last post, the main thing to focus on, I think, is the causal chain. Once we do, to me, the philosophical issue goes away. Not that we don't still have an enormous amount of learning to do on the details.

James Cross

Stochastic parrots is what they are, and that is all they will ever be in their current forms.

Representations latch onto reality by imitating it.

