59 Comments

Mike Smith:

When it comes to communicating with aliens, I suspect when we finally do encounter them, we'll discover how impoverished our imagination has been on what forms life can take. We all live in the same universe with the same laws of physics, but those laws probably allow wider variation than we can imagine based on our experience of one biosphere.

But I think the Pioneer plaque isn't aiming to communicate with just any lifeform, but one that has managed to build their own civilization, in some manner close to how we define "civilization." That implies a level of selection to get to that point, one that it's hard to imagine happening without some form of symbolic thought and communication, or at least the ability to recognize significant patterns. But I am aware that even saying that may reveal a poverty of imagination.

I see the grounding of internal representations as being about causal chains: from the represented to the representation, and in the way the representation is used by the system to act toward the represented. To me, that's what Searle overlooks, and it's an issue for both engineered and evolved systems. There's nothing about individual neurons, or even neural circuits, that has meaning. They only have meaning as part of their larger causal context. The same is true for code running in a computer system.

Interesting discussion, as always Suzi!

Suzi Travis:

Thanks, Mike!

This is where I’m at too. I keep circling from “it’s hard to imagine a life form that understands science without symbolic thought” to “maybe I’m just limited in what I can imagine.”

We tend to think along similar lines on this stuff — but just out of curiosity, what’s your take on metarepresentations? Do you see them as a distinct kind of representation, or just a useful way to talk about systems like us?

Mark Slight:

What I want to know is what you and Mike think about the representation "metarepresentation".

Suzi Travis:

Ah! Such a deliciously tangled question.

I’d say it depends a lot on how we define representation. There’s a common tendency to reify aboutness — to treat representations as if they inherently possess aboutness. So if we’re already wary of reifying first-order representations, then meta-representations can look even shakier — like we’re stacking assumptions on top of assumptions.

But if we treat representation more pragmatically — as something we ascribe when it helps explain a system’s behaviour — then meta-representation starts to feel a lot less spooky.

We just have to be careful not to treat representations like little inner signs with magical powers. And we shouldn't multiply them — or ‘meta’ them — unless they earn their explanatory keep.

Mark Slight:

Ah yes, exactly what I think (I think). It's all about the causal chains. I'm a 'functionalist' about symbols and representations, with no distinction between 'inner' and 'outer' representations (I'm curious why you say 'inner' specifically?).

As you say, code is exactly the same. Or an image format or codec or anything else that 'means' something only in virtue of what it 'does' in a particular system.

A counterintuitive consequence of this is, it seems to me, that all of the following are also symbols/representations: spoken language (auditory symbols), pheromones, haircuts and hormones (edit: also DNA, RNA, radio waves). Although with varying meanings in various recipients.

Back to the inner/outer distinction. Photographs, for example, are outer representations from the individual perspective, but inner representations/symbols from a group/societal perspective. Just as one piece of visual imagery can align and organise the whole brain so as to make the individual engage in a particular activity, so a photograph from one person (or a sentence or a gesture) can align and organise a group of people or a country to achieve a specific task.

Random ramblings need to stop now. Thank you.

Mike Smith:

Good ramblings! I think I'm onboard with everything you say here.

On "internal", I originally wrote "mental" there, but then decided I wanted to make it more general for agents overall, so changed it so it could apply to both organisms and machines.

John:

Wonderful. I have no thoughtful contribution to make, but you should know that I was around 15 years old when this plaque started its voyage and it was emblematic then (and echoes down the years now) of a spirit of optimism and faith in technological progress which I and my peers were caught up in. It’s been very nice to revisit it and your use of it to discuss communication. Thank you, Suzi.

Suzi Travis:

Thanks John! There is something beautifully human about pinning a message to a spacecraft and tossing it into the void. I’m glad the essay gave you a moment to revisit that feeling.

margot lasher:

I'm not a scientist and the symbols on the plaque are all meaningless to me except for the sketches of the two humans. But the raised hand of the human, to me, represents 'stay away from me'. That was my first thought, and remains after studying the sketch. I think they should have been much more thoughtful about what they represented.

Drew Raybold:

I agree that culturally-grounded symbols and gestures should probably have been avoided, but there is also a physiological message here, about the number of fingers on a normal human hand. The count is five, no more, no less. Five is the number thou shalt count, and the number of the counting shall be five. Six shalt thou not count, neither count thou four, excepting that thou then proceed to five. Seven is right out... clearly, the message here is a remarkably prescient one, intended for generative AIs!

Suzi Travis:

Oh! I love Monty Python! This comment made my day :)

Suzi Travis:

Thanks for sharing this Margot! It’s such an interesting reaction! I see it like that too now. Actually, it's difficult to see it any other way.

While researching for this essay, I came across some of the controversy that surrounded the plaque. Apparently, a lot of people were upset that the man and woman were depicted naked. Some called it “sending smut to the stars.” Others, of course, complained that the images weren’t explicit enough to be useful. Most of the debate seemed less about aliens and more about keeping things “tasteful” for the humans here on Earth.

The whole thing seems a good example of how much our communication isn’t really about trying to be factually understood. Like the raised hand: even something that might be meant to be universal can land in a completely different way, not because of facts, but because of the assumptions, emotions, and culture we bring to it.

Jim Owens:

Ah, but how would they be clothed? Business suits? T-shirts and Love beads? Given the rest of the images, I think perhaps lab coats.

The symbols reveal what a culture, or a sub-culture, cares about or takes to be important -- often revealingly so. And I wouldn't be surprised if someone called the depiction of man and woman sexist. Because it is.

Suzi Travis:

Oh, yes, there was much criticism of that! There were two main criticisms: that it reinforces traditional gender roles, and that the woman’s anatomy was more modestly depicted than the man’s, which reflects our cultural discomfort with female sexuality.

Jim Owens:

The message of the raised hand is "If you drop by, you'll want to talk to the man, he's in charge." Meanwhile, the woman's whole physical attitude says "I'm with him." She's not even making eye contact. Today it's painfully obvious; back then, apparently, not so much.

James Cross:

That AI can view brain patterns and determine something about thoughts or presented visual image tells me that the patterns are significant. AI still doesn't do this easily or without a lot of training, but that it can do it at all would suggest there is a relationship between pattern and mental activity. Of course, the relationships could just be the connectome.

Any thoughts on this? They claim to have identified dynamic neural patterns, grounded in higher-dimensional shapes, that correlated across animals performing the same task:

https://www.nature.com/articles/s41592-024-02582-2

Suzi Travis:

Love this line of thought! Yes, the fact that decoders can pull recognisable images out of fMRI tells us the activity patterns matter.

Several labs have shown that deep models can reconstruct what a participant was seeing (and, more crudely, what they were imagining) from fMRI.

And, yes, that requires a lot of training, and not just any hours of it: each decoder has to be trained on hours of brain data from the same person. Accuracy plunges if you switch to a new subject, modality, or task. And it doesn’t read “thoughts” in the general sense: so far we can infer low-level visual content, broad semantic categories, or crude intentions (e.g., hand-movement direction), not rich inner narratives.

I think we are a long way from generic ‘thought readers’.

I think the connectome could help, but I don't think it will get us too far. Structural wiring constrains what patterns are possible, but the decoders being built rely on dynamic population activity (the “neural code” unfolding in time), not just the anatomical structure.

This is what they did in that Nature paper.

It shows that when different animals solve the same task, their population dynamics trace similar low-dimensional shapes inside a much bigger neural state-space. That’s about shared computation, not just shared wiring. This is a super cool finding. But the recordings are tiny snapshots of cortex or hippocampus, and the tasks are tightly controlled, so I think we should be careful not to over-extrapolate these findings. It’s early days, but a fascinating step. It fits with my hunch that patterns, not just the connectome, carry the story.
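For readers who like concrete pictures, here is a toy numerical sketch (entirely synthetic data, not from the paper) of what "low-dimensional shapes inside a much bigger neural state-space" means: we simulate 100 neurons whose activity is secretly driven by a 2-D latent ring, then use an SVD (the engine behind PCA) to confirm that the 100-dimensional recording really lives on a roughly 2-D surface.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "population activity": 100 neurons driven by a 2-D latent ring.
t = np.linspace(0, 2 * np.pi, 500)
latent = np.stack([np.cos(t), np.sin(t)], axis=1)   # (500, 2): the hidden ring
mixing = rng.normal(size=(2, 100))                  # latent -> neuron weights
activity = latent @ mixing + 0.1 * rng.normal(size=(500, 100))

# PCA via SVD: if the 100-D activity really lives on a ~2-D shape,
# the top two components should capture almost all the variance.
centered = activity - activity.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
var_explained = (s ** 2) / (s ** 2).sum()
print(f"variance captured by top 2 PCs: {var_explained[:2].sum():.2%}")
```

With the noise level used here, the top two components account for nearly all the variance, which is the toy analogue of "a low-dimensional shape in a much bigger state-space". Real neural data is far messier, but the logic of the analysis is the same.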

James Cross:

I don't think the connectome by itself explains it either, but it is essential to what the brain does. The complex patterns of neurons that arise from the connectome, what I call the "super-connectome," show fluidic dynamics and turbulence much like smoke from a grill, starling murmuration, or even people in large crowds.

Each unit of the system acts based on information and forces from other units but the process generates complex structures from the collective action that need to be modeled in higher dimensions. The circular feedback from the super-connectome and the connectome may be the critical dynamic of conscious states.

We are far off from understanding this "language" but we might soon have a few "phonemes."

Suzi Travis:

Ah, yes! Once the neural “traffic” is moving, you get those flock-like patterns: population waves, critical cascades, even turbulence signatures that some labs have likened to atmospheric flows.

Your comment reminded me of something I've been thinking about lately: feedback loops.

You suggest a circular dance between the fixed connectome and the emergent super-connectome. Do you picture that as slow structural rewiring nudged by fast dynamics (like Hebbian updates)? Or more as real-time gain-control, where the same hardware routes signals differently on the fly? I wonder if the answer is: both, but at different timescales.

James Cross:

Not at all sure how to answer the questions. But thanks for making me think about them.

In the case of particles, birds, or people, the unit acts on information from surrounding units, then moves towards or away from other units. In the case of neurons, they can't move but they can increase or decrease connectivity/influence based on information from other neurons. They can, in effect, move nearer or farther in relation to other neurons. This seems like the essence of plasticity. The causality in the complexities of the structure may mean that nearer and farther can't be treated simply spatially. In higher dimensions, objects could be distant in three-dimensional space, but close in higher dimensions.

Max More:

Along the same lines as eliminativism (for which I have considerable sympathy) but more moderate, there is revisionary materialism, in which some mental terms are abolished because they don't reflect any real entities, while others are partly or largely reduced to a real thing/pattern.

Wild Pacific:

The language aspect of this is important. However, the topic gets more interesting if we really detach true language (“dog”, hand gestures), which is all syntax, as you said, from common “hardware” experiences.

There is hardware, after all.

If we hurt someone, as punishment or attack, we communicate. A lot! Also if we please someone physically, especially certain bits ☺️ we may expect a positive connection.

Animals figured this one out and developed pain/pleasure systems before advanced consciousness allowed us to communicate with constructed syntax. I believe life itself was a first step in this stimuli-based activity.

The plaque was an attempt to find something universal and hardware-adjacent. Scientists have considered prime numbers for that role too. Maybe we will discover more as our new tech develops.

Suzi Travis:

Ah! Great comment. It raises such an interesting question:

Are there forms of meaning that we have simply because we have the kind of hardware (or wetware) we do — and that other sensing systems might also have, just by virtue of having similar hardware (wetware) and being embedded in the same physical world?

That idea — that some kinds of meaning might be grounded not in shared language, but in shared conditions for existing — is one that keeps coming up in these sorts of discussions. Plenty to dig into here.

Wild Pacific:

Yes, I believe that there is somewhat clean level where we can call things hardware.

One thing that I believe is a beautiful “hardware” story:

1. Star pressure produces heavier and heavier elements until the core becomes iron. Iron is the last atom there, as the energy to break it is more than it would release.

2. Stars with iron cores explode, they are fun like that. 💥

3. The nature of the iron atom is that it can carry or drop oxygen atoms easily, so we have a quartet of iron atoms in each hemoglobin molecule.

If this could be diagrammed well, this hardware language would explain a very fine detail of advanced life.

This requires more than a postcard, I suppose.

Andrew Sniderman 🕷️:

I too was fascinated by this when I was a kid. Must’ve been some PR around it. It was when I read The Three-Body Problem that I made the connection that we shouldn’t tell the angry aliens where we live. But to read your version, we don’t need to worry - they won’t have any idea what we’re on about :)

Suzi Travis:

If we take seriously how hard it is to interpret the plaque, maybe we don’t need to worry so much. Any truly angry aliens might spend centuries just trying to figure it out.

Or who knows — maybe symbolic understanding is more universal than we think, and we’ve just accidentally sent them their version of a declaration of war. 🤔

Mark Slight:

What's this all about?

Jokes aside, great piece!

I haven't read the intentional stance or anything else about aboutness but this grounding problem naively sounds to me an awful lot like the Hard Problem or the problem of causal chains for real agency. Like, for those who think it's a problem: what kind of grounding are you expecting? Magical links between symbols and what they symbolise? Dogs are not even a 'thing' at a fundamental level. Read Real Patterns, and get over it. Maybe I'm a 'functionalist' about aboutness, if that makes sense (there's probably another word for this position).

Anyway, very interesting stuff. Looking forward to part II. I'm convinced it really will be about something, with no good reason to question that.

Suzi Travis:

Haha—love it!

I can see why the Hard Problem and the grounding problem look similar. The Hard Problem asks how physical processes give rise to subjective experience. The grounding problem, by contrast, asks how physical states — marks, spikes, symbols — come to be about something. They feel alike because both present what look like explanatory gaps.

But many people would say those gaps only seem mysterious if we hanker after a special inner glow for qualia or an intrinsic semantic glue for symbols. Drop those demands and the puzzles lose their spooky aura.

So yes, positing a magical link between symbols and what they symbolise would add unnecessary metaphysics — and most naturalistic accounts of grounding work precisely by denying that sort of magic. Dennett’s Real Patterns is a classic statement of that view, but it’s not the only way around the problem.

And you’re right: being a “functionalist about aboutness” is very much a thing. It often goes by names like pragmatist, use-theoretic, or interpretational accounts of content — all squarely in the Dennettian camp.

Mark Slight:

Great! Very clarifying. Thank you. Here's hoping to see other pragmatist ways around it in your next post! I need to expand my simple-minded Dennett-fanboyism.

This post, and your previous one, have made me fascinated by the distinction between representation and symbol. I'm struggling to find a clear one, and perhaps there isn't one. This also makes me think of the fascinating difficulty of defining "information". A vague suggestion I read somewhere is "a difference that makes a difference", and I like that. Anyway, again, thanks!!

Suzi Travis:

Thanks, Mark!

If you’re drawn to a computational functionalist view (which I think you are), then it’s hard to go past Dennett. He’s masterful at sidestepping metaphysical tangles.

You’re spot on about the slipperiness of terms like representation. I’d say a symbol is a narrower kind of representation. All symbols are representations, but not all representations are symbols. Symbols are usually discrete and rule-based — like words or numbers — while representations can be broader, including images, patterns, high-dimensional vectors, or neural activity.

Representation is often thought of as similar to ideas, thoughts, and concepts. Each has its historical moment — Hume had ideas, Descartes had thoughts, Kant had concepts, and today’s cognitive scientists favour representations. But they’re often used interchangeably, which makes it easy to miss when we’re switching frames mid-discussion.

And yes, I like that definition too. “A difference that makes a difference”. It isn’t perfect, but I think it is helpful.

Mark Slight:

Thank you for this!

I think that's how I tend to think about symbols and representations too. But it seems to me the distinction is very much a perspectival thing. For example, the same letter symbol can be represented in many different ways - visually in different fonts and in different media, and we can even use letter symbols without any visuals at all - we can talk about them with verbal representations of visual symbols. Smells and pheromones are not really different, except that they are not so integrated in our 'conceptual mesh' as in other animals.

Much the same story can be told about elementary particles - atoms and molecules - nucleotides - DNA - RNA - proteins - organelles - cells - organs - organisms - families - societies. Or real dogs - internal and external human-made representations of dogs, and LLM representations of dogs.

It's arrows of aboutness all the way down!

Just thinking aloud. Thanks again!

Suzi Travis:

Hey Mark!

Yes, good point — perspectival, definitely. But not arbitrarily so. Function and context matter.

And I had to smile at “arrows of aboutness all the way down” — great newsletter name, btw.

I’m totally with you, so long as by aboutness we mean something like a pattern that helps a system do something useful — not some kind of intrinsic meaning glow. The first kind keeps us grounded. The second one drops us into infinite regress pretty fast.

Mark Slight:

Thanks. Struggling with newsletter name. Trying on a new one.

Jim Owens:

What if, instead of starting with physical processes and asking where consciousness comes from, or starting with symbols and asking where meaning comes from, we started with consciousness and meaning, and asked how representation (physical or symbolic) arises? Would the gap be such a problem then?

Suzi Travis:

Interesting idea! Can I ask a question back?

If we start with consciousness and meaning, are we imagining those as separate from physical processes? Or as somehow prior to them? Or is the idea that only consciousness exists — that there is no physical?

I ask because it seems like if we keep consciousness and the physical as two distinct kinds of things, flipping the direction of explanation might not get us around the problem — it might just reframe it.

Jim Owens:

I think "prior to them" is closest. If physical processes are understood as relationships (a not unpopular view these days), it's a short step to think of those relationships in terms of intentionality (in the philosophical sense of the word). I would say a "short and easy" step, but it's not that easy if we're assuming the universe starts without intentionality and only develops it later (more or less what you're talking about here). On that view of the universe it's "counter-intuitive," to say the least. But maybe we should look at that assumption.

Mark Slight:

Inviting myself in here, because why not, you don't have to reply :)

Instinctively I like various forms of panpsychism and idealism; I just don't think either of them is coherent.

First, what does it even 'mean' that the universe starts with intentionality? What is 'universe' here? Is it a place? And what is 'intentionality', or 'mind', in such a proposal? I have hardly ever seen even an attempt at explaining what is meant by that. I have some sense of what 'mind' means when I ascribe it to humans and animals, but that is based on what we do, how we talk, how we are agents with internal modelling. I can no more imagine 'mind' as fundamental, when I think about it, than I can imagine 'human-ness' or 'humour' as fundamental. That's gonna provoke a "you just have a lack of imagination" response from some, but well, yeah, then at least attempt explaining it!

Second, I think you need some sort of proposal for what it would mean that intentionality is fundamental and then symbols and representations emerge somehow.

Structural realism handles these problems beautifully. It doesn't explain why there is physics and matter and why the laws are what they are. But they only need that single starting point, and it can be described mathematically. It's clearly definable. And then we can make sense of how and why complex agents invent words such as 'about' and 'consciousness'.

Jim Owens:

I sincerely appreciate not having to reply -- I don't know how people like Suzi and Mike Smith do it! -- but I'll have a go anyway.

"Universe" just refers to whatever there is, assuming there is something rather than nothing. Intentionality implies something that notices, something that is noticed, and their inextricability.

Is there intentionality in the universe? I think so, otherwise we can't have this conversation.

Where does it come from? I don't think structural realism handles this problem beautifully. It doesn't do a very good job of explaining where agents or intentionality come from. They call that the "hard problem."

Now, the hard problem is a bit like Jastrow's duck-rabbit: some see a duck, some see a rabbit. You can see the universe one way, as having relationships but no agency or intentionality until one day they appear (or an illusion of them appears); or you can see it another way, as having relationships with intentionality from the start. You can't really get below either view to "explain" it; you can only acknowledge which one you're going to take for granted, and then consider what it helps explain. It's like flipping a switch of imagination. (Charles Taylor refers to the "imaginary" of a culture.)

For an account of how representations emerge from intentionality, or "prehension," you could look to the work of Alfred North Whitehead. It's an extremely rough and rather lonely start, but it's a start. Today, with the structural turn, people are more open to an approach that recognizes the primal significance of relationships. But significance without intentionality is problematic -- so let's include it in the mix, I say.

Michael Pingleton:

These are some interesting musings on the fundamental concepts of intergalactic communication. We humans started with rudimentary symbols, then evolved our way to more complex orthographies such as alphabets, abugidas, and abjads. I do find it interesting how we tend to fall back to prior stages of that evolution to enable communication with people of other demographics. For example, we tend to use symbols instead of words for some forms of signage.

In the case of talking to aliens, we have no idea what they're like. Therefore, we might need to assume a worst-case-scenario in this regard, so we fall back to our earliest stages of communication by using such a plaque. The plaque does seem somewhat caveman-esque to me. However, even the idea of the plaque might be a bit optimistic; the grounding problem might present a challenge here.

Nice work here; I look forward to your next essay about the grounding problem!

Suzi Travis:

Thanks, Michael!

Wyrd Smythe:

Ah, cliff-hanger ending. Just as it was getting good! 😄

Other comments have touched on excellent points, so I'll just tender some random thoughts that occurred to me while reading…

The odds of any putative alien species finding the Pioneer probes (or the Voyager probes with their Golden Records) are about as close to nil as can be. Space isn't just big or even huge, it's vast beyond comprehension. Pioneer 10 is ~136 AU; Pioneer 11 is ~115 AU. Voyager 1 is the most distant at ~167 AU, and Voyager 2 is ~140 AU. For reference, the Oort cloud surrounding the Solar system extends from 1,000 AU to over 100,000 AU. So, all four probes have barely left the neighborhood. They have many millions of years before they'll reach the closest stars.
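The scale claim is easy to put numbers on. A quick sketch using the AU figures quoted above (all approximate, and the Oort cloud boundaries are themselves rough estimates):

```python
# Scale check: probe distances vs. the Oort cloud (all values approximate).
AU_PER_LY = 63_241                       # astronomical units per light-year
OORT_INNER_AU, OORT_OUTER_AU = 1_000, 100_000

probes_au = {"Pioneer 10": 136, "Pioneer 11": 115,
             "Voyager 1": 167, "Voyager 2": 140}

for name, au in probes_au.items():
    frac = au / OORT_INNER_AU
    print(f"{name}: {au} AU = {frac:.1%} of the way to the inner Oort cloud")

# Even the outer edge of the Oort cloud is under two light-years out:
print(f"Outer Oort cloud: ~{OORT_OUTER_AU / AU_PER_LY:.2f} light-years")
```

Even the most distant probe is still in the low teens of percent of the way to the *inner* edge of the Oort cloud, which is the "barely left the neighborhood" point made above.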

One thing about the Solar system diagram: Saturn's rings aren't forever. They're thought to be roughly 100 million years old (so very recent in terms of the Solar system) and may only last another 100 million years. I agree with what others have said about the raised hand on the humanoid male. (My question is why only the male?) There's also the fact that just because life on Earth comes in two sexes, life evolved on other worlds wouldn't necessarily find the concept meaningful.

I do agree with what's been said: any spacefaring species capable of finding these probes (in many millions of years) would almost certainly need a language capable of accumulating the necessary knowledge and of communicating between the workers involved in space projects.

Very true about advertising our presence! There is the "Dark Forest" notion that smart species do not announce their presence. But that assumes a galaxy teeming with, not just life, but hostile life. (The "Three-Body" trilogy by Liu Cixin is all about this.) But there's a simple way to consider the odds of intelligent life that puts them at ~10²⁴ to 1. Compare this to the 10¹¹ stars in this galaxy or the 10²² stars in the visible universe, and it seems that we might be rare indeed. Odds are that we're alone in this galaxy.

Dave Slate:

If there is intelligent life out there, I think they are much more likely to detect our existence from a century of radio and TV transmissions (as in the film "Contact") than from encountering any of the Pioneer or Voyager probes.

Wyrd Smythe:

That opening scene in “Contact” is so cool. Love the movie. (And the book even more.)

Those aliens will need either to be very close or to have incredibly sensitive sensors because of the inverse-square law. Signal strength drops with the inverse square of distance, so by the time the signal leaves the neighborhood of the Solar system, it’s all but indistinguishable from the background noise.

But a big enough, sensitive enough (beyond anything we can build) antenna might pick up a signal. Unlike the background noise, our broadcasts have plenty of structure to them, so might be sensed out to a few hundred LY or so.

Our space probes, though, after the millions of years it’ll take for them to get anywhere will be indistinguishable from rocks. And too small to show up on anything looking for dangerous rocks. I really think it’s a pipe dream to think they’ll ever amount to anything aliens will ever find.

Eric Borg:

I’ve got to agree with Wyrd. The prospect of any advanced civilization ever coming across us is vanishingly small. Even if all planets were just as teeming with life as ours happens to be, the odds would still be statistically horrible. It took 3.7 billion years of life for space ships to emerge on our planet, and that may soon end given the rise of our ability to destroy ourselves. The next civilized technological species here might not evolve for another million years, or, if we set back evolution enough, maybe a billion. So even for planets like ours, the technology that we display should be quite rare.

Regardless of that, imagine what we’d think if we encountered an unmanned spaceship from elsewhere. Mainly we’d be impressed that this must have been built by a civilized technological society. On the markings, sure we’d presume it was symbolic and so make guesses about meaning. But we’d also consider these aliens quite daft for not instead sending an assortment of high resolution photos about them. Are photos symbols? Since I can’t live in a photo of my house though I can live in my house itself, they must be representations in at least this sense. Just very good ones.

On the mind working by creating representations, I’m not surprised by such speculation given how anthropocentric we tend to be. Notice however that the source of our representations, or human language, is thought to have evolved within the last couple hundred thousand years. That would seem to mean that mind emerged very recently, and since no other animals are known to have developed languages, our minds should be unique. Instead of mind working by means of representations, the opposite seems to make more sense: representations working by means of mind. And how should mind work? I consider it to be a value-driven element of the brain.

Jim Owens:

If I recall correctly, we did send a phonograph record -- whoops, sorry, that was the Voyager probes.

Eric Borg:

Ah, well, at least they started trying a bit harder to indeed communicate (should the possibility arise, not that it will). So the thinking must have been that a phonograph record could endure a long trip, and that aliens would grasp how to play it? Ha! Photographs ought to be informative for any being that detects light in the range that we do, or that could at least adapt such potential information to their own radiation standard. I guess the trick would be embedding photos in something that preserves them for millions of years.

To be clear, I have no doubt that there have been and will be countless civilizations at least as advanced as ours. This is mandated by the enormity of spacetime. But all this sci-fi crap about different civilizations meeting each other, or even having evidence of each other, just isn’t evidence-based. Yes, I’ve noticed Wyrd Smythe speak up about this before. But do the vast majority of people in the know keep silent because they don’t want to effectively be the jerks who tell everyone “Santa Claus doesn’t exist”? I don’t know. Apparently there are people in academia called “astrobiologists”, tasked with figuring out what it takes for life to exist, either here or elsewhere. I wonder what percentage of them effectively “believe in Santa”?

Expand full comment
Dave Slate's avatar

Eric Borg wrote:

"On the mind working by creating representations, I'm not surprised by such speculation given how anthropocentric we tend to be. Notice however that the source of our representations, or human language, is thought to have evolved within the last couple hundred thousand years. That would seem to mean that mind emerged very recently, and since no other animals are known to have developed languages, our minds should be unique."

First, a caveat about my comment: my formal education was in physics, and my work experience has been mostly in computer programming. I have little formal training in neuroscience, biology, or philosophy. That said, your comment seems a bit too anthropocentric. Human language does make us somewhat different from other animals, but not that different, and I think that our minds are not so unlike those of the great apes or even cats and dogs. If they were, we would not be able to communicate with them as well as we do. I suspect that their minds also make representations of the phenomena they encounter, even if they're not expressed in human language (the uniqueness of which has of course been vigorously debated by Noam Chomsky and others).

Many decades ago my family had a couple of tom cats. We got them their vaccines but never had them neutered, so they spent a large fraction of their time outdoors, roaming around the neighborhood and probably helping to increase the cat population. One, who was large, friendly (at least with us) and intelligent (based on his ability to solve everyday life problems), stuck with us for about ten years. Cat owners have sometimes reported that their cats tried to "talk" to them in an imitation of human speech. It's easy to chalk up such claims to a natural tendency of people to anthropomorphize their pets, but one day our cat demonstrated the same kind of behavior and convinced me that many of those claims were probably valid. As I recall, my parents and I were sitting side by side in our living room. Our cat came up to us, sat down facing us as if he were about to address an audience, and began to yammer away in what really sounded like an attempt to "speak our language", even though it was of course unintelligible. The vocal sounds he made were quite different from his usual vocabulary of meows. I really believe that he was making a futile attempt to communicate with us in the manner that he had long observed that we humans did with each other. Eventually he gave up and wandered off, and I don't remember him ever repeating this behavior.

Expand full comment
Eric Borg's avatar

Dave, I fully agree that cats, and especially dogs, are often smart enough both to know that we speak with each other and to sometimes even participate. I think it’s pretty standard in many dog families that one must never say the word “vet”, or even spell it out, since this bums them out so much. Anyway, my point was that only we evolved such that language would become an essential element of our function. Therefore we tend to think about things in highly representational ways. So regarding the notion that “mind works by means of representations”, I guess my skepticism kicked in, since it shouldn’t make sense that millions of years of minds would function on the basis of a much later-evolved human tool. But then if one merely observes that no creature can possibly “see reality in itself”, and so all vision and other senses can only provide representations rather than what actually exists, well yes, I can’t dispute that.

Anyway, since you’ve mentioned your physics background, and obviously also enjoy the stuff here, I wonder if you’d be interested in my own project? As a college kid I was utterly unimpressed by mental and behavioral sciences, including philosophy, even though I was incredibly interested in the associated topics. Though these fields seemed pathetic to me, fortunately physics gave me something that I could respect. My goal now is to potentially help the fields that I still consider most interesting to also become respectable.

Yesterday I did some things that I think ought to make my ideas far more accessible than they were, essentially by means of AI podcasting. So if you get a few minutes, I wonder if you’d give this new material a try? At the end of the following post, though before the comments, I’ve added four audio files. If it’s simply not your cup of tea then obviously don’t waste your time. But maybe it is…

https://eborg760.substack.com/p/post-3-the-magic-of-computational

Expand full comment
Andra Keay's avatar

Another very interesting essay!

When I was a child, my dream was to go to space and learn how to communicate with aliens. Over the years, I replaced that dream with the dream of understanding how our communication technologies interact with us. Which leads directly to working with robots - ironically the most 'human-like' of our technologies, while also being the most alien. Fun times!

Expand full comment
Suzi Travis's avatar

Oh! I’ve always found robotics fascinating — and I have quite a few friends working in the field. It's such an exciting area to follow.

Expand full comment
Gabriel Robartes's avatar

I see your point. It's difficult enough to untangle ancient cuneiform, and that's transmitted from the same species, with a comprehensive set of biological needs and drives in common!

Expand full comment