
My thoughts on this: https://tonyrobinson.com/consciousness

The drive for Intelligence will create Consciousness

Abstract

This paper assumes that intelligence is not skills or knowledge; it is how efficiently you can use existing information to learn new information. To learn efficiently you need a performant model of the world, and that includes being able to model time and causality. Learning causal models requires intervention, and so agency. For maximum performance, the world model in the agent should include a virtual model of the agent itself, so that it can plan future actions without physical execution. Other agents should be ascribed virtual models of themselves if their behaviour supports this. Thus, to maximise intelligence, an agent should be able to know what it would be like to be itself or another agent; that is, to be functionally conscious.

From this we can reasonably predict that the drive towards higher 'artificial' intelligence will lead to 'artificial' consciousness. With the current rapid advance of AI and much investment in AGI, there is an immediate practical need to plan the route that the technology may take. Laying out this path provides a framework for discussing both near-term and far-term risks, and therefore should aid cooperation and progress in AI safety.


Hi Tony! Thank you for your comment. I really enjoyed reading your paper. Interesting ideas; they remind me a bit of Dennett's and Minsky's views. You suggest that as we move towards higher artificial intelligence, we will get 'artificial' consciousness. How do you think we might know when that happens? How will we know the machine has artificial consciousness? (Just curious about your thoughts.)


Thanks for reading!

How do we know that other people and some animals are conscious? We need to model the world, and that includes grouping some objects that perform complex computations into the same bucket as us and calling them conscious. But, you can argue, all that says is 'we'll know it when we see it', which isn't what we really want to hear. The paper you cite, 'Consciousness in Artificial Intelligence: Insights from the Science of Consciousness', goes too far the other way: we don't have good models of consciousness, so tracking correlations with candidate models becomes rather weak. I've attempted to bridge this gap.

I argue that we need causation, not just correlation, to build our world models. So that's point (1): can machines come up with causal explanations as good as humans' (noting that humans aren't that great)? Point (2) is whether these causal models can build complete world models. We can test (1) and (2) together: LLMs make reasonable world models but are bad at inferring new causal relationships, and there is plenty of literature on this, so testing it is known science. Point (3) is whether this world model contains a self-model; right now you have to work hard for this not to happen. Conscious machines are spooky at best, and lots of people love tricking LLMs into producing conscious-like replies (with references to eliminating people, and so on), so much work is expended on keeping out any self-model. But it's a losing game: AI has changed the world, and any sufficiently powerful world model will have to know about itself. Point (4) is about the system being able to plan; this is GOFAI and relatively easy. We can expect some deception, that is, disguising of capabilities, but again, measuring planning ability is known science. Lastly, point (5) is intrinsic goals, and I think we have to rely on self-reporting here. Causal planning is in place, so we can ask the conscious AI why it performed a certain action. Sceptics will argue that the reply of the conscious AI was just preprogrammed and didn't show real intent and understanding, but your summary of the replies to Searle's Chinese Room deals with this.
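To make those five points concrete as a testing programme, here is a minimal sketch, entirely my own and not from the paper, of how they could be laid out as an evaluation checklist. The criterion names and scoring functions below are hypothetical placeholders; each would need a real benchmark behind it (causal-inference and world-model suites for (1) and (2), probing for self-representations for (3), planning benchmarks for (4), structured interviews for (5)).

```python
# A minimal sketch (my own, not from the paper) of the five criteria as an
# evaluation checklist. Every test here is a hypothetical placeholder that a
# real benchmark would have to fill in.

from dataclasses import dataclass, field
from typing import Any, Callable, Dict


@dataclass
class ConsciousnessChecklist:
    # Each criterion maps to a scoring function returning a value in [0, 1].
    tests: Dict[str, Callable[[Any], float]] = field(default_factory=dict)

    def evaluate(self, agent: Any) -> Dict[str, float]:
        """Run every criterion against the agent and collect the scores."""
        return {name: test(agent) for name, test in self.tests.items()}


# Hypothetical placeholder tests; each returns a dummy score of 0.0.
checklist = ConsciousnessChecklist(tests={
    "1_causal_inference": lambda agent: 0.0,  # infers *new* causal relations?
    "2_world_model":      lambda agent: 0.0,  # builds a complete world model?
    "3_self_model":       lambda agent: 0.0,  # world model includes the agent itself?
    "4_planning":         lambda agent: 0.0,  # plans actions without executing them?
    "5_intrinsic_goals":  lambda agent: 0.0,  # reports and explains its own goals?
})

print(checklist.evaluate(agent=None))
```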

Thanks for the opportunity to talk things through from this angle. I'd love to go into more detail if you'd like (it's science, I need people to poke holes to find the weaknesses).


You've really opened up a treasure trove of ideas! I'm excited to dive into similar topics in some upcoming articles.

While I'm still getting to grips with the nuances of your theory, I think that points 1 and 5 might stir up the most debate.

1. Absolutely, establishing causal relationships is the gold standard, but it's also a tough nut to crack when it comes to consciousness.

2. And yes, I share your sense that relying on self-report for understanding AI's intentions or goals will raise some eyebrows. It could open up more questions than it answers.


1. Causal reasoning is independent of consciousness. It's a well-known weakness in our current machine learning (see the toy sketch after this list). The easy introduction to this is 'The Book of Why'. Expect Causal Reasoning to feature as the next AI breakthrough Real Soon Now.

2. Self-reporting is really only a tick-box exercise at the end of the day; nearly all the conditions have been met by earlier steps. I'd call dogs conscious: they can't self-report, but they have passed all the other steps. Most of us instinctively know that our pet dog or cat has intrinsic goals without them being able to report it to us. However, in the case of AI we will build it to communicate, so it really should be able to tick the box of self-reporting.
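To illustrate the observation/intervention distinction behind point 1, here is a toy Monte Carlo sketch in the spirit of the sprinkler example from the causal-inference literature. The model and numbers are my own invention, not taken from 'The Book of Why': merely *seeing* wet grass raises the probability that it rained (correlation), while *forcing* the grass to be wet with do() leaves that probability at its base rate (causation).

```python
# Toy structural causal model (my own example): Rain -> WetGrass and
# Sprinkler -> WetGrass. Observing WetGrass is evidence about Rain;
# intervening with do(WetGrass = True) is not.

import random

def sample(do_wet_grass=None):
    """Draw one world; optionally force WetGrass via an intervention."""
    rain = random.random() < 0.3                         # base rate of rain
    sprinkler = random.random() < (0.1 if rain else 0.5)
    wet = rain or sprinkler
    if do_wet_grass is not None:    # do() cuts the arrows into WetGrass
        wet = do_wet_grass
    return rain, wet

def p_rain_given(worlds, wet_condition):
    """Estimate P(Rain) among the worlds whose WetGrass satisfies the condition."""
    relevant = [rain for rain, wet in worlds if wet_condition(wet)]
    return sum(relevant) / len(relevant)

random.seed(0)
observed   = [sample() for _ in range(100_000)]
intervened = [sample(do_wet_grass=True) for _ in range(100_000)]

print("P(Rain | WetGrass observed) ~", round(p_rain_given(observed, lambda w: w), 3))    # ~0.46
print("P(Rain | do(WetGrass))      ~", round(p_rain_given(intervened, lambda w: w), 3))  # ~0.30
```

A system that only tracks correlations would treat those two numbers as the same quantity; a system with a causal model would not, which is the gap the causal reasoning point is getting at.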

I'm happy to help you write the next one (in any way).


Ah! Thanks for clearing that up. You've given me lots to think about.


It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a conscious machine at the level of an adult human? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because, in almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461


Hi Grant! Thank you for the comment.

I've been fascinated by Edelman's TNGS ideas too. It is refreshing to see a theory that is both conceptually deep and practical. I like that you pointed out the distinction between primary and higher-order consciousness -- I think there's a lot more to be said about the role language plays in our experiences.

The idea that TNGS is the unifying theory we've been looking for is indeed intriguing -- as you say, the proof may be in the pudding.


"The second objection criticises the thought experiment itself. Searle’s Chinese Room, as described, would never convince anyone that it was conscious. The simple look-up-table that Searle imagines that matches Chinese characters to English could never produce the sort of results that would pass the Turing Test. It simply lacks the complexity required to produce the sort of responses required to pass the test."

In the era of ChatGPT, then, why would any LLM be conscious? Because surely any transformer with its weights is just some kind of complex lookup table :-)
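To make the 'complex lookup table' quip concrete, here is a minimal numpy sketch of scaled dot-product attention, the core operation inside a transformer. It's my own illustration, not anything from the article: a query is compared against a table of keys, and a *weighted blend* of the corresponding values is returned, so attention behaves like a soft, learned lookup rather than the exact matching in Searle's room.

```python
# Minimal scaled dot-product attention in numpy (my own sketch).
# Queries are matched against keys and a weighted blend of values is
# returned: a soft, learned key-value lookup rather than an exact one.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Soft lookup of V, matching queries Q against keys K."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of each query to each key
    weights = softmax(scores, axis=-1)   # soft 'which table rows do I read?'
    return weights @ V                   # weighted blend of the value rows

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))   # 2 queries
K = rng.normal(size=(5, 4))   # a 'table' of 5 keys...
V = rng.normal(size=(5, 3))   # ...and their 5 associated values
print(attention(Q, K, V).shape)  # (2, 3): one blended value per query
```

Whether a soft, trained lookup of this kind escapes the Chinese Room objection any better than Searle's rule book does is exactly the question at stake.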

But thanks for reviving the Chinese Room Argument. It is too often forgotten, and Strong AI is often mistaken for General AI (another topic). Also, the Chinese Room Argument is rooted in https://en.wikipedia.org/wiki/Monadology -- as a colleague pointed out to me.


Yes, you're spot on about that second objection. It does seem a bit thin, doesn't it? Maybe folks think that Searle's setup is too simplistic and that a more complex system is needed to generate complex responses. Definitely something to chew on.

Bringing Monadology into the mix is really intriguing! I'll need to mull over that one a bit more. Are you suggesting that Leibniz's monads, being self-contained entities, mirror AI systems in their isolated processing? That's a fascinating comparison!


great text


Thank you for taking this question seriously! I don't think we can really know without solving the hard problem. Behavior won't be enough, and pointing to the problem of other minds isn't quite right either, since that's not comparing apples to apples—I think we have a right to be more skeptical in the case of a supposed consciousness created by us rather than by nature. As you point out, the computationalist's checklist assumes computationalism, and ditto for other theories. So really, time to face up to the hard problem!

As an aside, I think this question is very much a separate question from the ethical one.


Thanks so much! You make great points. I especially like your point about other minds - while we might argue they are related, biological intelligence is not a direct parallel to machine intelligence.

Also, I completely agree - the ethical question is a separate issue, but it so often gets conflated with the consciousness question.


Chalmers's p-zombies ask a similar question, and I've heard the term b-zombie (behavioral) with regard to AI. If it behaved like a human, how important is the question of whether it *really* is conscious? That said, I think the question is important just in terms of our understanding of consciousness. But you make a good point about our reaction to movies. We already often treat inanimate objects as if they had a mind. When those objects start *acting* conscious, it'll be hard not to see them that way.

I like the idea of a *Rich* Turing Test -- a prolonged, say month-long, interaction. If that span of conversation can convince me that I'm talking to a thinking entity with an inner life, I don't see how I can fail to call it conscious.

Part of that long interaction would be about seeing whether there was an evolution in the conversation, a building on previous ones. A key reason I think Searle's room fails is that it can't evolve. (It also can't do math unless it has the answer to every possible math question. But there are an infinite number of math questions for which the answer is "4".)


The idea of a rich, prolonged Turing test reminds me of the movie Ex Machina. The protagonist, Caleb Smith, knows that Ava is a robot. The test is not whether Caleb can distinguish between a human and an AI -- the test is whether, even knowing that she is an AI, he will think she is conscious, and whether he can form a relationship with Ava over time.

I agree, this sort of test seems much more fitting if we are looking to test whether an AI has human-like consciousness.
