Discussion about this post

Mike Smith

In a lot of contemporary software development, the agile methodology now mixes use and development. It's a tacit recognition that our ability to develop using the old waterfall method (documenting requirements, followed by specs, coding, testing, production, etc.) was always far more limited than anyone wanted to acknowledge. Now a minimum viable product is developed and put in front of users, who provide feedback that is incorporated into further development. At some point it's good enough for real use, but the feedback-development cycle continues (usually for some contractually specified number of cycles or time period).

It's interesting that AI training hews closer to the old waterfall method. There's a lot to be said for being more "organic" about it, although I suspect there are technical challenges still to resolve, such as the risk that allowing an AI to update during use could lead to unpredictable results. The systems we have remain a lot more limited than the hype implies. Not that they won't eventually get there.

I've long found our reference to ourselves as a "general intelligence" somewhat dubious. It seems like we're a survival intelligence. Most of the fears about developing AI are that it will somehow turn into a survival intelligence of its own. But a survival intelligence seems like a very specific type of architecture. There are some dangers associated with tool intelligence, but the idea that it will accidentally become a survival intelligence seems similar to worrying that a word processor will accidentally become a travel reservation system.

Excellent post, Suzi!

James Cross

I proposed a Naked and Afraid test for AI once.

https://broadspeculations.com/2023/12/21/naked-and-afraid-ai-test/

I'm guessing the standard AGI tests are highly oriented toward a subset of human capabilities. At any rate, I thought I read that AI is already performing at an 80-90% AGI level. Going past AGI may require AI to feed its own data back to itself, but that presents the challenge of subtle biases and idiosyncrasies becoming reinforced. The end result could be dangerous, useless, or both.
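The reinforcement worry can be made concrete with a toy simulation (my own illustrative sketch, not anything from the linked post): suppose each retraining generation fits the previous model's outputs, which carry a tiny systematic skew and no fresh real-world data. The skew compounds across generations.

```python
# Toy sketch of bias reinforcement when a model trains on its own outputs.
# All names and the bias value are hypothetical, chosen for illustration.

def retrain_on_own_output(estimate: float, bias: float = 0.02) -> float:
    """One generation: outputs skew slightly upward, and the next model
    fits those skewed outputs exactly (no correction from real data)."""
    return min(1.0, estimate + bias)

true_value = 0.5            # what fresh real data would actually show
estimate = true_value       # generation zero starts out accurate
history = [estimate]
for generation in range(10):
    estimate = retrain_on_own_output(estimate)
    history.append(estimate)

print(f"true value: {true_value}, after 10 generations: {estimate:.2f}")
```

Even a 2% per-generation skew drifts the estimate from 0.50 to 0.70 in ten rounds; with no external data in the loop, nothing pulls it back.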

66 more comments...
