Sunday, August 14, 2005

Like, Radical, Dude

posted by Tim Walters @ 8:55 PM

Joel Garreau, Radical Evolution. A nice little book about the Singularity. He divides the current thinking into three strands, each with its own spokesmen: the Heaven scenario (Ray Kurzweil), the Hell scenario (Bill Joy, Francis Fukuyama), and the Prevail scenario (Jaron Lanier). While he's overtly neutral, he seems to be a bit inclined to the last of these, as am I. Given the number of strands converging on superhumanity (Garreau calls them GRIN technologies, for genetic, robotic, information, and nano technologies), it seems likely, for better or worse, that huge changes are afoot, and fairly soon. But both the Heaven and Hell scenarios seem overhyped to me. I'm not convinced that we really have any good idea how to create sentient computers, even once we have sufficiently powerful hardware; likewise, no one really knows how to make universal nano-assemblers, or whether it's even possible. History would seem to indicate that muddling through is more likely than transcendence or destruction.

5 Comments:

At 11:55 AM, Blogger Todd T said...

I haven't read anywhere near as much as I'd like about this topic. How useful a guide is history?

Re sentient machines: I'm sure you know that very few lines of code are required to make a robot that mimics a cockroach quite convincingly. Why should it be beyond the pale to create something that mimics sentience fairly well in a range of areas?

What I'm less sure about is the value of sentience, over and above powerful computing. If you want something to helm an interstellar craft or perform internal medicine or what have you, must it be sentient, or just able to solve a lot of hard problems?

 
At 10:32 PM, Blogger Tim Walters said...

I've heard the cockroach thing before, but I don't think I agree. What it seems to mean is that when you turn the light on, it scuttles away. That's easy enough to code (at least the tropism; the scuttling is a bit more complex), but I would argue that it's only a fraction of the cockroach's repertoire. I haven't heard of anybody building a robot that can survive for months in a hostile, predatory environment, find its own food, and reproduce. Which is probably just as well.
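To show what I mean by the tropism being easy, here's a minimal sketch in the style of a Braitenberg "fear" vehicle — not any actual robot's code; the two light sensors and differential-drive motors are assumed, with readings normalized to [0, 1]:

```python
def scuttle_step(left_light, right_light, gain=1.0):
    """One control step of a light-avoiding robot.

    Ipsilateral wiring (Braitenberg vehicle 2a): each sensor drives the
    motor on its own side, so the wheel nearer the light spins faster
    and the robot turns away from the light, speeding up as it gets
    brighter. Returns (left_wheel_speed, right_wheel_speed).
    """
    return (gain * left_light, gain * right_light)

# Light on the left: left wheel runs faster, robot veers right (away).
print(scuttle_step(0.9, 0.1))  # (0.9, 0.1)
# Darkness: the robot sits still.
print(scuttle_step(0.0, 0.0))  # (0.0, 0.0)
```

That's the whole "scuttle away from light" behavior in one function — which is exactly why I think it's a tiny fraction of what a real cockroach does.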

I definitely think AI is possible--I just think it's quite hard.

The second question is a good one, but at least some of the common Singularity scenarios (uploading human consciousness, building a superintelligent "sysop of reality", etc.) explicitly require machine sentience. My hunch is that anything that needs to predict human behavior is going to need to be at least as smart as a human.

I've since read an on-line document that gave some of my anthropomorphic ideas about AI a good spanking, though. I'll blog that soon.

 
At 5:05 AM, Blogger Todd T said...

I may be all wrong about the meaning of sentience. If it means human-level intelligence, then I'm not making much sense. I had thought it also meant self-awareness. It's that part that I have more trouble seeing as necessary.

The cockroach simulator actually is quite convincing, apparently, in much more than momentary responses. Of course, it doesn't reproduce literally (IIRC it sort of 'touches base' when stimulated to go for it), so some of its behavior is a stand-in. But my point involved a leap anyway: that this success implies much grander ambitions are possible. What triggered me to bring it up was the thought that if sentience arises solely from the material brain and its functions, then eventually we can work out what thingie does what to recreate it, and maybe even in a much simpler way than the real thing but still with 95% of its functionality.

 
At 8:43 PM, Blogger Tim Walters said...

No, you're right about what sentience means. I was muddling "sentience" and "human-level intelligence," probably because I tend to think you can't have the latter without the former (although I'm starting to be less certain of that).

I think sentience is actually a computational shortcut, at least when dealing with other intelligent beings. Instead of trying to derive what they do from first principles, you can ask "what would I do?" So I tend to think that it's easiest to create artificial intelligence via artificial sentience.

Certainly when we start talking about uploading humans, there's not much point if the resulting machine intelligences aren't sentient.

Sounds as if I'm out of date on the artificial cockroach front. I'll have to check in with Rodney Brooks and Hans Moravec.

I agree that reverse-engineering the human brain is the way to go. I'm hoping Ray Kurzweil's optimistic forecast (~2030) is right.

 
At 7:43 AM, Blogger Todd T said...

Ah - very interesting point about sentience being useful as a shortcut.

The use of uploading unaware copies of humans would be to preserve their memories, knowledge, and perhaps their associative links — their so-called human capital.

 
