Saturday, November 29, 2008

on wiring stuff into AGI systems

On Fri, Nov 28, 2008 at 9:21 PM, Abram Demski wrote:
>
> Ben,
>
> If I were running an AI company I might not be down-to-earth enough to
> admit the necessity of so much preprogrammed bias... I'd want the
> thing to learn about physics on its own :).
>
> --Abram

Actually, if you were running an AI **company**, you might feel huge commercial pressure to hard-wire as much behavior as possible into your AI system, so as to get it to commercially valuable functionality as quickly as possible.

The same might hold if you were running a grant-funded AI project within academia: getting your grant renewed would depend on your system's functionality at the end of a 1-3 year period, and the quickest way to get impressive observable functionality would be to hard-wire a bunch of stuff.

;-)

Of course, as a mathematician, I feel the pull toward pure learning systems that have a minimal set of innate biases and built-in structures.

And yet, as a scientist, I look at the human brain and see that it is **not** built that way. The brain has *so much* special-purpose built-in structure, and its general intelligence is a fairly small layer sitting atop all that specialized machinery.

Logically, it would seem that the computational resource requirements for creating a "pure learning system" would be far greater than those for creating a system with a conceptual architecture more like the brain's. So then, which approach is going to reach the end goal earlier along the Moore's Law curve?

ben
