It strikes me as a virtual certainty that we will have massively cheap machine intelligence by then. And that's going to be, for better or worse, the most amazing thing that has ever happened on Earth.
In that case, we have a long way to go, since most of our computers are about as powerful as worms. Good at different things, of course, but I don't see them helping too much. So don't wait up...well, I guess you should if you can, but that's a different matter. :)
Btw, never extrapolate trends too far! Technology can't surpass physical limits, which certainly seem to exist. So its growth should be logistic rather than exponential, and we're far enough away from the limits we know of that it makes sense we can't yet tell the two apart. -- JoshuaGrosse
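The point above has a simple mathematical illustration: far below its carrying capacity, a logistic curve tracks a pure exponential almost perfectly. A minimal sketch (the growth rate and capacity here are arbitrary illustrative numbers, not claims about any real technology):

```python
import math

def exponential(t, x0=1.0, r=0.5):
    """Pure exponential growth: x0 * e^(r*t)."""
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=0.5, K=1e9):
    """Logistic growth starting at x0 with carrying capacity K."""
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

# Far from the limit K, the two curves are nearly indistinguishable;
# only near K does the logistic curve flatten out.
for t in (0, 10, 60):
    e, l = exponential(t), logistic(t)
    print(f"t={t}: exp={e:.3g}  logistic={l:.3g}")
```

Early on the relative difference is tiny, which is exactly why observers inside the growth phase can't tell the two curves apart.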
There is a fairly respectable argument that the processing power of the human brain is something on the order of 100 million million (10^14) to 1 billion million (10^15) instructions per second. IBM currently plans to complete, by 2005, a computer that will handle 10^15 instructions per second. Of course, such an expensive ($100 million) computer will not be used for something as silly as approximating human intelligence -- it will be used to work on problems in protein folding, as I understand it. But let's suppose that 20 years from now, a computer that powerful is cheaply available.
Given the history of computers so far, and given the technological advances that seem to be "in the pipeline", it doesn't seem totally outrageous to suggest 2021 as a date for that.
But, tack on an extra 30 years to be safe. So, will we have it by 2051? --JimboWales
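The timeline argument above can be made explicit with a back-of-the-envelope calculation. Assuming price/performance halves every 18 months (a common Moore's-law-style rule of thumb, not a guarantee) and picking an arbitrary $1,000 as "cheaply available":

```python
import math

start_cost = 100e6   # dollars: the hypothetical 2005 machine
target_cost = 1e3    # dollars: what we'll call "cheaply available"
halving_years = 1.5  # assumed price/performance halving time

# Number of halvings needed to go from $100M to $1K, and the years that takes.
halvings = math.log2(start_cost / target_cost)
years = halvings * halving_years
print(f"{halvings:.1f} halvings, about {years:.0f} years")
```

That comes out to roughly 25 years after 2005, i.e. around 2030 -- comfortably inside the 2051 cushion, if the assumed halving rate holds.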
Oh, when I say processing power I'm not referring to speed. A fast calculator is still a calculator. I'm referring to the number of internal connections the system has. Our brains have a lot of neurons connected in very complex ways, so consciousness can emerge relatively easily. Present computers have relatively few gates hooked up to each other simplistically, so there's no such room. And they're not getting that much better yet.
Half a century is a long time, of course, so this is all speculation. But fun ideas can be grounded in reality, and if you learn something thinking about them it's all to the good. -- JoshuaGrosse
The number of hardware connections isn't what's important. AI will never emerge from raw hardware and nobody expects it to. What matters is the number of connections in software, which can greatly surpass those in hardware. Silicon has a raw speed at least three orders of magnitude higher than meat. This speed can be harnessed by multi-tasking to create a much larger neural net than a straightforward implementation would allow. In any case, multi-tasking can be done at the hardware level with FPGAs. -- RichardKulisz
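The multi-tasking idea can be sketched concretely: one fast serial processor updates many simulated neurons within a single "biological" tick, trading its speed surplus for connectivity it lacks in hardware. All the numbers here are illustrative assumptions, and the threshold unit is the crudest possible neuron model:

```python
import random

N = 10_000  # simulated neurons
K = 100     # random incoming connections per neuron

# Random sparse connectivity: each neuron listens to K others.
inputs = [[random.randrange(N) for _ in range(K)] for _ in range(N)]
weights = [[random.uniform(-1, 1) for _ in range(K)] for _ in range(N)]
state = [random.uniform(0, 1) for _ in range(N)]

def tick(state):
    """One synchronous update pass: every neuron recomputes its
    activation from its inputs. Done serially this costs N*K
    operations -- the fast hardware pays in steps for connections
    it doesn't physically have."""
    new = []
    for n in range(N):
        total = sum(w * state[i] for w, i in zip(weights[n], inputs[n]))
        new.append(1.0 if total > 0 else 0.0)  # crude threshold unit
    return new

state = tick(state)
print(len(state))  # still N neurons, all updated in one serial pass
```

If the processor is a thousand times faster than biology per operation, it can afford roughly that many serialized updates per biological timestep -- which is the sense in which software connectivity can exceed hardware connectivity.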
Software connectivity is still way below what it needs to be, though, and again I don't think we've been making geometric strides on that front. But you're right, it is a software problem, so gates aren't especially relevant. There isn't nearly as convenient a name for the basic software structural unit, is there?
Afraid not. The basic structural unit depends on how you construct the AI. It can be neurons in a network or frames or whatever.