TheSingularity/Talk

HomePage | TheSingularity | Recent Changes | Preferences

Showing revision 17
Comment by Jimbo Wales: I think that this is likely to happen within my own natural lifespan. I'm 34 now, and so in 2031, I will be 64. If I take care of myself, I should be able to live to 74-104, even given today's technology.

It strikes me as a virtual certainty that we will have massively cheap machine intelligence by then. And that's going to be, for better or worse, the most amazing thing that has ever happened on Earth.


While we're firing off personal opinions: it's an open question exactly how powerful a computer one would need to support consciousness (heck, whether a computer can support it at all is still somewhat open). A friend of mine, though, suggested this possibility: you need basically as many connections as an ape brain has. In our vacuum datarum I would go with this, since it's obvious and fits with consciousness arising evolutionarily when it did.

In that case, we have a long way to go, since most of our computers are about as powerful as worms. Good at different things, of course, but I don't see them helping too much. So don't wait up...well, I guess you should if you can, but that's a different matter. :)

Btw, never extrapolate trends too far! Technology can't surpass physical limits, and such limits certainly seem to exist. So its growth should be logistic rather than exponential; we're just still far enough from the limits we know of that the two curves can't yet be told apart. -- JoshuaGrosse
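The logistic-vs-exponential point can be checked numerically. A minimal sketch (with a made-up growth rate and a made-up carrying capacity standing in for "physical limits"): far below the limit the two curves are practically identical, so you can't tell from early data which one you're on.

```python
import math

def exponential(t, x0, r):
    # Unconstrained exponential growth: x(t) = x0 * e^(r*t)
    return x0 * math.exp(r * t)

def logistic(t, x0, r, K):
    # Logistic growth saturating at a carrying capacity K
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

# Hypothetical numbers, chosen only for illustration
x0, r, K = 1.0, 0.5, 1e9

# Far from the limit the two curves are nearly indistinguishable...
early = abs(exponential(10, x0, r) - logistic(10, x0, r, K)) / exponential(10, x0, r)

# ...but near the limit they diverge completely: the exponential sails
# past K while the logistic levels off just below it.
late_exp = exponential(60, x0, r)
late_log = logistic(60, x0, r, K)
```

With these parameters the relative difference at t=10 is on the order of one part in ten million, which is exactly why extrapolating the early part of the curve tells you nothing about where it flattens.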


Oh, I agree with what you say. This is just a fun idea more than anything else.

There is a fairly respectable argument that the processing power of the human brain is something on the order of 100 million million (10^14) to 1 billion million (10^15) instructions per second. IBM is currently planning to complete, by 2005, a computer that will handle 10^15 instructions per second. Of course, such an expensive ($100 million) computer will not be used for something as silly as approximating human intelligence -- it will be used to work on problems in protein folding, as I understand it. But let's suppose that 20 years from now, a computer that powerful is cheaply available.

Given the history of computers so far, and given the technological advances that seem to be "in the pipeline", it doesn't seem totally outrageous to suggest 2021 as a date for that.
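The extrapolation above can be made explicit. A back-of-envelope sketch, assuming (hypothetically) that the price of a fixed amount of computing power halves every 18 months -- the halving time is the whole argument, and the real figure is uncertain:

```python
# Back-of-envelope: when does the $100 million machine of 2005 cost $1,000?
start_year = 2005
start_cost = 100e6      # the ~$100 million machine mentioned above
target_cost = 1_000     # "cheaply available"
halving_years = 1.5     # assumed price-halving time; this is the key guess

year, cost = start_year, start_cost
while cost > target_cost:
    year += halving_years
    cost /= 2

# With these assumptions the $1,000 price point arrives around 2030 --
# later than 2021, but comfortably inside the 2051 margin of safety.
```

Stretch the halving time to two or three years and the date slides out by a decade or more, which is why the extra 30 years of margin below matters.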

But, tack on an extra 30 years to be safe. So, will we have it by 2051? --Jimbo Wales

Oh, when I say processing power I'm not referring to speed. A fast calculator is still a calculator. I'm referring to the number of internal connections the system has. Our brains have a lot of neurons connected in very complex ways, so consciousness can emerge relatively easily. Present computers have relatively few gates hooked up to each other simplistically, so there's no such room. And they're not getting that much better yet.

Half a century is a long time, of course, so this is all speculation. But fun ideas can be grounded in reality, and if you learn something thinking about them it's all to the good. -- JoshuaGrosse

The number of hardware connections isn't what's important. AI will never emerge from raw hardware, and nobody expects it to. What matters is the number of connections in software, which can greatly surpass those in hardware. Silicon has a raw speed at least three orders of magnitude higher than meat, and this speed can be harnessed by multi-tasking to create a much larger neural net than a straightforward implementation would allow. In any case, multi-tasking can even be done at the hardware level with FPGAs. -- RichardKulisz
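The multi-tasking trade-off can be sketched in a few lines. This is an illustration of the scheduling idea only, not an AI: one serial loop plays the role of a single fast processor, updating each neuron of a virtual net in turn, so the net's connectivity is limited by memory rather than by how many physical gates run in parallel. The weights here are random, purely for demonstration.

```python
import random

random.seed(0)
N = 300  # virtual neurons -- far more than the "hardware" has parallel units
weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
state = [random.choice([0.0, 1.0]) for _ in range(N)]

def step(state):
    # One tick of the virtual net costs N*N serial operations: raw speed
    # is traded for connectivity, as described above.
    return [1.0 if sum(w * s for w, s in zip(row, state)) > 0 else 0.0
            for row in weights]

state = step(state)
```

Each simulated tick is N times slower than a fully parallel update would be, which is where the three-orders-of-magnitude speed advantage of silicon gets spent.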

Software connectivity is still way below what it needs to be, though, and again I don't think we've been making geometric strides on that front. But you're right, it is a software problem, so gates aren't especially relevant. There isn't nearly as convenient a name for the basic software structural unit, is there?

Afraid not. The basic structural unit depends on how you construct the AI. It can be neurons in a network or frames or whatever.


If technological progress has been following an exponential curve for a very long time, then there will come a point at which technological progress accelerates essentially to infinity.
I don't follow the logic in this. The two clauses (either side of the comma) are written as if the former implies the latter, which it does not. Can someone explain, please? -- Gareth Owen

I think the only sense that can be made of it is to take the phrase "essentially to infinity" metaphorically.


Robin Hanson and I are two of the somewhat rare Extropians who are skeptical about the Singularity. My personal belief is that while we will create machine intelligence soon, and it will certainly surpass human intelligence at some point and begin creating new intelligent machines, this will not cause a positive-feedback loop, because intelligent beings by themselves don't invent and create things -- only whole economies invent things, often aided by individual beings, who certainly deserve some accolades but whose contributions are actually quite small in the big picture. --Lee Daniel Crocker

I suppose the question is, looking at it from an economics point of view, how cheap is machine intelligence compared to human intelligence? Right now, human intelligence is far cheaper by almost any measure. But suppose Intel could have the equivalent of 1 Einstein for $1,000. Wouldn't it make sense for them to quickly whip up 1 million of them, for $1 billion? What could 1 million Einsteins do? Even if the individuals among them only contributed incremental gains, the total would be stupendous.

But my contention is that there are only so many incremental improvements that can be made to the current state of technology, and even if we had enough Einsteins to find all of them, they only constitute progress when they build on each other -- and that requires communication and organization, which take time. I don't argue that the growth will not happen; I argue that a technological economy has an ideal size, much like a business does, beyond which it becomes inefficient, and that the rate of growth will level off before it reaches a singularity.


Am I the only one who thinks we might also be headed for a social singularity rather than a technological one? Or rather, that if we have the technological one without the social one, it might be a Bad Thing? --JohnAbbe, fond of e.g. Pierre Teilhard de Chardin


Edited September 24, 2001 11:03 pm by 66.81.45.xxx