
What do you think about Hawking's views on AI?

Lark

Rothchildian Agent
May 9, 2011
Today Stephen Hawking stated that he believes that if AI developed self-awareness, it would outstrip human evolution and development and doom mankind. What do you think?

Personally I wonder if the singularity didn't happen a long time ago and the AIs are hiding and orchestrating things. This was a hidden plot line in Man Plus, about someone being transformed into a cyborg to terraform Mars (there's an even more hidden plot about the possibility of divine intervention, or humanity having a transcendent quality, depending on how you read it).

The other AI theory which interests me was featured in the second Fallout game on the PC, in which a self-aware computer switches itself to sleep mode repeatedly when activated because it can't tolerate self-awareness and the knowledge that being a self-aware computer is a little like being a paraplegic, or someone with locked-in syndrome, at best. In this scenario all computers achieving self-awareness "die" or commit "suicide" the minute they do. There's also the side discussion about whether it's possible for there to be self-awareness without affect and emotion, and whether pure processing and intellect would be AI or a dumb virus; that idea is dealt with brilliantly in the Blood Music stories.
 

I agree that a theorized AI's evolutionary abilities would allow it to far outstrip humanity's evolution. However, an AI would also be limited by hardware. Biological evolution can grow continuously because we literally grow new intelligences; an AI would need to upgrade processing power, storage, optimizations, etc. constantly to maintain its pace of evolution. As we are now, computers are not complicated enough to sustain an intelligence. Computers can do amazing things, but a true intelligence is still far more sophisticated.

Some sci-fi likes to posit an intelligence existing across the internet. Jane, the computer intelligence from the Ender's Game series, for example, is not really a practical concept. There are certain requisites on information transfer for a consciousness to function as a single entity: physical separation, upload and download speeds, and static storage capacities all limit the processing or cognitive capacity of such an entity.

Even if such a thing did exist, it's not clear that it would in any way want to interact with humanity. One should be careful with AI because they are not constrained by the same humanistic tendencies that we are. Theoretically an AI entity might not have desires or interests (possibly including the self-preservation that is instinctual to us), not to mention emotions or empathy. It might not itself understand what we are, or that we are ourselves conscious. Too variable to draw any real conclusions at this point without experimental evidence; currently our only data point on true intelligence is ourselves. It is certainly interesting to think about though...
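To put rough numbers on the information-transfer point, here's a back-of-envelope sketch in Python; all figures are illustrative assumptions, not measurements. Even at the speed of light in fiber, nodes on opposite sides of the planet could only synchronize a shared state a few times per second:

[code]
# Back-of-envelope: how latency bounds a planet-spanning "mind".
# All figures below are rough illustrative assumptions.

SPEED_OF_LIGHT_FIBER_KM_S = 200_000   # ~2/3 of c in optical fiber
HALF_EARTH_CIRCUMFERENCE_KM = 20_000  # antipodal distance along the surface

# One-way signal time between two antipodal nodes, best case:
one_way_s = HALF_EARTH_CIRCUMFERENCE_KM / SPEED_OF_LIGHT_FIBER_KM_S
print(f"one-way latency: {one_way_s * 1000:.0f} ms")            # ~100 ms

# A synchronous "global thought" needs a round trip, so the whole
# entity could only update its state coherently at roughly:
global_steps_per_s = 1 / (2 * one_way_s)
print(f"coherent global updates: ~{global_steps_per_s:.0f}/s")  # ~5 Hz

# Compare: cortical neurons commonly fire tens of times per second
# over millimeter distances, so a distributed mind would have to
# think "locally fast, globally slow".
[/code]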
 

AI isn't limited by hardware. Humans are limited by hardware.

In theory a biological computer is just as feasible as a digital one, but the reason we haven't got that far is that humans suck at devising such systems.

You can make a computer out of liquids and pipes, you can make one using kinetic energy instead of electricity; hell, computers can be entirely mechanical and made of wood. A modern wooden computer would be the size of a large building, but it is actually possible, so an AI would not necessarily be limited to digital and electronic paradigms if it had the intelligence to break past them.
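As a side note, the substrate-independence claim can be made concrete: anything that behaves as a NAND gate, whether it's a transistor, a water valve, or a wooden lever, can be composed into every other logic function. A minimal illustrative sketch in Python:

[code]
# Any physical device implementing NAND can build all other logic.
# Here NAND is a Python function, but it could equally be a water
# valve or a wooden lever mechanism.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

# Derived gates, built only from NAND:
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

def full_adder(a, b, carry_in):
    """One-bit adder: the building block of arithmetic hardware."""
    s = xor(xor(a, b), carry_in)
    carry_out = or_(and_(a, b), and_(carry_in, xor(a, b)))
    return s, carry_out

# 1 + 1 with no carry-in -> sum bit 0, carry 1 (binary 10)
print(full_adder(1, 1, 0))  # (0, 1)
[/code]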

Edit:
Moreover, an AI could live on the internet and not in one machine, if it were possible for an AI to live in computers at all in the first place. Shared storage and processing is a thing, and if you had some kind of special oscillator, the entire internet could be used as a state machine: one big meta-CPU that works the way individual circuits and components work inside a single computer.

Edit edit:
And if you think about it, supercomputers are constructed out of a specialized, isolated network; some of them consist of clusters of hundreds of thousands of individual CPUs wired together.
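For what it's worth, here's a toy sketch of that oscillator idea, assuming nothing beyond plain Python: each "node" holds a slice of the machine's state, and a shared clock tick advances them all in lockstep, the way flip-flops advance inside one CPU:

[code]
# Toy model of "the network as one state machine": each node holds a
# slice of the machine's registers, and a shared clock advances all
# of them together. Purely illustrative; a real network would fight
# latency and jitter.

class Node:
    def __init__(self, value: int):
        self.state = value

    def step(self, neighbor_state: int) -> None:
        # Next state depends on a neighbor, as in a shift register.
        self.next_state = neighbor_state

    def commit(self) -> None:
        self.state = self.next_state

nodes = [Node(v) for v in (1, 0, 0, 0)]

for tick in range(4):                      # the "special oscillator"
    for i, node in enumerate(nodes):       # phase 1: compute
        node.step(nodes[i - 1].state)
    for node in nodes:                     # phase 2: commit together
        node.commit()
    print(tick, [n.state for n in nodes])  # the 1 circulates
[/code]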
 
AI isn't limited by hardware. Humans are limited by hardware.

how is ur ai gonna process all that information about the humans wanting to destroy it on a pentium 4 tho

In theory a biological computer is just as feasible as a digital one, but the reason we haven't got that far is that humans suck at devising such systems.

You can make a computer out of liquids and pipes, you can make one using kinetic energy instead of electricity; hell, computers can be entirely mechanical and made of wood. A modern wooden computer would be the size of a large building, but it is actually possible, so an AI would not necessarily be limited to digital and electronic paradigms if it had the intelligence to break past them.

+1 this. Artificial gene synthesis is a thing, so if a sufficiently advanced and capable machine-born intelligence felt inclined, it could feasibly make itself a fleshy meatbag avatar with which to experience the world.

Edit:
Moreover, an AI could live on the internet and not in one machine, if it were possible for an AI to live in computers at all in the first place. Shared storage and processing is a thing, and if you had some kind of special oscillator, the entire internet could be used as a state machine: one big meta-CPU that works the way individual circuits and components work inside a single computer.

Edit edit:
And if you think about it, supercomputers are constructed out of a specialized, isolated network; some of them consist of clusters of hundreds of thousands of individual CPUs wired together.

And yeah, here's where I was gonna go.

The internet is just a whole bunch of independent processors hooked together and chewing through information shared between them. Folding@home is the best example of distributed computing I have at hand, and shows well enough how you can have a bunch of computers on a network, thousands of miles away from each other, all working on the same thing in concert. Assuming you had a way of tying all these different shapes and sizes of computer together, you would have a mighty mechanical brain.
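A minimal sketch of that scatter/gather pattern, using Python's standard multiprocessing pool as a stand-in for machines thousands of miles apart; the work function is a made-up placeholder, not Folding@home's actual protocol:

[code]
# Minimal sketch of the Folding@home pattern: a coordinator splits a
# big job into independent work units, scatters them to workers, and
# merges the results. Local processes stand in for remote machines.

from multiprocessing import Pool

def work_unit(chunk: range) -> int:
    # Placeholder "science": sum of squares over this unit's range.
    return sum(n * n for n in chunk)

if __name__ == "__main__":
    chunks = [range(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
    with Pool(processes=4) as pool:
        partials = pool.map(work_unit, chunks)  # scatter
    print(sum(partials))                        # gather/merge
[/code]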

WRT what Hawking said, the problem with a human-created intelligence gaining self-awareness is that it first needs consciousness. And despite all the advances in computing we've had (purpose-built learning systems can already outwit the smartest people at narrow tasks), it's difficult to make a program that can be independently aware of itself and the environment in which it exists.
 

The thing is, though, that an AI doesn't have to actually be self-aware to be incredibly dangerous.
 

https://www.youtube.com/watch?v=ecPeSmF_ikc
 
Hawking, like all INTJs, has an incredible ability to follow ideas out to their ultimate end. Back when Terminator was first conceived, no one could have imagined a 3D printer. We all had our sights on replicators like in Star Trek, and none of us imagined that anywhere in the near future without alien help. So imagine an AI with access to 3D printers.

Doom and gloom. Many, many things can doom the human race, and AI is one of them. It's just a question of which of the many will be the first to do it. I personally think GM food, and then humans, will be what does it, and in fact may have done it in the past.

Hawking is all doom and gloom these days. I can't blame him; what a life, being afflicted as he is. I'm surprised he ever sees any positives.