So, in my colloquium class we're discussing dangerous ideas. The topic of artificial intelligence and life came up -- here are a few questions to consider:
Do you think that humanity is "special?" Do we have a purpose, or are we just accidents along the evolutionary chain, a "biological phenomenon?" More importantly, if we are just accidents, what effect would that knowledge have on mankind? Are we better off considering ourselves to be something greater than we really are?
Is man comparable to a machine?
Could our rationality really be recreated in a machine? Would it be possible to create a sense of not only rationality, but also human emotion and spirituality? What effects would this have on mankind? Would it be constructive or destructive? Should we attempt to undertake this, even if we can foresee adverse effects?
These are just possible questions and speculations -- I want to hear input from as many different perspectives as possible. What do you think?
Here are the articles we've read this week pertaining to this topic, if you want to know what sparked these questions or are just interested:
(The article on Transhumanism by Fukuyama) http://www.mywire.com/a/ForeignPolicy/Worlds-Most-Dangerous-Ideas/564801?page=4
http://goliath.ecnext.com/coms2/gi_0199-2102480/Programming-the-post-human-computer.html