
AI is not as scary as you think

wolly.green

Hi everyone,

I posted these videos almost three years ago, and I'd like to get your feedback.

When I made them, I didn't think to ask for feedback, but that's changed now. Watch the videos in sequence and fast-forward wherever I repeat myself.

Please tell me what you think. @Ren, I'd especially like to hear your thoughts. Thanks.

https://www.youtube.com/playlist?list=PL2L2mmP4Q3Zhzbru_buXTpbDzqISUTTG8
 
Reactions: Asa and Ren

Sounds great, man. I'll have a listen and get back to you with my thoughts.
 
Reactions: wolly.green
Sure, it's something to play with, or to use in a dumbed-down state, but opening Pandora's box, much less integrating with it, is not a great idea. I'm not sure why this is a flame that some moths are drawn to, but many of us are all too aware of the risks.
 
Reactions: Ren

Is this a riddle? lol
 
Reactions: slant and Wyote

The videos make sense to me. But I've heard you talk about this stuff before, so it's easy for me to follow. I'm not sure it would be very palatable for people who aren't familiar with these concepts.

Have you considered reading it aloud yourself? Or paying someone to read it? Personally, I prefer reading to listening.
 
Reactions: wolly.green

I'm not really sure that AI is a "Pandora's box". The fear-mongering is really common, but a lot of it is very poorly thought out.
 
Reactions: wolly.green

I don't subscribe to fear, even though the vast majority of humanity does. That said, it does make me wonder how some can't see the risks. Then again, the genie was let out of the bottle with a lot of other things too, like GMO foods, etc. I guess humanity must learn the hard way, as always. Perhaps once people have integrated with it, they'll lose things like emotions and free will, then wonder why a big part of themselves is missing.
 

Right. Thanks for the input on the format.
 
Reactions: Ren
"Being smart is not the same as wanting something". Pinker got it right, I think.

As far as I can tell (and based on what you say in your videos) there is no sufficient reason to assume that:

a) AI will develop a semantics
b) AI will develop reasons for action and actually act on the basis of those reasons.

Searle is good at articulating a), while people like Chomsky (and apparently Pinker) are good at articulating b).

Now, in the absence of either a) or b), let alone both, the grounds are insufficient to consider AI scary or as spelling the possible end of the human race. There is no sufficient reason to think that AI would 1) articulate the desire to erase humanity; 2) be able to act upon that desire; or 3) have the resources to do so. The need to correct for error is also related to 3).
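
To lay the structure out explicitly (this schematic is my own reading of the post, not anything Pinker or Searle wrote): let $S$ stand for "the AI develops a semantics", $M$ for "the AI develops and acts on reasons", and $D$ for "the AI poses an existential danger". The premise is that danger presupposes both capacities, so the absence of either is enough to block the conclusion:

$$D \implies (S \land M), \qquad \text{hence} \qquad (\lnot S \lor \lnot M) \implies \lnot D.$$

This is just contraposition plus De Morgan; the substantive premise doing all the work is $D \implies (S \land M)$, which is what points 1) to 3) above are meant to support.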
 
Reactions: java
In terms of your videos as presentations on a particular topic:

I think you have all the right content (which is a great start) to make your point, but there are a number of elements that can distract from it. The most obvious one is the use of the simulated voice. It would be much better if you made those points yourself, so we could see your expressions and so on.

The second is that I think you spend too much time expanding on examples and not enough time organising your claims (especially in the last video), so the listener may get a little lost, especially without prior familiarity with the issues at hand. One concise example per point is good enough; you don't need multiple examples every time. Just make sure that every couple of minutes or so, the listener is reminded of the point you are making.
 
As someone who doesn't speak or understand English very well, these videos are incomprehensible to me.
I would rather be able to read the spoken text in parallel.
Or have it as actual text, which I could then have translated.
For this reason I didn't watch the videos; I switched them off after five seconds.
 
 
Here are my general takes on the matter:

- Whether or not AI will 'really' be conscious, or cross from the syntactic to the semantic, seems to me independent of the threats it poses. That is, whether or not critics of functionalist portrayals are right, functionalism covers how the AI behaves; and if that behavior is catastrophic (even if it's metaphysically no more interesting than a tsunami being catastrophic), that's enough to worry about.

- That said, I do tend to think there's reason to be skeptical that AI will be scary in the 'Terminator' sense: there's no reason to think it will even behave as if it were acting on emotions like anger, or, for that matter, do anything harmful at all. It may just be great at figuring things out without really having 'motives' that could diverge from our own.

- There is, of course, the threat that even if AI is designed without 'motives' or 'emotions' (in quotes because, depending on one's views, one might rule those out in their truer senses while still agreeing that a behavioral/functional isomorph of them is coherent), some foolish or evil human being will build one with such things. That's technically a danger, but so is the existence of nuclear weapons, and at least so far we're not seeing warheads flying in every direction. So there's no reason to hastily conclude that humanity will uncontrollably misuse the powers of AI.
 
AI is completely incapable of anything close to a singularity at this point. The bigger concern is surveillance capitalism, and the fact that we're being groomed to be comfortable with getting computer chip implants. It started with wearables tracking everything we do and sending the data to advertisers. Imagine how much more efficient corporations will be when they have access to the actual electrical impulses of your brain. We know that even with fMRI technology as open to interpretation as it is, neuromarketing has already been used successfully to sell products. This is my concern with AI, and it has less to do with the technology itself than with the people behind it and the lack of consumer rights in this country.
 
Reactions: Sorn