Free speech on social media | Page 2 | INFJ Forum

Free speech on social media

I don't think making laws censoring what people say online is the answer.

Social media companies are private and can make their own rules. As stated above, it is in their best interest to create discord, so the avenues for users to protect themselves are limited.

Yesterday I watched part of a video about how TikTok stars are harassed online and are having emotional and mental breakdowns because of the hate they receive. (This is why stars do not read reviews of their work. Critics are toxic.) While it is easy to say it is their own fault for becoming TikTok stars, people don't "deserve" to be harassed. This situation is an amplification of the toxic behaviors people exhibit online. If social media companies were more responsible, users could choose to filter harassment whenever someone tried to post a comment containing certain trigger words. Giving users control to block trigger words in comments on their own pages doesn't interfere with free speech. It just creates individual boundaries: while you are on someone's page, you can't comment using the words they've blocked.
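The per-page word filter described above could work something like this minimal sketch (the class name, word list, and behavior are purely illustrative, not any platform's actual API):

```python
# Hypothetical sketch of a per-user comment filter: each page owner keeps a
# personal blocklist, and comments containing a blocked word are rejected
# before they ever appear on that owner's page.
import re

class PageFilter:
    def __init__(self, blocked_words):
        # Compile one case-insensitive pattern with word boundaries, so
        # "loser" is caught but an innocent word like "closer" is not.
        escaped = map(re.escape, blocked_words)
        self.pattern = re.compile(r"\b(" + "|".join(escaped) + r")\b",
                                  re.IGNORECASE)

    def allows(self, comment: str) -> bool:
        """True if the comment contains none of the owner's blocked words."""
        return self.pattern.search(comment) is None

# The page owner, not the platform, decides what is filtered.
page = PageFilter(["ugly", "loser"])
print(page.allows("Great video!"))         # True
print(page.allows("You're such a LOSER"))  # False
```

Nothing here censors the commenter elsewhere; it only enforces one user's boundary on their own page.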

We all know the internet makes it easy for cruel people to act their worst because there are no consequences. It is also human nature for people to have negative feelings about anyone who reaches success, or who has anything they want but lack. (This is why people lose friends when they start to get successful or when they make positive changes in their lives.) There are all kinds of other psychological issues that contribute to the reason people bully or attack others. Add social pressure (both on and offline). Some people think it is "cool" to be mean.


I get so many creepers and so much spam in the comments on my website. I'd like a way to filter those comments. I'd like a way to filter explicit and mean comments from social media, too. I do not want internet censorship.
 
Educating kids in social media literacy, and media literacy in general, is the key. Not just for one semester: it needs to be a required subject, possibly throughout high school. Maybe even start earlier, but get to the meatier aspects in high school. Teach people to identify misleading sources and propaganda, and to recognize when their emotions are being manipulated by content. So many misinformation articles target the emotions. They pull you in to get you riled up and wanting more. You end up poisoning your mind with it, and the tech companies make more money as you engage more and more with that content. Education is really the only way we get control of this monster. Our minds must be able to parry the barrage of misinformation and propaganda. Ideally, engagement with that content then drops and it stops being profitable.

It would be interesting to see a breakdown by generation of who is most likely to engage with extremist content and propaganda. I have my opinion on who is susceptible. I'm hopeful the younger generations are less so.
 
Social media is a new kind of beast that flips conventional arguments about free speech. Here are a few facts you may not be aware of.

Social media companies profit immensely by amplifying extreme content, which attracts more views and drives up advertising revenue. For example, one week after the Capitol attacks, military gear ads were still being attached to content about the US elections and the attacks, despite Facebook staff and external watchdogs flagging these instances. [1]

Analyzing over 2 million recommendations and 72 million comments on YouTube in 2019, researchers demonstrated that viewers consistently moved from watching moderate to extremist videos; simulation experiments run on YouTube revealed that its recommendation system steers viewers towards politically extreme content. The study notes "a comprehensive picture of user radicalization on YouTube". [2]

Exposure to a fake political news story can rewire your memories: in a study where over 3,000 voters were shown fake stories, many not only "remembered" the fake stories as if they were real events but also "remembered" additional, rich details of how and when the events took place. [3]

The order in which search engines present results has a powerful impact on users' political opinions. Experimental studies show that when undecided voters search for information about political candidates, more than 20% will change their opinion based on the ordering of their search results. Few people are aware of bias in search engine results or how their own choice of political candidate changed as a result. [4]

Game theory analysis has shown how a few bots with extreme political views, carefully placed within a network of real people, can have a disproportionate effect within current social media systems. Studies demonstrate how an extremist minority political group can have undue influence using such bots—for example, reversing a 2:1 voter difference to win a majority of the votes. [5]

During the last three months of the 2016 US election, the top fake political headlines were shared more on Facebook than the top real news stories. [6]

The outcomes of elections around the world are being more easily manipulated via social media: during the 2018 Mexican election, 25% of Facebook and Twitter posts were created by bots and trolls; during Ecuador's 2017 elections, President Lenín Moreno's advisors bought tens of thousands of fake followers; and China's state-run news agency (Xinhua) has paid for hundreds of thousands of fake followers, tweeting propaganda at the Twitter accounts of Western users. [7]

The 2017 genocide in Myanmar was exacerbated by unmoderated fake news; Facebook had only 4 Burmese speakers to monitor its 7.3 million Burmese users. [8]

References:

[1] https://www.buzzfeednews.com/article/ryanmac/facebook-profits-military-gear-ads-capitol-riot
[2] https://dl.acm.org/doi/abs/10.1145/3351095.3372879
[3] https://journals.sagepub.com/doi/10.1177/0956797619864887
[4] https://www.pnas.org/content/112/33/E4512
[5] https://www.nature.com/articles/s41586-019-1507-6
[6] https://www.buzzfeednews.com/articl...ction-news-outperformed-real-news-on-facebook
[7] https://www.nytimes.com/interactive/2018/01/27/technology/social-media-bots.html
[8] https://www.reuters.com/investigates/special-report/myanmar-facebook-hate/

To anyone who wasn't born yesterday, this is a Trojan horse, very typical of a lot of the darker personalities out there, or at best of what the Soviets used to call a "useful idiot": someone who paves the way for totalitarianism to take hold. Those who do not learn from history are doomed to repeat it, and many these days not only fail to learn from it but actively wish to repeat the mistakes of the past under new circumstances, which may doom many the world over. The next time around it will be billions instead of millions who pay the price, while those responsible either get off easy or never face justice for their actions.
 
Some posts here are so dramatic. Maybe for good reason, but from my perspective, social media just looks like a bunch of boring people who are fiddling around with their phones at every opportunity.

"Users" don't seem to frequent the areas I'm usually around, but I see them in cafes and bus shelters.

Even if they're being brainwashed into unhappy consumerism, they don't look like they'd ever do anything interesting anyhow. There just seems to be a category of people who in different generations are equally boring, but just waste their time on different stuff.

This generation it's social media; the previous two generations were television addicts. Prior to that, you had music idiots who hung around record stores, jukeboxes, and wireless radios. They don't do anything or change anything. They just waste a lot of time and money.
 

The Center for Humane Technology has proposed two ways of filtering content without limiting free speech.

One way is to limit the number of comments a user can post in a 24-hour period, similar to what Reddit used to do. Limiting the number of comments a user can leave forces us to think twice about whom we engage with.
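The first idea can be sketched as a rolling-window rate limiter (the storage, cap, and function names here are illustrative assumptions, not anything the Center for Humane Technology has specified):

```python
# Hypothetical sketch: cap how many comments a user may post in a rolling
# 24-hour window. Attempts over the cap are rejected rather than queued.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 24 * 60 * 60
MAX_COMMENTS = 10  # illustrative daily cap

_history = defaultdict(deque)  # user id -> timestamps of recent comments

def try_post(user_id: str, now: float = None) -> bool:
    """Record a comment if the user is under the cap; return whether it posted."""
    now = time.time() if now is None else now
    timestamps = _history[user_id]
    # Drop timestamps that have aged out of the 24-hour window.
    while timestamps and now - timestamps[0] >= WINDOW_SECONDS:
        timestamps.popleft()
    if len(timestamps) >= MAX_COMMENTS:
        return False  # over the daily limit; come back later
    timestamps.append(now)
    return True
```

Because the window rolls, a burst of comments today quietly restores posting rights tomorrow; no account suspension is needed.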

A second way is to add friction to instant-share functions. When a user decides to share a meme or other content, a prompt can ask whether they're sure they want to share. Studies show that when people are asked to reconsider, they are significantly less likely to hit share. This could help stem the virality of cruel, defamatory content.
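The second idea, a confirmation step between "share" and "shared", could look roughly like this (the function names and the `confirm` callback are illustrative; a real client would show a dialog instead):

```python
# Hypothetical sketch of share friction: insert a reconsideration prompt
# before content is actually published.
def publish(content: str) -> None:
    # Stand-in for the platform's real publish step.
    print(f"shared: {content}")

def share_with_friction(content: str, confirm) -> bool:
    """Ask the user to reconsider before sharing; share only on confirmation."""
    if confirm(f"Are you sure you want to share this?\n> {content}"):
        publish(content)
        return True
    return False  # user thought twice and backed out

# Simulated users: one confirms, one reconsiders.
share_with_friction("spicy meme", confirm=lambda prompt: True)   # publishes
share_with_friction("spicy meme", confirm=lambda prompt: False)  # does not
```

The design point is that nothing is blocked; the pause itself is the intervention.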
 
To anyone who wasn't born yesterday, this is a Trojan horse, very typical of a lot of the darker personalities out there, or at best of what the Soviets used to call a "useful idiot": someone who paves the way for totalitarianism to take hold. Those who do not learn from history are doomed to repeat it, and many these days not only fail to learn from it but actively wish to repeat the mistakes of the past under new circumstances, which may doom many the world over. The next time around it will be billions instead of millions who pay the price, while those responsible either get off easy or never face justice for their actions.

I'm sorry, what? Social media has already been used to undermine democracy and to wage war on minorities and foreign powers.
 
@wolly.green

That you would invalidate and condescend from the content of a single post, and ask nothing of me so as to clarify, speaks to your passion and zeal, and willingness to run roughshod in pursuit of an ideal.

After all, I have my reasons, I have my stories that inform, and they were, and remain, unknown to you.

Know that you are changing minds, but perhaps not in the manner you might wish to.

Regards,
Ian
 

I didn't reply to you. And quite deliberately. Replying to you would have taken far too long.
 
I didn't reply to you. And quite deliberately. Replying to you would have taken far too long.

You did reply to me, and then you deleted it. If you would like, I can quote exactly what you said.

Please don’t continue to misrepresent the truth of your actions.

Cheers,
Ian
 
@wolly.green


Have you read any books or heard any lectures by Jaron Lanier? His book "Ten Arguments for Deleting Your Social Media Accounts Right Now" is pretty good. There is some political talk in it, and he has a left-leaning perspective, but I don't think it biases what he is saying. He remains neutral on the facts.

There's also another book, "Alone Together: Why We Expect More from Technology and Less from Each Other", which talks a lot about how the advent of the internet and social media keeps people behind screens, and how it is easier to be vicious online than face to face, where you have the cues of a person's face.

I also have a blog in the blog section that I post a lot about technology and how technology is negatively impacting some social aspects of society.

Overall I feel the issue is similar to cars; that's the metaphor I like to use. Seatbelts weren't standardized in cars until 1968, yet the first popular car, the Model T, came out in 1908. When cars were first invented they were pretty slow, so seatbelts probably weren't necessary. But the technology improved very quickly, while the safety features weren't implemented until long after cars had been going fast. The same thing is happening with social media. There's a tipping point where enough people are educated on a subject that safety features get implemented. In the case of social media, awareness of overuse might itself be a safety feature: you can now program your phone to alert you when you've spent a certain amount of time on it, and to block access to apps.

These changes *must* be consumer driven though. We have to teach ourselves how to use social media responsibly.

Lanier's idea of boycotting the social media sites comes from his opinion that economic pressure is the only way to really get the companies to change. He also proposes an alternative to the "freemium" model, in which we get free access to websites by letting them take and sell our data. Instead, we would create union-type organizations that store our data for us; we would negotiate with them about whom to sell it to and for how much, and we would directly receive the profit from our data. It would probably be pennies, but the point is to get our data back under our control. It might not be possible to stop our data from being collected or sold, but we can take ownership of it.
 
You did reply to me, and then you deleted it. If you would like, I can quote exactly what you said.

Please don’t continue to misrepresent the truth of your actions.

Okay look here mate, that was never meant to be sent. Yes I started, but gave up because of the effort.

Also, I know you're an admin, but threatening me is so uncouth.
 

Yup I read his book. It's okay, but not nearly as detailed as it could have been.

Some of his references are extremely insightful, though. For example, have you ever listened to the podcast 'Your Undivided Attention'? They bring on experts from around the world to talk about different aspects of social media and the internet. I found it more insightful than Lanier's book.

Lanier also rarely talks about how influential technologies actually work. Which is fine, I suppose, if you're only interested in their consequences and not the details of why those consequences emerge in the first place.
 
I liked it :)! Shame you didn't. To each their own, I guess.
 
Okay look here mate, that was never meant to be sent. Yes I started, but gave up because of the effort.

Thank you for owning what is yours.

Also, I know you're an admin, but threatening me is so uncouth.

I requested something of you, and I said please. ¯\_(ツ)_/¯

Whatever,
Ian
 
I think the most interesting thing is that people assume a drastic correlation between online and in-person life. Would we have half the conversations we have online in real life? Probably not. People take themselves too seriously online; they act like it means something. If you're taking the game too seriously, there's an off button on your phone or computer. The more dangerous thing seems to be taking the online persona seriously. Sure, we have some great conversations on here, but does it matter "who" had them? You still experienced them, so just don't take it so seriously. It's only big business that takes it seriously, because there's money to be made and people to be manipulated. So, along the lines of @Roses In The Vineyard 's view, why bother censoring them? Just don't let your kid use social media until they're 14 or so. Simple solution.