
Solution proposal for the Facebook propaganda problem

I've heard of Reddit. Followed a link to what has been posted there. That's about it as far as knowing anything about it.

The gang censorship thing would be a problem, and would make the idea useless. It would simply result in ideological censorship.

Look how people "gang tackle" people on this site. If the idea was applied here, the only people on this site would be liberals.

The slippery slope of censorship is too dangerous to enter in to.

The fact we're already on that slippery slope should be a great concern to everyone.

What happens when that censorship does a 180, and comes after people who share thoughts and ideas that used to be ok, but are now considered unacceptable?

What then?

I can go into the mechanics of it if you are open to that. I think the system effectively combats that concern (but there is always a better design possible). Reddit provides some examples of why, both what they get mostly right and what they get wrong on the particular issue you raised. However, essentially what you are raising is a solved issue.
 
The bot idea seems interesting. Free speech is a human construct. A bot isn't human.

The rest I can't agree with. It opens the door to gang censorship.

At the end of the day it's all about taking personal responsibility, and applying objective thinking.

Bots are a human-controlled and human-created construct.
 
I can go into the mechanics of it if you are open to that. I think the system effectively combats that concern (but there is always a better design possible). Reddit provides some examples of why, both what they get mostly right and what they get wrong on the particular issue you raised. However, essentially what you are raising is a solved issue.

Doesn't Reddit use a thumbs-up, or some kind of rating system like that? I could be wrong, but that comes to mind. Again, I don't know much about Reddit.
 
Bots are a human-controlled and human-created construct.

I guess. I don't know much about bots and how they work. Aren't bots a software-based thing that performs automated tasks?

I imagine it to be like a mass mailing, when it comes to posts.

That isn't really human.

Everything has a human touch after all. But live and interactive is different from what a bot would do, don't you think?
 
Doesn't Reddit use a thumbs-up, or some kind of rating system like that? I could be wrong, but that comes to mind. Again, I don't know much about Reddit.

Pretty much, but they also tag things by category, whether system or user tags, so people can get pretty precise and sophisticated if they are so inclined. From a censorship perspective, this puts people in charge, which is what we do already; who doesn't walk away from the crazy guy with the placard, for example?

The flaw in Reddit is that there is a strong content wall. If you downvote something enough, you pretty much stop seeing it. I would weaken that wall so you always see content that you may disagree with but that is strongly agreed on by other groups.

So there are two systems in place here:
1. You can define your own tags and voting, so if you want unpopular content, vote for those tags; and since users (not the system) are defining the tags, there will always be some new tag to find.
2. Content leakage, which breaks echo chambers (a rough sketch of both systems follows below).
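
Roughly, in Python, the two systems together might look like this. This is just a sketch under my own assumptions; the weight formula and the 5% floor are invented for illustration, not any real platform's behavior:

```python
# Toy model of user-defined tag voting plus "content leakage".
# Every name and number here is illustrative, not a real platform's API.
from collections import defaultdict

LEAK_FLOOR = 0.05  # even a heavily downvoted tag keeps 5% visibility

class Feed:
    def __init__(self):
        # per-tag score; the tags themselves are created by users, not the system
        self.tag_scores = defaultdict(float)

    def vote(self, tag, delta):
        """A user upvotes (+1) or downvotes (-1) a tag."""
        self.tag_scores[tag] += delta

    def weight(self, tag):
        """Turn votes into a display weight that never drops below the
        leak floor, so content other groups strongly agree on still shows."""
        raw = 1.0 + 0.1 * self.tag_scores[tag]
        return max(raw, LEAK_FLOOR)

feed = Feed()
feed.vote("flat-earth", -30)       # buried by downvotes...
print(feed.weight("flat-earth"))   # ...but clamped at 0.05, never zero
```

The design choice that matters is the clamp in weight(): downvotes can bury a tag, but they can never remove it entirely.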
 
I guess. I don't know much about bots and how they work. Aren't bots a software-based thing that performs automated tasks?

I imagine it to be like a mass mailing, when it comes to posts.

That isn't really human.

Everything has a human touch after all. But live and interactive is different from what a bot would do, don't you think?

In this case, the bot would find the users, write the content, proofread it, and then mail it out without human intervention. They have become quite sophisticated.

This A.I. Bot Can Convincingly "Write" Entire Articles. It's So Dangerously Good, the Creators Are Scared to Release It | Inc.com
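
Roughly, the pipeline looks like this (a Python sketch where every function is just a stand-in, not a real service or library):

```python
# Hypothetical stub of the fully automated pipeline described above.
# Every function is a stand-in; none of this is a real API.

def find_target_users(topic):
    """Stand-in for scraping or searching for accounts discussing a topic."""
    return ["user_a", "user_b"]

def write_article(topic):
    """Stand-in for a text generator of the class the linked article covers."""
    return f"A persuasive-sounding post about {topic}."

def proofread(text):
    """Stand-in for an automated grammar/style pass; still no human involved."""
    return text.strip()

def send(user, text):
    """Stand-in for posting or mass-mailing the result."""
    print(f"to {user}: {text}")

# The loop runs end to end with no human in it.
for user in find_target_users("the election"):
    send(user, proofread(write_article("the election")))
```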
 
I guess. I don't know much about bots and how they work. Aren't bots a software-based thing that performs automated tasks?

I imagine it to be like a mass mailing, when it comes to posts.

That isn't really human.

Everything has a human touch after all. But live and interactive is different from what a bot would do, don't you think?

You aren't wrong in your assessment, but you are excluding the problem that what those bots hunt for, and where they focus that hunt, are determined by the human who creates the program. They are just a tool like any other software, designed by a human to complete a specific task under the guidelines that human puts into them.

You are automating a task, not removing human control of it.
 
You are still not addressing my point - which is simply that once FB starts editing (or censoring) their users' posts for ideological, "factual," or socially acceptable content, then they have, in fact, become simply another publisher and thus should no longer enjoy government protection as an open platform.

You are still not following my several points on this topic. No matter. You are free to believe whatever you like, such as that the Russians, AntiFA, White Supremacists, or whoever can post whatever they like on FB or any other social media platform as a “right”.
 
In this case, the bot would find the users, write the content, proofread it, and then mail it out without human intervention. They have become quite sophisticated.

This A.I. Bot Can Convincingly "Write" Entire Articles. It's So Dangerously Good, the Creators Are Scared to Release It | Inc.com

Forgive that I don't have links handy, but a Google search will give you some disturbing information -

Bots at Amazon denied women applicants; at other companies they have been shown to be racist; at others they simply filtered negatives from the candidates they thought would fit the company's needs. They were and always will be flawed, as the guidelines are provided by humans.
 
So that would mean we would still be subject to flat-earth or anti-vaxxer information sometimes, but reasonable people can see that for what it is.

I'm not going to wade through 11 pages of posts to see if this has been addressed, though I highly suspect it hasn't.

But tell me...if you think reasonable people, subjected to flat-earth or anti-vaxxer information, will be able to see it for what it is, why don't you think reasonable people will be able to see political manipulation for what it is? Is there something different about political nonsense that makes it impossible to resist, while flat-earth nonsense is no problem for them to deal with?

But tell me...who do you propose we use to make companies like FB, et al, implement these solutions? These are private companies. Do you want Congress to pass laws to control them? If so, I'm thinking the 1st Amendment will cause those laws to be tossed in the trash.

Amendment I

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances.
 
Forgive that I don't have links handy, but a Google search will give you some disturbing information -

Bots at Amazon denied women applicants; at other companies they have been shown to be racist; at others they simply filtered negatives from the candidates they thought would fit the company's needs. They were and always will be flawed, as the guidelines are provided by humans.

Exactly. Bots are only as good as their training data, and once they are sufficiently trained (by whatever evaluation process) and doing their thing, they are a force amplifier. One of the problems I am concerned about is that the human brain (to put it in computer terms) has only so much I/O (eyes, ears) and processing power, and it is increasingly unable to compete against computer networks that can scale much higher and are also self-modifying (where the human brain is kind of stuck in the evolutionary sense, taking much longer to update).

We will be outcompeted if we are not sensible, and this moves so fast that we need to think about this kind of stuff today.
 
Pretty much, but they also tag things by category, whether system or user tags, so people can get pretty precise and sophisticated if they are so inclined. From a censorship perspective, this puts people in charge, which is what we do already; who doesn't walk away from the crazy guy with the placard, for example?

The flaw in Reddit is that there is a strong content wall. If you downvote something enough, you pretty much stop seeing it. I would weaken that wall so you always see content that you may disagree with but that is strongly agreed on by other groups.

So there are two systems in place here:
1. You can define your own tags and voting, so if you want unpopular content, vote for those tags; and since users (not the system) are defining the tags, there will always be some new tag to find.
2. Content leakage, which breaks echo chambers.

Isn't this just as susceptible to gang censorship as any other "popular vote" methodology?
 
I'm not going to wade through 11 pages of posts to see if this has been addressed, though I highly suspect it hasn't.

But tell me...if you think reasonable people, subjected to flat-earth or anti-vaxxer information, will be able to see it for what it is, why don't you think reasonable people will be able to see political manipulation for what it is? Is there something different about political nonsense that makes it impossible to resist, while flat-earth nonsense is no problem for them to deal with?

But tell me...who do you propose we use to make companies like FB, et al, implement these solutions? These are private companies. Do you want Congress to pass laws to control them? If so, I'm thinking the 1st Amendment will cause those laws to be tossed in the trash.

Amendment I

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances.

Keridan had a great discussion about possible court interpretation. Would you like to add to that?
 
Isn't this just as susceptible to gang censorship as any other "popular vote" methodology?

No, because tagging can potentially be atomized down to individual articles/posts having individual single-use tags (if people want to go that far; I doubt they will in most cases unless they want to remember Karen's recipe for next year's party or something else personally useful). So if people are interested in ever more obscure topics, or subdivisions of more popular topics, they can find them, thus bypassing echo chambers (or gangs) by looking for the content they want at whatever level of detail they desire.
 
Here is my proposal to combat the Facebook propaganda issue (and that of other social media platforms, to some extent). The only portion of this that the platform should control is bot detection. The reason for the bot detection is that free speech should apply to humans and not software. In my opinion at least, the First Amendment is sacrosanct, but it's also meant for people, and if we allow software to participate, that weakens it, as software has no moral instincts or need for prosocial behaviors. Because of that, bot detection is the only limitation.

The rest of the proposal relies on known systems already in place on the internet, so we can be sure they work well. I do propose one tweak (2d): nobody can completely turn off any flag, as this will help combat ideological bubbles. The one exception to this would be criminal behavior (for example, NAMBLA). So that would mean we would still be subject to flat-earth or anti-vaxxer information sometimes, but reasonable people can see that for what it is. This would also provide a democratic means to help combat propaganda by state actors (Russia, for example).

One future proposal would be detection algorithms that flag emotionally manipulative language, but machine learning isn't there yet, so right now it's off the table.

My proposal would be:
1. Work on systems to reliably detect people vs. bots (being worked on heavily right now) and flag the bots.
2. Set up a reputation system (this historically works well for internet sites) that is run entirely by users of whatever platform.
2a. Set up a hashtag system to work with that reputation system; people will (if they act like they do on other platforms, which they likely will) standardize on key phrases on their own.
2b. Users vote on those key phrases that people eventually settle on.
2c. Users can set their preferences based on those key phrases in a control panel (I want to see less Apple news and more Android news, or less impeachment news and more cat videos, etc.).
2d. No user can shut out any flag entirely (this is key!), just minimize it. This fights against echo chambers (exception being criminal activity); see the sketch after this list.
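
Roughly, 2c and 2d together might look like this in code. This is only a sketch; the weights, the floor value, and the tag names are all illustrative assumptions, not part of the proposal itself:

```python
# Minimal sketch of points 2c/2d: a per-user control panel of hashtag
# weights where no flag can be reduced to zero. All names and numbers
# are hypothetical.

MIN_VISIBILITY = 0.02        # the 2d floor: minimize, never silence
CRIMINAL_TAGS = {"#nambla"}  # the stated exception: criminal content

class Preferences:
    def __init__(self):
        self.weights = {}  # hashtag -> user-chosen weight in [0.0, 1.0]

    def set_weight(self, tag, value):
        if tag in CRIMINAL_TAGS:
            self.weights[tag] = 0.0  # a hard block is allowed only here
        else:
            # clamp: users can turn a tag down but never entirely off,
            # which is what fights the echo chambers
            self.weights[tag] = max(MIN_VISIBILITY, min(1.0, value))

    def visibility(self, tag):
        return self.weights.get(tag, 0.5)  # unrated tags get a neutral default

prefs = Preferences()
prefs.set_weight("#impeachment", 0.0)    # try to shut it out completely...
print(prefs.visibility("#impeachment"))  # ...clamped to 0.02, still leaks through
```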

I agree with the spirit of what you're trying to accomplish, but I'd target those outlets that bill themselves as news organizations. Facebook is just a message board essentially - it's not pretending to be news. Anything on it should be considered non-news: rumor, opinion, etc.

I'd apply your suggestions to news sites or shows. Keywords would be flagged for opinion, such as "damaging", "bombshell", or other adjectives describing events. For example, "Obama gave his speech tonight" would not be flagged, but if a news organization says "Obama gave a rousing speech tonight", the audience would be notified that this is opinion and not news.
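Something like this toy check could do it; the word list here is only illustrative, and a real system would need a far better classifier:

```python
# Toy version of the keyword idea: sentences containing loaded adjectives
# get flagged as opinion rather than news. The word list is an assumption.

OPINION_WORDS = {"damaging", "bombshell", "rousing", "stunning", "disastrous"}

def classify(sentence):
    words = {w.strip(".,!?\"'").lower() for w in sentence.split()}
    return "OPINION" if words & OPINION_WORDS else "NEWS"

print(classify("Obama gave his speech tonight"))        # -> NEWS
print(classify("Obama gave a rousing speech tonight"))  # -> OPINION
```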

I'd also add a "racial content" disclaimer based on the importance or mention of racial identification in events.

Again, this would be only for those entities that bill themselves as "news".
 
In this case, the bot would find the users, write the content, proofread it, and then mail it out without human intervention. They have become quite sophisticated.

This A.I. Bot Can Convincingly "Write" Entire Articles. It's So Dangerously Good, the Creators Are Scared to Release It | Inc.com

I understand. It's pretty amazing.

I'm of the belief there are AI-based posters on this site, but I have no proof of that.

Like you, I've been around for a while. Political blogging for 12 years or so. You can't help but develop a bit of a sense for things.

The dramatic increase in new posters around election time, as we are seeing on this site, always has me wondering what boiler room, or political group, has sent out its "soldiers".
 
Exactly. Bots are only as good as their training data, and once they are sufficiently trained (by whatever evaluation process) and doing their thing, they are a force amplifier. One of the problems I am concerned about is that the human brain (to put it in computer terms) has only so much I/O (eyes, ears) and processing power, and it is increasingly unable to compete against computer networks that can scale much higher and are also self-modifying (where the human brain is kind of stuck in the evolutionary sense, taking much longer to update).

We will be outcompeted if we are not sensible, and this moves so fast that we need to think about this kind of stuff today.

This derails into a whole different topic and, as someone who writes software and has researched AI extensively, I don't mind going there. But the core problem with any AI is that it only extends the human logic that created it, unless left to random algorithms that pretty much always end in self-destruction.

I might suggest a different thread if we are going to discuss the long-term eventualities of the growth of computer intelligence and its dangers.

For the current topic, I'm sticking within current tech, which doesn't allow for escaping human control/design and therefore ends up back in the area of speech controlled by a governing body.
 
This derails into a whole different topic and, as someone who writes software and has researched AI extensively, I don't mind going there. But the core problem with any AI is that it only extends the human logic that created it, unless left to random algorithms that pretty much always end in self-destruction.

I might suggest a different thread if we are going to discuss the long-term eventualities of the growth of computer intelligence and its dangers.

For the current topic, I'm sticking within current tech, which doesn't allow for escaping human control/design and therefore ends up back in the area of speech controlled by a governing body.

I agree, it's another (and probably very interesting!) thread. However, weird things can happen even within a well-thought-out design (remember Microsoft's Nazi bot?).
 
I understand. It's pretty amazing.

I'm of the belief there are AI-based posters on this site, but I have no proof of that.

Like you, I've been around for a while. Political blogging for 12 years or so. You can't help but develop a bit of a sense for things.

The dramatic increase in new posters around election time, as we are seeing on this site, always has me wondering what boiler room, or political group, has sent out its "soldiers".

Forgive the snark, but you would have to define the artificial part of the intelligence for me to agree here ;)
 
You aren't wrong in your assessment, but you are excluding the problem that what those bots hunt for, and where they focus that hunt, are determined by the human who creates the program. They are just a tool like any other software, designed by a human to complete a specific task under the guidelines that human puts into them.

You are automating a task, not removing human control of it.

Ok. So I'm not far off.

But I'm seeing a difference between an automated post and one that is the result of a human online and live.

It's like the difference between something you get in the mail from a mass mailing and a personal letter.

Perhaps I don't understand how a bot can hide its origin, versus an actual live poster.
 
I understand. It's pretty amazing.

I'm of the belief there are AI-based posters on this site, but I have no proof of that.

Like you, I've been around for a while. Political blogging for 12 years or so. You can't help but develop a bit of a sense for things.

The dramatic increase in new posters around election time, as we are seeing on this site, always has me wondering what boiler room, or political group, has sent out its "soldiers".

With the current level of tech (that I am aware of), they are probably still sweatshop workers right now.
 
I agree, it's another (and probably very interesting!) thread. However, weird things can happen even within a well-thought-out design (remember Microsoft's Nazi bot?).

You aren't wrong at all; some of the cutting-edge stuff is amazing, and I love it when I read about a new way of approaching this stuff (and fear it a bit, but that's different).

I just don't think we are ready to give up any choices on speech to it yet.
 
I agree with the spirit of what you're trying to accomplish, but I'd target those outlets that bill themselves as news organizations. Facebook is just a message board essentially - it's not pretending to be news. Anything on it should be considered non-news: rumor, opinion, etc.

I'd apply your suggestions to news sites or shows. Keywords would be flagged for opinion, such as "damaging", "bombshell", or other adjectives describing events. For example, "Obama gave his speech tonight" would not be flagged, but if a news organization says "Obama gave a rousing speech tonight", the audience would be notified that this is opinion and not news.

I'd also add a "racial content" disclaimer based on the importance or mention of racial identification in events.

Again, this would be only for those entities that bill themselves as "news".

Pre-tagging makes me uncomfortable, and it goes into censorship, in my opinion. People will make it a hobby to jump on those tags as soon as they are published anyway. We can see how this works by looking at Wikipedia's logs, and as with Wikipedia, there are easy and nonintrusive ways to stabilize that as well.
 
You are still not following my several points on this topic. No matter. You are free to believe whatever you like, such as that the Russians, AntiFA, White Supremacists, or whoever can post whatever they like on FB or any other social media platform as a “right”.

That is precisely what freedom of speech is about. Obviously, "popular" speech requires no special protection. But once the government extends legal protection (immunity from civil legal actions) to FB, allowing it to censor (edit) content while enjoying special government protection based on FB's own decisions about what is "factual" speech, then FB should accept legal responsibility for what FB is, in fact, publishing as "fact". Once we have a "ministry of truth" (even if in private hands) which is immune from civil legal action for what it claims to be "factual" (by virtue of allegedly having removed "non-factual" content), then we have allowed FB to replace truth with its own version of it, under federal legal protection.
 
Ok. So I'm not far off.

But I'm seeing a difference between an automated post and one that is the result of a human online and live.

It's like the difference between something you get in the mail from a mass mailing and a personal letter.

Perhaps I don't understand how a bot can hide its origin, versus an actual live poster.

I think maybe I was misreading your original direction; a bot is certainly capable of hunting other bots.
 