How could it not be the fault of Facebook when Facebook designed the algorithms that are creating all of the divisiveness on Facebook?

If I build a bridge intending it to stay up and it happens to fall down 6 months later, I'm responsible for it. Facebook created an algorithm that divides people politically and that surfaces content that is provably fictional. So they should be held responsible for it regardless of their intent. They don't get to invoke "common carrier" status when they're writing software that makes decisions about what you do or don't see. What makes a telephone a "common carrier" is the fact that the telephone doesn't decide who you call.

It doesn't matter whether it's software or a human. What matters is that decisions are being made by Facebook about what you do or don't see.

Whether or not it is intentional is immaterial to the effect. The law doesn't care about your intent. I wouldn't intentionally dump toxic waste into a river, but I'm liable for the dumping whether I intended it or not. Mark Zuckerberg can't just throw up his hands and go "oops, it's software, I can't help it" when it's his company that made all of the decisions about how the software works.



> The law doesn't care about your intent.

This isn't correct. The law in most modern democracies, as far as I'm aware, is very concerned with intent.

This is why we generally define murder and manslaughter as distinct.

Murder is the unlawful killing of another human without justification or valid excuse, especially the unlawful killing of another human with malice aforethought.

https://en.wikipedia.org/wiki/Murder

Manslaughter is a common law legal term for homicide considered by law as less culpable than murder.

https://en.wikipedia.org/wiki/Manslaughter

Murder vs manslaughter is the extreme example, though you'll find courts are broadly quite concerned with intent.


Truth is not the point of social media. Facebook isn't an encyclopaedia. Humans already gravitate towards groups that validate their opinions. It makes sense for Facebook to show people content they want to see. It is incredibly Orwellian to say "Facebook should only show people content which corrects their wrong views". Facebook isn't a social conditioning tool. I find the proposed cure for "misinformation" much scarier than the misinformation itself. Misinformation and being mistaken are human flaws we will always have, and therefore any social group will have them by default. Using technology as a tool to condition people out of their views against their will is scary.

The information is out there. There are reliable news sources. There are reliable databases and encyclopaedias and journalism. If people choose not to read them then that's on them.


>It makes sense for Facebook to show people content they want to see.

The problem is, Facebook doesn’t show people content that they want to see. They show people content that they will engage with. That’s a very important distinction.
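To make the distinction concrete, here's a toy sketch (hypothetical functions and field names, nothing to do with Facebook's real models): the same posts, sorted by two different objectives.

  # Hypothetical sketch, not Facebook's actual code. The point is
  # only that the objective being maximized differs.

  def rank_by_stated_preference(posts, followed_topics):
      # "What the user asked to see": posts from explicit follows,
      # newest first.
      return sorted(
          (p for p in posts if p["topic"] in followed_topics),
          key=lambda p: p["timestamp"],
          reverse=True,
      )

  def rank_by_predicted_engagement(posts, predict_engagement):
      # "What the user will react to": clicks, comments and angry
      # faces all count as positive signal, whether or not the user
      # is glad they saw the post afterwards.
      return sorted(posts, key=predict_engagement, reverse=True)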

The HN algorithm and moderators explicitly do the opposite: if a thread gets too many comments too quickly, it's ranked downward. The assumption is that too many comments too quickly indicates a flamewar, and the HN moderators want to keep discussion civil. The approach Facebook takes is to "foster active discussion", which on the Internet typically means a flamewar. Nothing generates engagement like controversial political views. So that's what Facebook's algorithm/moderators show to their users.
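A rough sketch of the HN downranking described above (the gravity formula is the widely cited one, but the flamewar thresholds are invented; none of this is HN's actual source):

  # Sketch of an "overheated thread" penalty: a story accumulating
  # comments much faster than its upvotes would predict is treated
  # as a likely flamewar and pushed down the front page.

  def flamewar_penalty(upvotes, comments, age_hours):
      comment_rate = comments / max(age_hours, 0.1)  # comments per hour
      heat = comments / max(upvotes, 1)              # comments per upvote
      if comment_rate > 20 and heat > 2.0:           # invented thresholds
          return 0.2  # demote sharply
      return 1.0

  def rank_score(upvotes, comments, age_hours):
      # Classic HN-style gravity: score decays as the story ages.
      base = (upvotes - 1) / (age_hours + 2) ** 1.8
      return base * flamewar_penalty(upvotes, comments, age_hours)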

Facebook absolutely is a social conditioning tool: it's designed from the ground up to show people content that stirs their emotions enough to click "like" or the mad-face icon, or even to leave a comment and wait around until someone replies.


My point is that this is what happens in real life. People will continue to engage with the stuff they want to engage with. Facebook doesn't force people to engage with anything.

I think it is far worse to attempt to condition people by showing them things *they wouldn't otherwise engage with*. What is scarier: showing somebody something they want to engage in based on their past behaviour, or showing somebody something they wouldn't have otherwise seen if you hadn't gone out of your way to shove it down their throat?

People seem to want Facebook to make people more placid. Oh, you have extreme views? Here, let's condition that out of you by only showing you more moderate stuff. Oh, you think x is bad? Let's not show you anything to do with x so that you'll hopefully forget about it and not engage that part of your brain any more.

Like I've already said, this alternative is far more Orwellian and far more of a tool for social control than simply optimising for engagement.


> I think it is far worse to attempt to condition people by showing them things they wouldn't otherwise engage with. What is scarier, showing somebody something they want to engage in based on their past behaviour, or showing somebody something they wouldn't have otherwise seen if you hadn't gone out of your way to shove it down their throat?

I don't think that makes sense, and I don't think that's what anyone's advocating for.

If you friend someone, or follow a page, or whatever, you are explicitly saying "I want to hear what this person/group has to say". You aren't saying "I want FB to carefully curate what this person/group says in order to increase my engagement on FB". FB shouldn't promote, hide, or reorder anything coming from someone I've explicitly chosen to follow. It should just show me all of it and let me decide what I do and don't want to see.
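Mechanically, what's being asked for here is trivial; a minimal sketch (hypothetical field names, not Facebook's API):

  # Everything from explicit follows, newest first: no promotion,
  # no hiding, no reordering.

  def unfiltered_feed(posts, followed_ids):
      return sorted(
          (p for p in posts if p["author_id"] in followed_ids),
          key=lambda p: p["timestamp"],
          reverse=True,
      )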


That's no distinction at all: what people engage with is just one effective way of measuring what people want to see. HN simply optimizes for something else, which is no less of a social conditioning tool than optimizing for engagement, just conditioning in a different direction. You could equally say HN is designed from the ground up to show people content that stirs their curiosity enough to comment cautiously, or to hide content that stirs their emotions enough to engage strongly.


Facebook has a fact-checking program. That program has third-party fact-checkers. Facebook has been documented as pressuring those third-party fact-checkers to change their rulings.

  https://www.facebook.com/business/help/2593586717571940?id=673052479947730
  https://www.fastcompany.com/90538655/facebook-is-quietly-pressuring-its-independent-fact-checkers-to-change-their-rulings
They can't claim that they checked facts, remove postings they believe are incorrect, and then quietly pressure the fact-checkers into a different "opinion" as to what is "factual".


IMHO we as a society and species are still figuring out mass communication.

Propaganda, misinformation, and deception have always been human issues; mass media magnifies them as it does everything else.

And I think we all have the right to critique social media, just like we can critique the news, books, movies, etc.

We don’t have to agree but the discussion is a valid one to have!

We get to help shape our society and world, after all :).



