Following the conviction of Derek Chauvin for the murder of George Floyd in Minneapolis, four-time NBA champion LeBron James tweeted a picture of a Columbus police officer involved in the fatal shooting of 16-year-old Ma’Khia Bryant with the caption “YOU’RE NEXT #ACCOUNTABILITY.”
Many quickly accused the basketball star of using Twitter to incite violence. The tweet, which has since been deleted, arrived just days after Rep. Maxine Waters (D-Calif.) called on protesters to “stay on the street” and even become “more confrontational” if Chauvin were acquitted.
This led to a firestorm on social media, with no shortage of comments that misunderstood the legal issues at play.
First Amendment and social media
Many have tried to turn this into a “First Amendment” issue because James, Waters and others have a right to express their opinions. However, that would be a misreading of the Constitution. The First Amendment actually concerns government restrictions on speech and whether the government can censor it.
“The First Amendment restricts only government regulation or punishment of expression. But it does not apply only to government-directed speech; the First Amendment protects expression generally unless the speech falls into one of the very narrow categories of unprotected speech,” explained Ari Cohn, a Chicago-based attorney specializing in First Amendment issues.
“Private companies, including social media platforms, are not limited by the First Amendment,” Cohn added. “On the contrary, they are protected by it. They have their own First Amendment right to determine which speech they want to allow on their private property, much like you or I have the right to kick someone out of our house for saying something we don’t like.”
It is also important to recognize from the outset that social media companies are private actors; in addition to not being bound by the First Amendment, they can set and enforce whatever policies they deem appropriate.
“A good example of this is, of course, Twitter’s January 2021 decision to ban President Donald Trump,” said Bob Jarvis, a lawyer and law professor at Nova Southeastern University.
“However, at the same time, social media companies (SMCs) are protected by Section 230 of the Communications Decency Act,” Jarvis added. “This 1996 federal law protects websites from liability for content that their users create, publish, and comment on. The premise of Section 230 is that SMCs merely provide platforms for users to express their thoughts, and that SMCs neither review nor endorse such content or comments. Thus, although SMCs are not barred from acting by the First Amendment, Section 230 gives them an incentive not to act.”
The rules of the old media still apply
Another consideration is that many of the rules governing traditional or “old” media still largely apply to social media, even if the content is generated differently.
“Social media companies are private entities, like newspapers, except that they don’t provide their own content,” noted James R. Bailey, a professor of leadership at the George Washington University School of Business.
“Other people provide that content,” Bailey explained. “Newspapers can make a decision to publish something or not. But social media is different: because these companies don’t create their own content, they are in the unfortunate position of having to police someone else’s content, then jump in and say whether it’s appropriate or not.”
Some users have cried “censorship” when a post or photo is removed, or when more extreme measures are taken, such as being banned from the service. However, this is still not a First Amendment issue, as these are rules imposed by private companies.
“We’ve already seen what they allow and what they don’t allow. Facebook and Twitter shut out the last president and have banned several other people, and that is essentially controlling content,” Bailey added.
In addition, it could be argued that social media companies, like any other business, have an ethical duty to operate responsibly. That could include, for example, not allowing someone’s personal information to be disclosed, and responding appropriately when such content appears.
“An integral part of that ethical duty is to avoid predictable harm to others,” suggested Robert Foehl, executive director of business law and ethics in Ohio University’s online Master of Business Administration program.
“If an individual or group uses a social media platform to encourage acts of violence against another person or someone else’s property, then the social media company has an ethical obligation to take corrective action and remove the published content so that the resulting harm is minimized,” Foehl added. “But this reactive response is not enough. The social media company has an ethical duty to take proactive measures to prevent such content from being published at all, thus avoiding predictable harm.”
Incitement to violence
Although one could debate at length whether Rep. Waters or Mr. James intended to incite violence, there have been cases where some have certainly used social platforms to do so. In such cases, social media companies should certainly take responsibility, as they have done in the past.
“Donald Trump’s recent ban from Twitter, following the January 6 insurrection, showed that these platforms recognize the responsibility they have in moderating and mitigating potentially harmful misinformation,” explained Jui Ramaprasad, a professor in the Department of Decision, Operations and Information Technologies at the University of Maryland’s Robert H. Smith School of Business.
“While banning users is one option, platforms are also starting to think – and should think – about how information and misinformation spread and what content should be privileged,” Ramaprasad added. “When algorithms are behind this process of determining what we see in our feeds – no matter what platform we are on – it’s not clear whether ‘true’ information is privileged over false.”
However, social media differs from other forms of mass media in how quickly communication can spread. A single tweet can go viral in minutes. Even if the original poster deletes it, it can take on a life of its own – as many celebrities have discovered when they were too quick with their thumbs.
“For example, it is very unlikely that a newspaper article could constitute incitement,” Cohn said. “It’s hard to imagine an article that would cause people to immediately put the newspaper down and go out and commit illegal acts. Social media adds an interesting wrinkle given the real-time communication that occurs. Can a tweet be incitement? It’s much easier to imagine the circumstances in which it theoretically could than in the case of a newspaper article.”
It is therefore easy to see why social media platforms have had to muffle some voices, and why these companies have had to grapple with content that could be seen as even hinting at incitement to violence.
“Social media platforms that remove content that calls for violence are absolutely within their rights, and there is certainly a strong moral argument that they should actually do so,” Cohn added. “I think you would be hard-pressed to find many people who think platforms should allow serious threats of violence (as opposed to rhetorical hyperbole) to be published without moderation.”
How social media addresses these issues will remain a challenge, especially since the technology is so new and continues to evolve.
“Social media is similar to mass media, but they are a different beast nonetheless,” said Dr. Matthew J. Schmidt, associate professor of national security and political science at the University of New Haven.
“We’re still figuring out how social media should handle it when people post content that upsets others, and we get it mixed up,” Schmidt added. “We’re not exactly digital natives. Maybe the kids who are coming of age now, who were born into the world of social media, will understand this best. We built it, but we’re still outsiders. They’ll live to see how such content can be properly moderated.”