
Year in Review Part IV

February 12, 2019

As 2018 lies firmly behind us, CDEP Program Director Bastiaan Vanacker takes a look at some of the major digital ethics and policy issues of the past year that will shape the debate in 2019. The first three installments of this overview can be found here, here, and here.

October: A Supreme Court Case Rankles Silicon Valley

As social media platforms banned several controversial users last summer, critics invoked the First Amendment to label this move as an attack on freedom of speech. Legal observers were quick to point out the flaw in this censorship argument: social networks are privately owned enterprises, free to decide what speech to ban from their digital spaces. However, last October the Supreme Court agreed to hear a case that could upend this analysis.

At first sight, the case does not seem to have anything to do with social media, as it revolves around the question of whether public access television networks should be considered state actors. The case stemmed from a pair of videographers who claimed that they had been barred from a public-access TV network because it disapproved of the content of their program. While such actions would be perfectly legal if done by private actors, under the First Amendment government actors may not restrict speech on that basis.

The network in question is privately owned under a city licensing agreement. Reversing a lower court, the U.S. Court of Appeals for the Second Circuit ruled that privately owned public access networks are public forums, as they are closely connected to government authority. As a result, owners of these forums are considered government actors bound by the First Amendment when regulating speech, the court ruled.

It is quite possible that the Supreme Court will issue a narrow ruling that applies only to the peculiar ownership structure of public access TV. However, if the Court were to uphold the decision in a broader ruling that treats these types of networks as public forums, the consequences for social media companies' ability to regulate speech on their networks could be significant. In that case, only speech that runs afoul of the First Amendment could be removed from their networks, and the government could at the same time dictate certain rules for content moderation. While the chances of the Supreme Court issuing a ruling broad enough to have this consequence seem slim, the mere possibility is sure to make this case one that will be closely watched.

November: Neo-Nazi Gets Sued for Online Hate Campaign

What is the difference between incitement and a threat? At first glance, the answer seems straightforward. Incitement requires that a speaker tells other people to engage in an illegal action, while a threat requires that a speaker credibly communicates to an individual an intent to harm that individual.

Neither type of speech is protected. Incitement is illegal because it leads to illegal actions that can harm someone's safety. Threats are illegal because they put people in fear for their physical safety (which is why there is no requirement that the sender intends to execute the threat). But on the Internet this distinction is not always clear. What if a person posts the home addresses of abortion clinic staff online? Does it matter if the poster claims to have just wanted to act as a record keeper?

Or what if an extremist website posts a message directed at the creators of a television show, after they created an episode mocking the prophet Muhammad? Does it matter that they claim they only wanted to "warn" them, if their message is accompanied by a picture of a slain filmmaker, killed by an extremist after being accused of mocking Islam?

In those instances, the messages are a mixture of threat and incitement. They are threats because they put the intended targets in fear of their lives, but at the same time, the senders do not communicate any intention of their own to commit an act of violence. They merely suggest that others might, or should, commit these acts, rendering them more incitement than threat.

However, ever since Brandenburg v. Ohio (1969), the standard that must be met to establish incitement is that the illegal action advocated in the speech is "directed to inciting or producing imminent lawless action" and "likely to incite or produce such action." This is a high bar to clear, particularly for mediated Internet speech, where speakers are rarely in close proximity to one another and where there is often a time lapse between the sending and reception of a message. It is therefore unlikely for an online statement to meet the definition of incitement.

Consequently, speech that appears to be online incitement is often treated as a threat or intimidation. Take for example Tanya Gersh, a Jewish woman from Whitefish, MT who found herself in the crosshairs of a "troll storm" by the neo-Nazi site the Daily Stormer. She had drawn the ire of its founder, Andrew Anglin, after the mother of white nationalist Richard Spencer accused Gersh of strong-arming her into selling her property in Whitefish because of her son's radical politics.

Through the Daily Stormer, Anglin called on his followers to contact Gersh and to tell her what they "thought about her," resulting in Gersh and her family being bombarded with vicious anti-Semitic hate messages. Some of these messages clearly constituted illegal threats, but they came from anonymous senders, not from Anglin, who had warned his followers not to engage in threats of violence.

Gersh nevertheless sued Anglin for invasion of privacy, intentional infliction of emotional distress, and violations of Montana's Anti-Intimidation Act. In November, a federal judge declined to dismiss the lawsuit on First Amendment grounds. How the Anti-Intimidation Act (essentially an anti-threat statute) will be applied to this case will provide further guidance on the applicability of anti-threat statutes to these types of online incitement.

December: Tumblr Bans Adult Content

In December, Tumblr's ban on pornography took effect. The ban was prompted by Tumblr's removal from Apple's App Store due to the presence of child pornography on its network. Barring all adult content is presumably more convenient than policing every account containing nudity for the presence of underage subjects. The ban has been criticized because Tumblr was a preferred platform for people interested in less conventional ways of experiencing sexuality, who used it to express themselves and find like-minded souls.

Even though these users might ultimately find a platform and community elsewhere, the issue brought to light yet again the powerlessness of users against the arbitrary content-restricting decisions made by the powers that be in Silicon Valley. Mark Zuckerberg has a suggestion for how to make this process more democratic and transparent: an independent oversight body, in which various stakeholders could be involved in this decision-making process. While this seems more a thought experiment than a concrete plan, the mere suggestion of farming out this crucial decision-making task illustrates how exasperated social media platforms have grown with the damned-if-you-do, damned-if-you-don't reality of censoring online content. It is a dilemma that is unlikely to be resolved anytime soon.


Bastiaan Vanacker's work focuses on media ethics and law and international communication. He has been published in the Journal of Mass Media Ethics. He is the author of  and the editor of .
