Does the Additional Protocol to the Cybercrime Convention mean that hate speech is more unregulable in the USA?



Regulating cyber hate speech is a challenge to legislators, due to the practical limitations of the law.[1] This paper will first demonstrate the necessity of regulation and the ineffectiveness of unilateral and multilateral laws, such as the Additional Protocol to the Cybercrime Convention[2] (Additional Protocol). Examining the First Amendment of the American Constitution (First Amendment), it will then reason that the U.S. exists as a hate speech haven. Lastly, this paper will show that, while the above is true, hate speech is no more unregulable as a result, since a technological approach can provide a solution.

The Harm in Hate Speech

If hate speech is inherently harmful, it should follow that it necessitates regulation. Waldron takes this view, submitting that regulation provides public protection, ensuring dignity, security and assurance to governed citizens.[3] Post concurs, stating that hate speech has the ability to cause harm to society collectively, as well as to individuals.[4] It has been argued that, if left unregulated, hate speech ‘could cause society to lose its civility’.[5] Indeed, the dangers of Internet hate speech were exemplified when Benjamin Smith was influenced to conduct a killing spree after viewing hate content online.[6] However, whilst most countries recognise the need to regulate hate speech,[7] the regulatory power of national governments has faced significant barriers due to the global infrastructure of the Internet.[8]
The Internet’s lack of geographical borders augments the difficulties of regulation. The harms of hate speech are consequently amplified, as the Internet captures a much larger audience than traditional methods of crime.[9] Yet unilateral legal approaches have been ineffective, as jurisdictional differences render laws unenforceable extraterritorially.[10] It is perhaps unsurprising, then, that where multilateral pursuits to address hate speech have come to fruition, they have prima facie attracted praise. Nemes consequently heralded the Additional Protocol as ‘one of the most significant advances’ in regulating hate speech online.[11] According to Rorive, international agreements are ‘a logical way to escape’ the jurisdictional dead-end of the Internet.[12] Why then did Leiter write, eight years after the Additional Protocol’s conception, that the Internet continues to be littered with ‘cess-pools’ of hate speech?[13] The multilateral approach, although theoretically advantageous, is ultimately flawed.

The Ineffectiveness of Law      

This second part will attempt to answer Leiter, showing the inadequacy of both unilateral and multilateral legal regulation. The failings of traditional laws are primarily attributed to the First Amendment, and to the refusal of the U.S. to sign and implement the Additional Protocol.[14] The First Amendment provides that Congress shall make ‘no law…abridging the freedom of speech’.[15] The U.S. thus provides arguably the highest protection to expression, and consequently adopts a liberal approach toward racist speech.[16] The implication of this is that websites blocked through unilateral law can simply re-appear with an Internet Service Provider (ISP) based in the U.S.[17] Hence, unilateral efforts are simply not an effective regulatory solution.[18] As Foxman and Wolf state, ‘like chasing cockroaches, squashing one offending website, page, or service provider does not solve the problem’.[19]
The ineffectiveness of unilateral law was epitomised in LICRA v Yahoo!,[20] where Yahoo! received a French court order to filter offensive content so that it did not reach French citizens. Refusing to enforce the order, U.S. Judge Fogel ruled that ‘this court may not enforce a foreign order that violates the protections of the United States Constitution by chilling protected speech that occurs simultaneously within our borders’.[21] Despite Yahoo! ultimately restricting the offensive content, it is argued that failures to enforce transnational laws have had a trivialising effect on law.[22] Consequently, a unilateral application of local law is perhaps more costly than it is worth.[23] Barendt finds this ‘unacceptable’,[24] and Hopkins concurs, opining that such laws may continue to be ineffective unless the standardisation of crimes is achieved.[25] This is unlikely to become a reality, as it would ‘infringe upon domestic legal regimes and cultures’,[26] as was evident in the Yahoo! litigation. Recent cases such as R v Sheppard[27] demonstrate how the U.S. continues to undermine legal approaches. It is submitted that multilateral laws fall foul of the same problem. As the U.S. has refused to ratify the Additional Protocol, there seems little reason to think it will have a substantive effect on regulating hate speech in future cases. Consequently, further judicial stalemates are likely.[28]

It is argued that the U.S. has promoted the First Amendment as a ‘global speech norm’.[29] In Reno v ACLU, the U.S. Supreme Court ruled that the Internet is ‘entitled to the highest protection from governmental intrusion’.[30] The U.S. therefore treats the Internet uniquely, failing to acknowledge Waldron’s assertions about the harm in hate speech. This protective stance toward expression was exemplified in Bachellar v Maryland,[31] where the Supreme Court ruled that the ‘public expression of ideas may not be prohibited merely because the ideas are themselves offensive to some of their hearers’.[32] A multilateral legal framework would necessitate the approval of the U.S.; otherwise enforcement would most likely fail in the same way as the Yahoo! litigation did. Approval is very unlikely, as Vick highlights the futility of the U.S. signing a multilateral law combating hate speech.[33] The supremacy of the Constitution within the domestic legal hierarchy means the U.S. ‘cannot agree to any treaty provision that would offend the First Amendment’;[34] any such provision would simply be ruled unconstitutional. As limitations on speech are virtually non-existent in the U.S., and without a serious prospect of change, this paper concludes that it is a hate speech haven.

Defenders of free speech have argued that, as the First Amendment is subject to exceptions, it provides some protection from harm.[35] Most relevant is that speech may be exempt from constitutional protection if it constitutes ‘fighting words’,[36] which requires the speech to be so offensive that it provokes immediate violence as a reactionary response. It further requires that the speech is directed at those physically present, who would be induced into violence that an ordinary person could not control.[37] Yet, as Nemes points out, the narrowness of this exception does not reflect the behaviour of hate speech victims, who are more likely to withdraw than to respond aggressively.[38] Furthermore, as authors of hate speech are ‘rarely in the physical presence of someone who might be provoked’,[39] the exception fails to fulfil the captive audience requirement; ‘people are free to leave the vicinity of a computer screen.’[40] Clearly, the exception will seldom apply to the Internet. Even outside of Internet spheres, there has never been a sustained conviction under the fighting words exception.[41] It therefore does little to mitigate the harms that may arise from such highly protected expressive freedom. As the U.S. continues to exercise dominance over global speech regulation, the Additional Protocol appears powerless without its support. Ultimately, the Additional Protocol serves as nothing more than a symbolic public condemnation of bigotry.[42]

A Technical Solution

Although the failures of legal regulation are significant, to say that hate speech is unregulable is possibly inaccurate.[43] Where legal controls alone may be ineffective,[44] a technological approach may be more appropriate. The development of software that filters hate speech might thus be a remedy, enabling states to control the distribution of information depending on its geographical destination.[45] The implementation of such technology could resolve international differences through ‘zoning’.[46] Such a method would see states block extraterritorial material contrary to national laws, using location technology and IP addresses to filter incoming odious material.[47] As opposed to court-ordered website blocking, ‘receiving states’ could effectively filter all undesired content automatically.[48] This would perhaps solve the issue in the Yahoo! dispute, whilst remaining compliant with the legal and cultural standards of the jurisdiction in which content is displayed.
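By way of illustration only, a minimal sketch of the destination-based ‘zoning’ described above might take the following form. Every name, rule and address in the sketch is a hypothetical assumption made for this example; it does not describe any existing filtering system or the orders made in the Yahoo! litigation.

# Illustrative sketch of destination-based ('zoning') filtering.
# The GeoIP lookup, the blocking rules and the addresses below are
# hypothetical placeholders, not a description of any real deployment.

# Hypothetical table mapping a receiving state to the categories of
# content its national law is assumed to prohibit.
NATIONAL_BLOCK_RULES = {
    "FR": {"nazi_memorabilia", "racial_hatred"},   # e.g. the Yahoo! scenario
    "DE": {"holocaust_denial", "racial_hatred"},
    "US": set(),                                    # First Amendment: nothing blocked
}

def lookup_country(ip_address: str) -> str:
    """Placeholder for a geolocation lookup; a real system would query a GeoIP database."""
    demo_table = {"192.0.2.10": "FR", "198.51.100.7": "US"}
    return demo_table.get(ip_address, "UNKNOWN")

def should_block(requester_ip: str, content_labels: set[str]) -> bool:
    """Return True if the receiving state's assumed rules prohibit this content."""
    country = lookup_country(requester_ip)
    prohibited = NATIONAL_BLOCK_RULES.get(country, set())
    return bool(prohibited & content_labels)

if __name__ == "__main__":
    labels = {"nazi_memorabilia"}
    print(should_block("192.0.2.10", labels))    # True: filtered for a French requester
    print(should_block("198.51.100.7", labels))  # False: remains visible in the U.S.

On this sketch, the same material is filtered for a requester located in France but remains visible to a requester in the U.S., which is precisely the jurisdiction-by-jurisdiction outcome the zoning argument envisages.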

However, this approach has attracted criticism: it has been argued that technological solutions are ‘limited’, with the failure of the ‘HateFilter’ software cited as evidence of regulation’s inability to keep up with the evolution of the Internet.[49] Yet Watt argues that, as technologies improve, filtering techniques will become more competent at regulating effectively.[50] The challenge of regulating web 2.0 technologies is significant,[51] yet if such technologies were given time to mature and develop, it is submitted that they would become increasingly effective at regulating undesirable content. For example, YouTube developed a sophisticated pornography filter, eliminating explicit content before it reached users, with a significant rate of success.[52] It seems reasonable to think that this technology could apply to hate speech within other Internet areas too. The solution is further opposed for fear that over-regulation would become a consequence of any technological inadequacy.[53] Yet Leiter submits that any over-regulation resulting from better regulation is ‘offset many times over’ considering the harm that hate speech inflicts.[54] While technological solutions may be imperfect, they are perhaps more desirable than relying on law to ‘solve the riddle of international conflicts.’[55]
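To illustrate why first-generation filtering software struggled, and why critics fear over-regulation, the following sketch contrasts a crude substring filter with a slightly more careful whole-word filter. The blocked term and the example posts are invented placeholders; they are not drawn from HateFilter, YouTube’s systems or any real product.

# Illustrative only: a naive keyword filter of the kind early tools are said
# to have relied on, compared with a marginally more careful variant.
import re

BLOCKED_TERMS = {"hateword"}          # hypothetical placeholder term

def naive_filter(text: str) -> bool:
    """Block if any blocked term appears anywhere as a substring (prone to error)."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def token_filter(text: str) -> bool:
    """Block only on whole-word matches, reducing accidental over-blocking."""
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    return bool(tokens & BLOCKED_TERMS)

if __name__ == "__main__":
    print(naive_filter("a hatewordish but innocent post"))    # True: over-blocks innocuous text
    print(token_filter("a hatewordish but innocent post"))    # False
    print(token_filter("h4teword written to evade filters"))  # False: under-blocks disguised terms

The crude filter over-blocks innocuous text, while both versions miss deliberately disguised terms, mirroring the twin criticisms of technological inadequacy and over-regulation noted above; it is the gradual narrowing of that gap which Watt’s argument about improving techniques anticipates.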

The question of who should regulate is perhaps a more difficult one. Cohen-Almagor submits that regulation should be imposed privately.[56] However, regulation left solely to the devices of companies such as Google lacks accountability and public scrutiny, and simply awards more power to those who already exert great control over the Internet.[57] Moreover, the economic burden of implementing regulation, coupled with the absence of incentives arising from legal requirements, means there is little encouragement for this to occur effectively.[58] State implementation, by contrast, is supported by far more prominent moral incentives and might be more appropriate, as political accountability could provide safeguards against excessive censorship. State regulation also allows regulation to be tailored to the State’s conception of public welfare, and the issue of over-regulation may be avoided by placing the cost on the State.[59] Whereas this may be appropriate for European countries, hate speech in the U.S. would remain an issue.

This approach is unlikely to be popular among cyber-libertarians. The interaction between netizens and the state is central to the legitimacy, and thus the success, of regulation.[60] Consequently, transparency is crucial to ensuring the legitimacy of any method of regulation,[61] and such legitimacy could follow if the regulation is effectively communicated to the public.[62] This paper ultimately concurs with Banks: the most effective regulation is one that utilises ‘governments, business and citizenry to engage in an individual and collective effort to minimise online hate speech.’[63] While a technological solution is preferred, law and other regulatory methods should be incorporated to provide satisfactory regulation.

Conclusion

In conclusion, it is clear that while hate speech should be regulated, both unilateral and multilateral legal approaches are insufficient. This paper concludes that the U.S. is a hate speech haven, and that while the Additional Protocol has symbolic significance, it is unable to regulate effectively due to its conflict with the First Amendment. However, this paper does not find hate speech to be more unregulable because of this ineffectiveness. A technology-based, state-imposed filtering system, if implemented correctly, could provide an adequate solution.



[1] Abraham H. Foxman & Christopher Wolf, Viral Hate: Containing its Spread on the Internet (Palgrave Macmillan, 2013) 71.
[2] Council of Europe, Additional Protocol to the Convention on Cybercrime, Concerning the Criminalisation of Acts of a Racist and Xenophobic Nature Committed Through Computer Systems [2003].
[3] Jeremy Waldron, The Harm in Hate Speech (Harvard University Press, 2012) 16.
[4] Robert C. Post, ‘Racist Speech, Democracy, and the First Amendment’, (1991) 31 Wm. & Mary L. Rev. 267, 236-327.
[5] Irene Nemes, ‘Regulating Hate Speech in Cyberspace: Issues of Desirability and Efficacy’, (2002) 11 (3) I.& C.T.L. 193, 197.
[6] ‘US Comment on the Internet’s Influence Following Benjamin Smith’s shooting spree’ The Independent (London, 8 July 1999) <http://www.independent.co.uk/arts-entertainment/monitor-us-comment-on-the-internets-influence-following-benjamin-smiths-shooting-spree-1104911.html>
[7] Yulia Timofeeva, ‘Hate Speech Online: Restricted or Protected? Comparison of Regulations in the United States and Germany’, (2003) 12(2) J.Transnat’l L.& Pol’y 253, 254.
[8] Ben Wagner, ‘Governing Internet Expression: How Public and Private Regulation Shape Expression Governance’, (2013) 10 Journal of Information Technology & Politics 389, 390.
[9] Shannon L. Hopkins, ‘Cybercrime Convention: A Positive Beginning to a Long Road Ahead’, (2003) 2 JTHTL 101, 102.
[10] James Banks, ‘European Regulation of Cross-border Hate Speech in Cyberspace: The limits of Legislation’, (2011) 19 European Journal of Crime, Criminal Law and Criminal Justice 1, 6.
[11] Nemes (n 5) 200.
[12] Isabelle Rorive, ‘Strategies to Tackle Racism and Xenophobia on the Internet – Where are we in Europe?’, (2002) 7 I.J.C.L.P 1, 5.
[13] Brian Leiter, ‘Cleaning Cyber-Cesspools: Google and Free Speech’ in Saul Levmore and Martha Nussbaum (eds), The Offensive Internet (Harvard University Press, 2010).
[14] Foxman and Wolf, (n 1) 81.
[15] U.S. CONST. amend 1.
[16] Jeremy Lipschultz, Free Expression in the Age of the Internet (Westview Press, 2000) 56.
[17] Foxman and Wolf (n 1) 82.
[18] Tarlach McGonagle, ‘The Council Of Europe Against Online Hate Speech’, 30 <http://www.coe.int/t/dghl/standardsetting/media/belgrade2013/McGonagle%20-%20The%20Council%20of%20Europe%20against%20online%20hate%20speech.pdf>
[19] Foxman and Wolf (n 1) 82.
[20] LICRA et UEJF v Yahoo! Inc. & Yahoo France (Tribunal de Grande Instance de Paris, 22 May 2000).
[21] Yahoo! Inc v LICRA, 145 F. Supp. 2d 1168 [ND Cal 2001] 1192
[22] Foxman and Wolf (n 1) 81.
[23] Matthew Fagin, ‘Regulating Speech Across Borders: Technology vs. Values’, (2003) 9 Mich. Telecomm. Tech. L. Rev. 395, 428.
[24] Eric Barendt, Freedom of Speech (2nd edn, OUP, 2005) 473.
[25] Hopkins (n 9) 114.
[26] Ibid 114
[27] [2010] EWCA Crim 65
[28] Fagin (n 23) 428.
[29] Wagner (n 8) 390.
[30] Reno v ACLU, 521 U.S. 844 [1997], 863.
[31] 397 U.S. 564 [1970].
[32] Bachellar v Maryland  (n 31) 567.
[33] Douglas W Vick, ‘Regulating Hatred’ in Mathias Klang and Andrew Murray (eds) Human Rights in the Digital Age (Routledge-Cavendish, 2005).
[34] Ibid
[35] Ibid
[36] Chaplinsky v New Hampshire, 315 U.S. 568 [1942]
[37] Nemes (n 5) 209.
[38] Ibid
[39] Vick (n 33)
[40] Barry Steinhardt, ‘Hate Speech’ in Yaman Akdeniz, Clive Walker and David Wall, (eds) The Internet, Law and Society (Pearson Education, 2000) 271.
[41] James B. Jacobs and Kimberly Potter, Hate Crimes: Criminal Law and Identity Politics (OUP, 1998) 113.
[42] Foxman and Wolf (n 1) 82
[43] Ibid 83.
[44] Andrew Murray, Information Technology Law (2nd edn, OUP, 2013) 135.
[45] Barendt (n 24) 473.
[46] Horatia Muir Watt, ‘Yahoo! Cyber-Collision of Cultures: Who Regulates?’, (2003) 24 Mich. J. Int’l L. 673, 687.
[47] James Banks, ‘Regulating Hate Speech Online’ (2010) 24(3) International Review of Law, Computers and Technology, 233, 237.
[48] Watt (n 46)
[49] Foxman and Wolf (n 1) 91-92.
[50] Watt (n 46)
[51] Murray (n 44) 128
[52] Foxman and Wolf (n 1) 105.
[53] Ibid 91-92.
[54] Leiter (n 13)
[55] Caitlin Murphy, ‘International Law and the Internet: An Ill-Suited Match’ (2002) 25 Hastings Int’l & Comp. L. Rev. 405, 415.
[56] Raphael Cohen-Almagor, ‘Freedom of Expression, Internet Responsibility, and Business Ethics: The Yahoo! Saga and Its Implications’ (2012) 106 J. Bus. Ethics 353, 361.
[57] Ian Brown and Chris Marsden, ‘Regulating Code: Towards Prosumer Law?’ (SSRN 2013) 1-3 <http://papers.ssrn.com/sol3/papers.cfm?abstract_id=222463>
[58] Watt (n 46) 693.
[59] Ibid 694.
[60] Lawrence Lessig, Code 2.0 (Basic Books, 2006) 122.
[61] Cohen-Almagor (n 56) 361
[62] Ibid
[63] Banks (n 47) 238.