EU economic losses in the haze of jihad. Still stuck in an affordable blame game on content moderation (part 5) – by Daniele M. Barone

In November 2020, following the terrorist attacks in France, Germany, and Austria, the European Council[i] stated that “access to digital information is becoming ever more crucial and the mobility of this data demands effective cross-border instruments, because otherwise terrorist networks will in many cases be a step ahead of the investigating authorities … access to the digital information, that is essential for preventing and eliminating terrorist action must be ensured and boosted”.[ii]

It is well known that, in the field of online propaganda, radicalization[iii], and terrorist financing, weak regulation of social media and end-to-end encrypted chats can become a barrier to an effective counter-terrorism strategy. Here, the dilemma of trading some privacy for security, i.e. by enhancing access to online communication, may also affect the economics surrounding these instruments. In this respect, Wojciech Wiewiórowski, the European Data Protection Supervisor, claimed that “encryption is as critical to the digital world, as is the physical lock to the physical world”. Then, to stress the need to differentiate the approach for requesting lawful access depending on the technology or means of communication involved, he declared that it is useless to focus on a strict dichotomy between “confidentiality of communications can never be restricted” and “law enforcement will be unable to protect the public unless it can obtain access to all encrypted data”. To satisfy the requirement of proportionality, legislation must lay down clear and precise rules governing the scope and application of these measures and ensure that the people whose personal data is affected have sufficient guarantees that their data will be effectively protected against the risk of abuse[iv].

It must also be taken into account that, besides ethical dilemmas, an institutional man-in-the-middle approach in these sectors may directly affect the core business of hosting service providers, with repercussions on investments, users’ behavior, and government budget spending. However, even though a 100% level of security is utopian, with the technologies and human resources currently available and the EU’s or member states’ ongoing plans and regulations, is overall control over social media still possible? And even if it were, would it translate directly into better prevention of radicalization and terrorist attacks? To answer these questions, it is useful to analyze what has been done so far to control content on open-source social media platforms, in both the private and the public sector.

Facebook and the multifaceted cost of content monitoring

End-to-end encryption is a security tool used by some apps and services (e.g. WhatsApp[v], Signal[vi], and Telegram)[vii] to provide a greater level of privacy and secure communication: messages are encrypted before they leave the sender’s device, and only the device they are sent to can decrypt them. This makes providers’ servers act as blind routers, passing messages on without being able to read them, and protects messages from interception in transit by a hacker or a government agency.[viii]
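The mechanics can be illustrated with a deliberately toy sketch: a XOR one-time pad stands in for real authenticated ciphers such as AES-GCM, and the shared key is assumed to have already been negotiated between the two devices (e.g. via a Diffie-Hellman exchange). The point is structural: only the endpoints hold the key, so the relaying server sees nothing but ciphertext.

```python
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy XOR cipher: a stand-in for real authenticated encryption.
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

# The shared key exists only on the two endpoint devices.
key = secrets.token_bytes(64)

message = b"meet at noon"
ciphertext = encrypt(key, message)

# The provider's server acts as a blind router: it relays the
# ciphertext but, holding no key, cannot read it.
relayed_by_server = ciphertext

# Only the recipient's device, which holds the key, can decrypt.
recovered = decrypt(key, relayed_by_server)
```

This is why lawful-access debates center on the endpoints (device seizure, malware) rather than the pipe: there is no useful plaintext anywhere in between.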

So far, institutions have bypassed encryption barriers through the injection of state-sponsored malware on target devices: for example, the Italian Legislative Decree n. 216/2017 introduced the use of trojan software during investigations.[ix] There are also older, costlier methods of fighting increasingly sophisticated crime, such as the $900,000 the FBI spent to hack the San Bernardino shooter’s $350 iPhone 5.[x] Or the case of the London attacker, Khalid Masood, who used the Facebook-owned, fully encrypted chat service WhatsApp to declare he was waging jihad in revenge against Western military action in Muslim countries in the Middle East. Detecting the message was possible only because Masood’s mobile telephone was recovered after he was shot dead[xi]. Discovering Masood’s last recorded thoughts was the key part of the investigation into what lay behind the assault, a result brought about by human and technical intelligence rather than by end-to-end chat monitoring.[xii]

Cases like this have generated increasing pressure from institutions on the private sector to regulate content spread on social media. Indeed, from the chat providers’ point of view, end-to-end encryption does not only represent a move towards users’ right to privacy but also a discharge of responsibility: providers are no longer bound to create backdoor access to users’ messages.[xiii] In this regard, Facebook, in contrast to its business built around the monetization of user data, plans to make all messages on the app fully end-to-end encrypted by default.[xiv] This change, which imposes a complex and long-lasting re-architecture of the entire product involving an expensive rebuilding of every feature of Facebook Messenger,[xv] is likely to make the company physically unable to moderate a large part of the encrypted content in users’ chats.[xvi]

Despite the costs of changing the messaging infrastructure and being deprived of over 2.7 billion monthly active users’[xvii] private conversations, Facebook’s priority seems to be that, with end-to-end encryption, the company will no longer have backdoor access to users’ messages. Thus, it won’t be forced to comply with requests from law enforcement agencies to access data.

According to researchers and journalists, this move seems to be related more to the growing pressure applied on Facebook to moderate user content by Australia, the US, the EU, and the UK, backed by the threat of sanctions, than to satisfying the legitimate requests of privacy advocates.[xviii] Indeed, content moderation is becoming an ever-growing issue for the company.

In 2017, Facebook had more than 7,000 content moderators[xix]. They earned roughly $15 per hour,[xx] a fraction of what full-time employees earn (the median annual salary for Facebook employees was $240,000 in 2017), and, after only a two-week training course,[xxi] they started deciding whether to remove or escalate terrorist content, flagged either by users or by algorithms, by looking at the captions as well as the images themselves.[xxii]

In May 2020, Facebook agreed to pay $52mn to current and former moderators to compensate them for PTSD[xxiii] developed on the job.[xxiv] Beyond the relatively minor cost for the company, this episode highlighted its lack of awareness of such a delicate issue as content monitoring.

With global IP traffic predicted to grow at a compound annual growth rate[xxv] of 20% from 2018 to 2023[xxvi], the number of Facebook content moderators has already doubled (roughly 15,000, at 20 sites globally, speaking over 50 languages combined), and they are mostly outsourced from companies like Accenture, Cognizant,[xxvii] Arvato, and Genpact[xxviii]. Moreover, as the number of moderators rose in only two years, working conditions deteriorated and moderator training was cut back.[xxix] These events inevitably led to a 10% error rate in flagging posts, as Facebook itself has admitted.[xxx] Given that reviewers have to wade through three million posts per day, that equates to 300,000 mistakes daily.[xxxi]

Nevertheless, in the second quarter of 2020 alone, Facebook removed about 8.7 million pieces of terrorist content (according to the company’s definition: content from non-state actors that engage in or advocate violence to achieve political, religious, or ideological aims).[xxxii] But researchers, now as in the past,[xxxiii] argue it is still impossible to gauge just how many posts escape the dragnets on a platform so large.[xxxiv] In this respect, automated systems using AI and machine learning, notably invoked by Facebook’s CEO as the future solution to the company’s current political problems, are certainly helping with moderation. AI classifies user-generated content based on either matching or prediction, leading to a decision outcome (e.g. removal, blocking, account takedown),[xxxv] in theory making suspect content quicker for human moderators to process at a later stage.[xxxvi]
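The matching-or-prediction logic can be sketched roughly as follows. Everything here is invented for illustration: the hash set, the thresholds, and the trivial keyword "classifier" bear no relation to Facebook’s actual systems, which rely on trained models over far richer signals.

```python
import hashlib

# Matching route: digests of content already judged to be violating.
KNOWN_VIOLATION_HASHES = {
    hashlib.sha256(b"previously removed propaganda video").hexdigest(),
}

REMOVE_THRESHOLD = 0.95   # assumed thresholds, for illustration only
REVIEW_THRESHOLD = 0.60

def predicted_violation_score(content: bytes) -> float:
    # Stand-in for a trained classifier; here a trivial keyword heuristic.
    return 0.99 if b"attack" in content else 0.1

def moderate(content: bytes) -> str:
    digest = hashlib.sha256(content).hexdigest()
    if digest in KNOWN_VIOLATION_HASHES:
        return "remove"                 # matching: known bad content
    score = predicted_violation_score(content)
    if score >= REMOVE_THRESHOLD:
        return "remove"                 # prediction: confident violation
    if score >= REVIEW_THRESHOLD:
        return "escalate_to_human"      # uncertain: a human moderator decides
    return "allow"
```

The middle band between the two thresholds is where the human moderators discussed above come in: the automated system only queues and prioritizes, it does not decide.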

Using a technique called Whole Post Integrity Embeddings (WPIE), Facebook’s systems ingest deluges of information, including images, videos, text titles and bodies (with translation across 100 languages)[xxxvii], comments, text in images from optical character recognition, transcribed text from audio recordings, user profiles, interactions between users, external context from the web, and knowledge-base information. Fusion models then combine these representations to create millions of embeddings, which are used to train learning models that flag content for each category of violation.[xxxviii] In early January 2020, the company also released software that turns speech into text in real time, opening up the possibility of better captioning of live video[xxxix]. Nonetheless, not all content can be classified, even by humans. Some posts have many shades of meaning or are heavily context-dependent, making it crucial to find the right balance between technology and human expertise.
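As a loose illustration of the fusion idea only: per-modality embeddings are combined into a single whole-post vector that a downstream classifier consumes. The encoders and vector sizes below are made up; real WPIE encoders are large trained neural networks, and real fusion models are learned rather than simple concatenation.

```python
from typing import Dict, List

# Hypothetical per-modality encoders: in a real system each would be a
# trained network producing a dense vector for its input type.
def embed_text(text: str) -> List[float]:
    return [len(text) / 100.0, text.count("!") / 10.0]

def embed_image_ocr(ocr_text: str) -> List[float]:
    return [len(ocr_text) / 100.0]

def fuse(embeddings: Dict[str, List[float]]) -> List[float]:
    # Simplest possible fusion: concatenate per-modality vectors into a
    # single whole-post representation, in a fixed modality order so
    # that each position always means the same thing.
    fused: List[float] = []
    for modality in sorted(embeddings):
        fused.extend(embeddings[modality])
    return fused

post = {
    "text": embed_text("Example caption!"),
    "image_ocr": embed_image_ocr("text found in image"),
}
representation = fuse(post)
```

The design point is that the classifier sees one joint representation of the whole post, so a benign caption cannot launder a violating image, and vice versa.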

In 2018, when Facebook stated that 99% of terrorist content on the platform was deleted, the Counter Extremism Project found that some of the most prolific Islamist extremists remained active on Facebook[xl]. Nowadays, for instance, the Islamist preacher who reportedly played a role in radicalizing the Bataclan suicide bomber Omar Mostefai,[xli] through sermons at a Paris mosque, continues, at the time of writing, to have an active presence online, including on his official Facebook page. The same goes for Yusuf al-Qaradawi, banned from entering the United States, the United Kingdom, and France for his declared support for suicide bombings and incitement of Islamist violence, who still keeps, at the time of writing, his official Facebook page, as well as a few Facebook fan accounts.

Preventing social media exploitation: public sector plans

The Impact Assessment[xlii] of the “Proposal for a Regulation of the European Parliament and of the Council on preventing the dissemination of terrorist content online”[xliii] states that terrorist content online is a multifaceted security challenge, owing to a fragmented legal framework at the member-state level. The situation is further complicated by Article 3 of the EU’s 2000 e-commerce directive, created before the advent of peer-to-peer internet technology and social media[xliv], which establishes the country-of-origin principle: providers of online services are subject to the law of the member state in which they are established, not the law of the member states where the service is accessible.[xlv] However, the Directive on electronic commerce does not preclude a court of a member state from ordering a hosting provider, such as Facebook, to remove identical and, in certain circumstances, equivalent comments previously declared to be illegal.[xlvi]

Still, state monitoring and flagging of illegal content online are marred with difficulties. For instance, France’s most important instrument in the fight against online radicalization and terrorist propaganda is the PHAROS system (platform for harmonization, reports, analysis, and checking of digital content)[xlvii]. The platform, which now has 28 investigators (police and gendarmes), was established in 2009 with an initial investment of €100,000,[xlviii] recently proposed to be increased to €500,000,[xlix] within the central office for the fight against crime linked to information and communication technologies (OCLCTIC), part of the sub-directorate for the fight against cybercrime of the central directorate of the judicial police. Investigators at PHAROS monitor various information and communication services in France and produced more than 228,000 reports in 2019.[l] Moreover, as part of an EU-wide testing campaign, the unit notified Twitter, Facebook, and YouTube of 796 pieces of content, of which 512 were withdrawn. Unfortunately, the murder of Samuel Paty on 16 October exposed many of the drawbacks of French and social media platforms’ online counter-terrorism efforts. A student’s parent expressed via Facebook and WhatsApp his disapproval of Paty’s teaching methods and produced a video against him. The content was quickly disseminated online but not flagged immediately,[li] even though Paty had filed a complaint with the police after he was made aware of threats on social media[lii] and an NGO had reported the attacker’s Twitter account to the authorities in July 2020.[liii]

In Austria, in the wake of the January 2015[liv] Charlie Hebdo attacks in Paris, the government announced a €290mn plan to fight jihadist terror: €126mn went into hiring new personnel with special skills, including specialists in cybersecurity, crime-fighting, and forensics; €34mn targeted special IT upgrades, such as the Schengen Information System database and evidence-collection software; and €12mn was allocated to online and offline deradicalization efforts, including awareness education.[lv] In December 2020, the National Council passed a comprehensive legislative package, including the Communications Platforms Act and the Hate-on-the-Net Fight Act, the latter already passed in autumn 2020, to curb hate speech, threats, and other illegal content on large social media platforms such as Facebook. Most of the legislative package takes effect on January 1, 2021, with social platform operators having until the end of March 2021 to implement the new protection measures.[lvi] In particular, the Austrian law is based on Germany’s Network Enforcement Act (NetzDG), under which users notice potentially illegal content and report it, and platforms must then decide whether it is illegal, in which case they must delete it within 24 hours of the report. Under NetzDG, online platforms face fines of up to €50 million for systemic failure to delete illegal content[lvii]. Despite these measures and five years of investment, due to the sheer volume of content there are no plans for preventive government control; courts will only be able to check afterward whether a platform has acted illegally.[lviii]

A multidisciplinary, long-term, and cooperative strategy

As in most aspects of counter-terrorism, a multidisciplinary approach is the only way to understand the online extremist environment, effectively counter the spread of jihadist propaganda, and detect dangerous subjects through social media. Cooperation, through shared responsibilities between the public and private sectors, is the best method to counter the spread of terrorism online and create a resilient environment. And, to date, not all projects are frustrated by a lack of factual data.

At the EU level, together with Europol, providers of online services developed a database of hashes, allowing content identified as harmful to be tagged electronically and preventing it from reappearing. The database contains over 300,000 unique hashes of known terrorist videos and images.[lix] This fed into Check-the-Web (CtW), accessible only to law enforcement: an electronic reference library of jihadist terrorist online propaganda. It contains structured information on original statements, publications, videos, and audio produced by jihadi terrorist groups and their supporters: an operational tool to identify not only new content, groups, or media outlets but also new trends and patterns in terrorist propaganda, as well as operational leads for attributing crimes to perpetrators.[lx]
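A minimal sketch of such a database of hashes is below, using a cryptographic hash (SHA-256) for simplicity. Production systems typically also rely on perceptual hashes, which survive re-encoding and minor edits that would completely change a cryptographic digest; the class and its methods here are invented for illustration.

```python
import hashlib

class HashDatabase:
    """Toy sketch of a shared database of hashes of known terrorist media."""

    def __init__(self) -> None:
        self._known: set = set()

    def tag(self, media: bytes) -> str:
        # Tag content identified as harmful by storing its digest,
        # not the content itself, so the database can be shared widely.
        digest = hashlib.sha256(media).hexdigest()
        self._known.add(digest)
        return digest

    def is_known(self, media: bytes) -> bool:
        # An upload matching a stored hash can be blocked on arrival,
        # preventing tagged content from reappearing.
        return hashlib.sha256(media).hexdigest() in self._known

db = HashDatabase()
db.tag(b"identified propaganda video bytes")
```

Storing only digests is what makes cross-provider sharing viable: participants can compare fingerprints without ever exchanging the violating material itself.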

With an annual budget[lxi] of about €150mn, an increase of over €62mn since 2010,[lxii] of which roughly €1mn is spent on research and development projects and €700,000 on maintenance costs for Europol’s decryption platform[lxiii], Europol is succeeding in countering extremism online through repressive operations, analysis of the jihadist online environment, and cooperation with the private sector. One instance is the 16th Referral Action Day, an operation joined by 9 online service providers, including Telegram, Google, Twitter, and Instagram, which pushed a significant portion of key actors within the Daesh network off Telegram and, most importantly, established further cooperation with global private firms operating in the social media environment.[lxiv]

Europol, taken as an example, underlines that, in terms of terrorist attack prevention, adding more data to the databases does not in every case help to find potential attackers. There is a lot of work that can still be done, at the public and private levels, in understanding the online environment and all of its communication aspects, improving technology, investing in public awareness so that terrorist content is reported to the authorities or to online service providers, and investing in the recruitment and training of content moderators. In this framework, no single actor can be held accountable. Each one could and can do something more, by renouncing a little bit of ego, to counter a widespread and still not effectively assessed threat.

[i] European Council. EU’s response to the terrorist threat.

[ii] European Council (November 13, 2020) Joint statement by the EU home affairs ministers on the recent terrorist attacks in Europe.

[iii] I. von Behr, A.Reding, C. Edwards, L. Gribbon (2013) Radicalisation in the digital era – The use of the internet in 15 cases of terrorism and extremism. RAND

[iv] W. Wiewiórowski (November 19, 2020) The Future of Encryption in the EU. ISOC 2020 Webinar.




[viii] A. Greenberg (October 10, 2020) Facebook Says Encrypting Messenger by Default Will Take Years. Wired.

[ix] Gazzetta Ufficiale (January 11, 2018) DECRETO LEGISLATIVO 29 dicembre 2017, n. 216.

[x] CNBC (May 5, 2017) Senator reveals that the FBI paid $900,000 to hack into San Bernardino killer's iPhone.

[xi] (April 5, 2018) CEP To Facebook: Zuckerberg Must Explain Failure To Remove Extremist Content. Counter Extremism Project.

[xii] K. Sengupta (April 27, 2017) Last message left by Westminster attacker Khalid Masood uncovered by security agencies. The Independent.

[xiii] R. Musotto, D.S. Wall (December 16, 2020) Facebook's push for end-to-end encryption is good news for user privacy, as well as terrorists and paedophiles. The Conversation.

[xiv] M.Zuckerberg (March 6, 2019) A Privacy-Focused Vision for Social Networking.

[xv] I. Metha (October 31, 2019) Facebook is testing end-to-end encryption for secret Messenger calls. TNW.

[xvi] Z. Doffman (October 6, 2019) Here Is What Facebook Won't Tell You About Message Encryption. Forbes.

[xvii] J. Clement (November 24, 2020) Facebook: number of monthly active users worldwide 2008-2020. Statista.

[xviii] H. Abelson, R. Anderson, S. M. Bellovin, J. Benaloh, M. Blaze, W. Diffie, J. Gilmore, M. Green, S. Landau, P.G. Neumann, R.L. Rivest, J.I. Schiller, B. Schneier, M. Specter, D.J. Weitzner (July 7, 2015) Keys Under Doormats: mandating insecurity by requiring government access to all data and communications.

[xix] M. Zuckerberg (May 3, 2017)

[xx] O. Solon (May 25, 2017) Underpaid and overburdened: the life of a Facebook moderator. The Guardian.

[xxi] (May 24, 2017) How Facebook guides moderators on terrorist content. The Guardian.

[xxii] P.M. Barret (June 2020) Who Moderates the Social Media Giants? A Call to End Outsourcing. NYU Stern.

[xxiii] S.E. Garcia (September 25, 2018) Ex-Content Moderator Sues Facebook, Saying Violent Images Caused Her PTSD. The New York Times.

[xxiv] C. Newton (May 12, 2020) Facebook will pay $52 million in settlement with moderators who developed PTSD on the job. The Verge.

[xxv] Compound annual growth rate (CAGR) is the net gain or loss of an investment over a specified time period that would be required for an investment to grow from its beginning balance to its ending balance, assuming the profits were reinvested at the end of each year of the investment’s lifespan.

[xxvi] Cisco Annual Internet Report (March 9, 2020)

[xxvii] E. Dwoskin, N. Tiku (March 24, 2020) Facebook sent home thousands of human moderators due to the coronavirus. Now the algorithms are in charge. The Washington Post.

[xxviii] Q. Wong (June 19, 2019) Facebook content moderation is an ugly business. Here’s who does it. CNet.

[xxix] D. Gilbert (January 9, 2020) Facebook Is Forcing Its Moderators to Log Every Second of Their Days. Vice News.

[xxx] Cambridge Consultants (2019) USE OF AI IN ONLINE CONTENT MODERATION. Ofcom.

[xxxi] C. Jee (June 8, 2020) Facebook needs 30,000 of its own content moderators, says a new report. MIT Technology Review.

[xxxii] R. Levy (August 11, 2020) Facebook Removed Nearly 40% More Terrorist Content in Second Quarter. The Wall Street Journal.

[xxxiii] CEP Staff (October 12, 2020) Updated: Tracking Facebook's Policy Changes. Counter Extremism Project.

[xxxiv] D. Uberti (July 9, 2020) Why Some Hate Speech Continues to Elude Facebook's AI Machinery. The Wall Street Journal.

[xxxv] R. Gorwa, R. Binns, C. Katzenbach (February 28, 2020) Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Sage Journals.

[xxxvi] J. Vincent (February 27, 2019) AI won't relieve the misery of Facebook's human moderators. The Verge.

[xxxvii] J. Khan (November 19, 2020) Facebook's A.I. is getting better at finding malicious content—but it won't solve the company's problems. Fortune.

[xxxviii] K. Wiggers (November 13, 2020) Facebook's redoubled AI efforts won't stop the spread of harmful content. VentureBeat.

[xxxix] Facebook AI (January 13, 2020) Online speech recognition with wav2letter@anywhere.

[xl] (April 5, 2018) CEP To Facebook: Zuckerberg Must Explain Failure To Remove Extremist Content. Counter Extremism Project.

[xli] A. Robertson (June 27, 2017) Terror suspect arrested in Birmingham and facing extradition to Spain is imam father-of-eight who preached to Bataclan bomber before Paris attacks. The Daily Mail.

[xlii] European Commission (September 12, 2018) COMMISSION STAFF WORKING DOCUMENT IMPACT ASSESSMENT Accompanying the document Proposal for a Regulation of the European Parliament and of the Council on preventing the dissemination of terrorist content online

[xliii] F. Théron (March 2020) Terrorist content online Tackling online terrorist propaganda. European Parliamentary Research Service (EPRS)

[xliv] Organization for Security and Co-operation in Europe Office of the Representative on Freedom of the Media (October 15, 2020) LEGAL REVIEW OF THE AUSTRIAN FEDERAL ACT ON MEASURES TO PROTECT USERS ON COMMUNICATIONS PLATFORMS [KOMMUNIKATIONSPLATTFORMEN-GESETZ – KOPI-G]. OSCE.

[xlv] European Commission. E-Commerce Directive.

[xlvi] Court of Justice of the European Union (October 3, 2019) PRESS RELEASE No 128/19.

[xlvii] (February 4, 2020) Lutte contre le terrorisme – Moyens de l'OCLCTIC. Assemblée nationale.

[xlviii] J.V. Placé (October 22, 2013) Police, gendarmerie: what investment strategy?. Sénat.

[xlix] Session of December 3, 2020. Sénat.

[l] B. Saragerova (November 29, 2020) France: Towards stronger counter-terrorism regulation online. Global Risk Insights.

[li] E. Braun, L. Kayali (October 19, 2020) French terror attack highlights social media policing gaps. Politico.

[lii] LCI (October 18, 2020) Pourquoi Samuel Paty n’a-t-il pas fait l’objet d’une protection policière?

[liii] A. Zemouri (October 17, 2020) Le père qui avait diffusé la vidéo hostile au professeur d’histoire en garde à vue. Le Point.

[liv] Parlamentskorrespondenz Nr. 152 (February 02, 2015) Nationalrat beschließt neues Islamgesetz. Österreichisches Parlament.

[lv] (January 21, 2020) Austria’s 290m plan to fight terror. The Local.

[lvi] Counter Extremism Project. Austria: Extremism & Counter-Extremism.

[lvii] CEPS Project. The Impact of the German NetzdG law.

[lviii] P. Grüll (July 4, 2020) Austria's online hate speech law prompts question marks about overblocking. EURACTIV.

[lix] European Commission. A Counter-Terrorism Agenda for the EU and a stronger mandate for Europol: Questions and Answers.

[lx] Europol (October 13, 2020) EU IRU TRANSPARENCY REPORT 2019.

[lxi] EU Budget 2020 – Europol Position Paper.

[lxii] D. Clark (October 12, 2020) Annual budget of Europol in the European Union from 2010 to 2020. Statista.