Social bots and synthetic interactions to stage digital extremist armies (part 1) – by Daniele M. Barone

On June 16, the European Commission welcomed the strengthened Code of Practice on Disinformation, a framework that sets out commitments by platforms and industry to fight disinformation.[i] The first anti-disinformation Code, adopted in 2018, consisted of self-regulatory standards to which tech-industry representatives agreed voluntarily. Among its measures, the strengthened Code includes interventions to prevent malicious actors from engaging in manipulative behaviors used to spread disinformation, such as fake accounts, deepfakes, and bot-driven amplification.

In this respect, the use of artificial intelligence (AI), even in its rudimentary forms, is attracting growing interest in its possible exploitation not only to spread disinformation but also for extremist or terrorist purposes.

In particular, AI-made fake accounts and bots on social media platforms are becoming increasingly sophisticated and able to impersonate average users[ii] and, as AI keeps advancing, terrorist organizations will also benefit from these technological developments to use social media more efficiently.

To define the ways in which developments in AI bots intersect with terrorist or extremist communication environments, the analysis will first examine how bots work, how social bots can make synthetic interactions be perceived as authentic, and their potential contribution to social manipulation.

A how-to guide to normalize “synthetic realness”

Even though AI is not a new technology, only in recent decades has it shown its impact on businesses and people’s everyday lives, making it hard to discern where it stops and humanity begins.[iii] To better understand the pervasiveness of AI in the digital communication environment, it is useful to highlight some major areas in which its degree of acceptance and its uses are evolving.

  1. A growing, promising business

The potential pervasiveness of the AI sector is reflected in its market growth: valued at USD 65.48 billion in 2020, the AI market is projected to reach USD 1,581.70 billion by 2030.[iv] Indeed, AI-enabled systems will continue to support many sectors, for instance healthcare, education, financial services, engineering, security, and transport, and are already changing the way businesses understand both internal and external processes.

  2. Big (artificial) data to kickstart the modern economy

AI is changing the foundation of the modern economy: big data.[v] AI systems work by combining large sets of data with intelligent, iterative processing algorithms to learn from patterns and features in the data that they analyze, allowing machines to learn from experience, adjust to new inputs and perform human-like tasks.
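To make that iterative loop concrete, the sketch below shows, in a few lines of Python, how an algorithm repeatedly compares its predictions with data and adjusts its parameters to a hidden pattern; the dataset and model are toy assumptions chosen purely for illustration, not drawn from the cited sources.

```python
# Toy illustration of iterative learning from data (all values are made up):
# the model repeatedly compares its predictions with the data and adjusts.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                   # 1,000 observations, 2 features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # the hidden pattern to be learned

w = np.zeros(2)                                  # parameters start uninformed
for step in range(500):                          # iterative processing
    pred = 1.0 / (1.0 + np.exp(-X @ w))          # current guesses
    grad = X.T @ (pred - y) / len(y)             # direction of the error
    w -= 0.1 * grad                              # adjust to the new evidence

print("learned weights:", w)                     # roughly proportional to (1, 0.5)
```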

To train AI, companies used to rely exclusively on data generated by real-world events, until they realized there wasn’t enough of it to support the algorithms’ training.[vi] This limit led to the adoption of synthetic data: a technology that makes it possible to generate data digitally, on demand, in whatever volume is needed, artificially manufactured to precise specifications.

This approach helps, for instance, to bypass confidentiality and privacy issues when gathering data to train AI for healthcare purposes, to detect specific and rare patterns in credit-card fraud, and to generate the data required to build safe autonomous vehicles.[vii]
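A minimal sketch of what “manufactured to precise specifications” can mean in practice is given below: fictitious credit-card transactions are generated on demand with a deliberately rare share of fraud-like records. Every field name, distribution, and rate is an illustrative assumption, not taken from the cited reports.

```python
# Generating synthetic transactions to a precise specification (0.2% fraud).
# All fields, distributions, and rates are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 100_000                                            # as much data as needed, on demand

amounts = rng.lognormal(mean=3.5, sigma=1.0, size=n)   # skewed spending amounts
hours = rng.integers(0, 24, size=n)                    # time of day of each purchase
is_fraud = rng.random(n) < 0.002                       # specify a rare 0.2% fraud rate
amounts[is_fraud] *= 8                                 # give fraud cases a distinct pattern

transactions = pd.DataFrame({
    "amount_eur": amounts.round(2),
    "hour": hours,
    "is_fraud": is_fraud,
})
print(transactions["is_fraud"].mean())                 # ~0.002: the specification holds by construction
```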

  3. Synthetic authenticity becomes the new real

With these premises, the widespread implementation of AI has brought both developments and new challenges to businesses, organizations, and society at large, in an ongoing process of creating a world of synthetic realness, where AI-generated data convincingly reflect the physical world.[viii] In this context of blurred boundaries between synthetic and real, the most evident direct interaction with AI lies at the intersection of technology and communication, through the use of bots in the now-familiar environment of social media and chat services.

A bot is a software agent or third-party service programmed to perform certain actions on a regular or reactive basis, without relying, or only partially relying, on human intervention. The bot analyzes the circumstances and autonomously decides what action to take. In particular, a social bot can mimic human behavior in social networks, taking part in discussions and pretending to be a real user. Social bots can post content, mostly through fake accounts, and like, share, and comment.

Bots require a low level of human management: hundreds or even thousands of social bots can be managed by a single person.[ix] In that regard, it has been roughly estimated that, in 2017 alone, there were 23 million bots on Twitter (8.5% of all accounts), 140 million bots on Facebook (5.5% of all accounts), and about 27 million bots on Instagram (8.2% of all accounts).[x]
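The sketch below illustrates why a single operator is enough: a simple scheduler loops over a list of accounts and publishes pre-set messages at regular intervals. The endpoint URL, token format, and payload are hypothetical placeholders, not the API of any real platform.

```python
# Illustrative only: one operator, many bot accounts, posting on a schedule.
# The endpoint, tokens, and payload below are hypothetical placeholders.
import time
import requests

ACCOUNTS = [{"handle": f"user_{i}", "token": f"token_{i}"} for i in range(500)]
MESSAGES = ["Check this out!", "Everyone is talking about this.", "Unbelievable news today."]

def post_as(account: dict, text: str) -> None:
    """Publish one post on behalf of a single bot account (hypothetical API)."""
    requests.post(
        "https://api.example-social.net/v1/posts",   # placeholder endpoint
        headers={"Authorization": f"Bearer {account['token']}"},
        json={"text": text},
        timeout=10,
    )

while True:
    for i, account in enumerate(ACCOUNTS):           # hundreds of accounts, one person
        post_as(account, MESSAGES[i % len(MESSAGES)])
        time.sleep(2)                                # pace the requests
    time.sleep(3600)                                 # repeat on a regular basis
```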

  4. The manipulative side of social bots

The use of social bots to manipulate is not new but has been spreading faster over time.

For instance, studies claim that the key to the success of Farage’s Brexit Party, compared to Change UK, the Greens, and the Liberal Democrats, lay in the clarity and simplicity of its messaging and in an effective social media echo chamber of pro-Brexit bot accounts.[xi] In this respect, a study in the Social Science Computer Review uncovered the deployment of a network of 13,493 Twitter bots that tweeted mainly messages supporting the Leave campaign and that were deactivated or removed by Twitter shortly after the ballot.[xii]

More recently, a study by Carnegie Mellon University[xiii] on more than 200 million tweets discussing coronavirus between January and May 2020 found that about 45% of tweets on Covid were posted by accounts that behaved more like computerized robots than humans, spreading more than 100 false narratives about the virus.[xiv]

Nowadays, using recent developments in AI, it is possible to unleash human-like crowds of social bots in coordinated campaigns of deception and influence,[xv] fueled by bots’ socialization with humans for attention, information, and money.[xvi] With advancements in natural language processing (NLP), a branch of AI that helps computers understand, interpret, and manipulate human language,[xvii] bots can learn over time from their interactions with social media users, enabling them to respond in a manner that better resembles a human.[xviii]
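As a hedged illustration of this NLP-driven behavior, the sketch below uses the Hugging Face transformers text-generation pipeline, with GPT-2 as a freely available stand-in model, to draft a reply conditioned on the preceding exchange; it is not the method described in the cited sources, only an example of how a bot’s answers can adapt to what users actually wrote.

```python
# Illustration of an NLP-driven reply: a small language model drafts a response
# conditioned on the recent conversation. GPT-2 is used only as a stand-in.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

conversation = [
    "User: Did you see the news going around today?",
    "Bot: Yes, a lot of people are sharing it.",
    "User: Do you think it is true?",
]
prompt = "\n".join(conversation) + "\nBot:"

result = generator(prompt, max_new_tokens=30, do_sample=True, top_p=0.9)
reply = result[0]["generated_text"][len(prompt):].strip()
print(reply)                                   # only the newly generated continuation
```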

Nevertheless, beyond cutting-edge technologies or the exploitation of rudimentary AI, the manipulative use of social bots is a consequence of real people’s behavior and choices;[xix] from programming bots, spreading manipulative content, and influencing communication exchanges on polarizing topics,[xx] to choosing to believe that content.

The malicious use of social bots therefore needs to be contextualized within the ideology of terrorist or extremist groups. Thus, the next step of the analysis will be to outline how these actors have already exploited, and could further exploit, the consolidated acceptance of human-bot interaction through their ideology.

 

[i] European Commission (June 16, 2022) Disinformation: Commission welcomes the new stronger and more comprehensive Code of Practice on disinformation. https://ec.europa.eu/commission/presscorner/detail/en/IP_22_3664

[ii] Kilcher Y.  (June 3, 2022) This is the worst AI ever. YouTube. https://www.youtube.com/watch?v=efPrtcLdcdM

[iii] Forsbak Ø. (March 25, 2022) Six AI Trends To Watch In 2022. Forbes. https://www.forbes.com/sites/forbestechcouncil/2022/03/25/six-ai-trends-to-watch-in-2022/?sh=3e1c36e62be1

[iv] PR Newswire (June 13, 2022) Artificial Intelligence Market USD 1,581.70 Billion By 2030, Growing At A CAGR of 38.0%. Bloomberg press release.  https://www.bloomberg.com/press-releases/2022-06-13/artificial-intelligence-market-usd-1-581-70-billion-by-2030-growing-at-a-cagr-of-38-0-valuates-reports#:~:text=Artificial%20Intelligence%20Market%20USD%201%2C581.70,38.0%25%20%2D%20Valuates%20Reports%20%2D%20Bloomberg

[v] The Economist (May 6, 2017) The world's most valuable resource is no longer oil, but data. https://www.economist.com/leaders/2017/05/06/the-worlds-most-valuable-resource-is-no-longer-oil-but-data

[vi] Castellanos S. (July 23, 2021) Fake It to Make It: Companies Beef Up AI Models With Synthetic Data. Wall Street Journal. https://www.wsj.com/articles/fake-it-to-make-it-companies-beef-up-ai-models-with-synthetic-data-11627032601

[vii] Towes R. (June 12, 2022) Synthetic Data Is About To Transform Artificial Intelligence. Forbes. https://www.forbes.com/sites/robtoews/2022/06/12/synthetic-data-is-about-to-transform-artificial-intelligence/?sh=5e6a76c07523

[viii] Accenture (June 17, 2022) The unreal – making synthetic, authentic. https://www.accenture.com/th-en/insights/health/unreal-making-synthetic-authentic

[ix] Cloudflare. What is a social media bot? | Social media bot definition. https://www.cloudflare.com/it-it/learning/bots/what-is-a-social-media-bot/#:~:text=Experts%20who%20have%20applied%20logarithms,designed%20to%20mimic%20human%20accounts

[x] Vosoughi S., Roy D., and Aral S. (March 9, 2018) The spread of true and false news online. Science. https://www.science.org/doi/10.1126/science.aap9559

[xi] Savage M. (June 29, 2019) How Brexit party won Euro elections on social media – simple, negative messages to older voters. The Guardian. https://www.theguardian.com/politics/2019/jun/29/how-brexit-party-won-euro-elections-on-social-media

[xii] Bastos M.T., Mercea D. (2017) The Brexit Botnet and User-Generated Hyperpartisan News. Social Science Computer Review. https://journals.sagepub.com/doi/10.1177/0894439317734157

[xiii] Allyn B. (May 20, 2020) Researchers: Nearly Half Of Accounts Tweeting About Coronavirus Are Likely Bots. NPR. https://www.npr.org/sections/coronavirus-live-updates/2020/05/20/859814085/researchers-nearly-half-of-accounts-tweeting-about-coronavirus-are-likely-bots?t=1655890800628

[xiv] Roberts S. (June 16, 2020) Who's a Bot? Who's Not?. The New York Times. https://www.nytimes.com/2020/06/16/science/social-media-bots-kazemi.html

[xv] Adams T. (June 2017) AI-Powered Social Bots. https://www.researchgate.net/publication/317650425_AI-Powered_Social_Bots

[xvi] Liu X. (April 2019) A big data approach to examining social bots on Twitter. Journal of Services Marketing. https://www.researchgate.net/publication/332331554_A_big_data_approach_to_examining_social_bots_on_Twitter

[xvii] SAS. Natural Language Processing (NLP). https://www.sas.com/it_it/insights/analytics/what-is-natural-language-processing-nlp.html

[xviii] United Nations Interregional Crime and Justice Research Institute (UNICRI) and the United Nations Office of Counter-Terrorism (UNCCT) (2022) Algorithms And Terrorism: The Malicious Use Of Artificial Intelligence For Terrorist Purposes. https://unicri.it/News/Algorithms-Terrorism-Malicious-Use-Artificial-Intelligence-Terrorist-Purposes

[xix] CITS. How is Fake News Spread? Bots, People like You, Trolls, and Microtargeting. https://www.cits.ucsb.edu/fake-news/spread

[xx] Chen W., Pacheco D., Yang K., Menczer F. (September 22, 2021) Neutral bots probe political bias on social media. Nature Communications. https://www.nature.com/articles/s41467-021-25738-6