
Trolling can be usefully defined as the targeting of defamatory and antagonistic messages towards users of social media (Williams and Pearson, 2014:4).37 Trolling broadly understood includes: cyberbullying; cyberhate; cyberstalking; cyberharassment; revenge porn; sextortion; naming and shaming; and flaming. Cyberhate is trolling targeted towards those with minority status.38 The term 'flaming' is used by different actors to refer to different activities: some use it to refer to extremely provocative language designed to start a fight (Hardaker, 2010; Herring et al., 2002; Williams and Pearson, 2014:11); others use it to refer to the posting of offensive or provocative material for the poster's own entertainment or gratification (Bishop, 2013). Most of these acts take place within a context of wider abuse, including offline abuse, but some trolls, in particular those who belong to the sociologically distinct group of individuals who self-identify as trolls, carry out the vast majority of their abuse online. The legal status of the trolling-related acts just listed differs from jurisdiction to jurisdiction and from act to act. An impressive and varied array of preventive, counter-trolling, and criminal justice approaches to combating trolling are currently employed by public security providers broadly understood.

Who are the trolls?

Our ability to generalise with respect to this question is limited by the paucity of empirical studies undertaken outside of North America. However, the results of the few examples of existing research are both interesting in themselves and may prove indicative of trolling trends internationally.

'Subcultural trolls', or those trolls who explicitly adopt the term 'troll' to describe their online identity, are distinct in that they tend to be indiscriminate in their choice of targets, and in that they are motivated by a desire to get kicks, or 'lulz', a kind of enjoyment in the face of the suffering of others (Phillips, 2015). The same study hypothesises that trolls enjoy economic comfort, since their activities require a reliable internet connection, a personal device, and a private space from which to troll; that they are white; and that they are between 18 and 30 years old (ibid., pp. 53-4).

In 2014 Canadian scholars undertook a psychological study of individuals self-identifying as internet trolls (5.6% of respondents to their initial survey). They found a significant correlation between subcultural trolling and narcissism, psychopathy and Machiavellianism, and a strong correlation between trolling and sadism (Buckels et al., 2014). They also claim that trolls engage in antisocial behaviour offline as well as online, a finding that could be significant for LEAs developing strategies to deal with antisocial behaviour offline.

A separate category is that of politically and morally motivated individual perpetrators of online aggression, many of whom, unlike subcultural trolls, are motivated not by enjoyment but by a desire to highlight and oppose what they see as immoral behaviour. These forms of aggression are typically directed at public actors: politicians who disregard norms of political correctness, corporations that violate human rights, or academics who violate scientific norms by engaging in plagiarism. A recent study of online aggression of this sort on political sites found that, contrary to common assumption, anonymity did not increase online aggression (Rost et al., 2016). Rather, a greater proportion of aggressive posts were made by non-anonymous than by anonymous commenters.

This should be distinguished from a further category of politically sponsored trolling, namely that of organised groups representing political agents (sometimes even national governments) who use aggressive and defamatory language to counter and harass those who criticise the positions of those agents. Examples include the trolling by professional pro-Russian commenters of political threads on journalistic media sites during the Ukraine crisis of 2014.39

Those who post abusive content online are as likely to be women as men. Challenging commonly held assumptions about the behaviour of different genders, a recent study, which provided a snapshot of misogynistic internet trolling globally on Twitter over three weeks, found that 50% of those tweeting aggressive and misogynistic messages were women.40

Some trolls are bots. Many troll bots are employed for marketing purposes. The number of aggressive or offensive troll bots operating online is unknown, and little research exists on the identities of the agents funding and coordinating such bots. But the extent of bot use is impressive: research from 2013 suggests that 9% of Twitter accounts are fake and that many of these are bots.41

Who are the targets and victims of trolling?

While anyone can become a target of trolling, victims of abusive trolling online include, in particular, young people, women, LGBT people, ethnic minorities, and celebrities or anyone with a public-facing role. A US-based Pew Research Center survey published in 2014 found that 70% of 18-to-24-year-olds who use the internet had experienced harassment, and that 26% of women in that age group said they had been stalked online.

Where does trolling occur?

All forms of trolling and online abuse described here happen on regular social media sites of all kinds as well as on specialist online forums. They occur in particular in online spaces whose technological features make them attractive sites for posting illicit material. For example, 4chan's /b/ board deletes most threads in a matter of minutes, while anonymity-granting tools like Tor make the dark web a hotspot of abusive sites (Jardine, 2015). An estimated 40 revenge porn websites exist in the USA; these enable the posting of explicit material and then charge a fee to remove it, exploiting the position of victims for profit (Citron, in Think Progress).42 Apart from attractive technological features, some sites are targeted for their high shock-and-outrage value. For example, memorial pages on Facebook are heavily targeted by trolls seeking to upset grieving friends and relatives (Phillips, 2016).

Which actors have a role in countering trolling through social media?

Front-running public security planners in counter-trolling are found in the UK (individual and regional police forces; central government; local government; national LEAs, e.g. the UK's National Crime Agency). Businesses (telecoms and social media providers) and campaign groups also feature significantly in efforts to counter this activity. Parents and schools have a special role to play in educating young users of social media. The press and online journalistic media have an equally important role in raising awareness of trolling and of how victims can protect themselves. Self-styled 'troll hunters' often work in tandem with journalists to identify and expose trolls. Public figures, such as celebrity victims of trolling, can also use their position as role models or their public platform to play an important role in this respect. Finally, social media providers and the communities that use them are uniquely well placed to counter trolling.

How is social media used by public security actors to counter trolling?

Social media providers have a range of digital techniques for detecting and blocking illegal content, but they also make use of codes of conduct, community rules and educational material to promote respectful communication on their platforms. Police use social media as a source of evidence of, and intelligence on, trolling, and also as a communication tool with which to inform and educate the public. A range of NGOs and campaign groups develop tools and sites to educate people about the risks of social media use.43 Possibly because of their relative lack of power, actors who are not public security authorities sometimes turn the techniques used by trolls against them in order to identify, combat, and punish trolling. Thus, in Sweden, Germany and the USA some groups find and expose the identities of those posting racism and hate speech online, thereby naming and shaming them (Chen, 2014; De Vries, 2016; Webb et al., 2016). Similarly, counter-speech is employed by online communities to challenge offensive trolling.
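As a toy illustration of the simplest kind of automated detection mentioned above: a keyword watch-list that routes suspect posts to human moderators. No provider's actual system is described in this review, and the term list, function name and threshold below are invented for the sketch; production systems rely on far richer signals (machine-learning classifiers, user reports, account metadata) rather than bare keyword matching.

```python
# Hypothetical watch-list; real moderation pipelines use ML models and
# user reports, not a hand-written keyword set like this one.
ABUSIVE_TERMS = {"kill yourself", "worthless", "nobody likes you"}

def flag_post(text: str, threshold: int = 1) -> bool:
    """Return True if the post contains at least `threshold` watch-list terms."""
    lowered = text.lower()
    hits = sum(1 for term in ABUSIVE_TERMS if term in lowered)
    return hits >= threshold

# A flagged post would typically be queued for moderator review,
# not deleted automatically.
print(flag_post("You are worthless, nobody likes you"))   # True (two hits)
print(flag_post("Great photo from the memorial service")) # False (no hits)
```

Even this trivial sketch shows why keyword filters alone are weak: they miss misspellings and coded language, and they flag innocent uses of listed words, which is why human review and community reporting remain central.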

The influence and impact of social media use in the domain of public security and policing

Impacts on individuals of cyberbullying, cyberharassment and cyberstalking

Cyberbullying can be more harmful than offline bullying because it can occur 24 hours a day, 7 days a week, and can be experienced each time a person goes online. Specific psychological effects cited in the literature include: isolation and relationship problems; fear; depression; loss of confidence; self-harming; suicide (various, cited in Williams and Pearson, 2014); loss of employment and paranoia (Citron and Franks, 2014); and serious harm to a person's reputation (Webb et al., 2016).

Impact of revenge porn, sextortion and sexual naming-and-shaming

A US study suggests that one in ten former partners threatens to post explicit images of exes online, and that of those who threaten, an estimated 60% follow through (Dawkins, 2014). The impact of these forms of online abuse is gendered, because women and girls tend to be the victims (ibid.). In the Netherlands, a 13-year-old girl committed suicide after her name was published on a 'banga list', a list of girls identified maliciously by local boys as being sexually promiscuous.44 The risks of sexting in this context are made clear by a study by the UK Safer Internet Centre, which showed that 88% of sexting pictures or videos end up on other websites, sometimes even on porn sites.45

Impact of online hate towards minority/disadvantaged groups

  • Women: Though some research suggests men are subject to milder forms of online harassment, women suffer the most persistent and harmful abuse (Pew, 2014). UK research from 2016 showed that, over a three-week period, 80,000 Twitter users were targeted for aggressive misogynistic abuse (Demos).
  • LGBT community: In Russia, social media is used to entrap LGBT individuals into meeting groups of self-proclaimed 'paedophile hunters', who film the meetings and use the videos to humiliate and shame them online (De Vries, 2014).
  • Trans people: UK research shows that trans people are significantly more likely than non-trans LGB people to have been a direct victim of hate crime involving online abuse. Anti-LGBT hate crime is highly repetitive in nature for trans people, meaning that most trans individuals experience multiple incidents of abuse each year.46
  • Black & Minority Ethnic Groups: Racist abuse online is widespread. Islamophobic hate speech is a category of specific concern. A recent report on Islamophobic hate speech in the UK found that victims of online abuse were afraid that the abuse would migrate offline into violence, especially if their online profiles were public.47


37. Relatively innocuous forms of trolling also exist, like the use of troll bots programmed to deliver marketing messages automatically on behalf of companies, or the use of social media to deliver mocking and sometimes infantile but mostly humorous messages. This summary addresses forms of trolling that challenge or undermine public security.

38. See, for example, Creese, B. and Lader, D. (2014) 'Hate Crimes, England and Wales, 2013/14', Home Office Statistical Bulletin.

39. RAND Corporation Report (2016) 'The Russian Firehose of Falsehood Propaganda Model: How it Works and Options to Counter it'. See also Elliott, C. (2014) 'Pro-Russia trolling below the line of Ukraine stories', The Guardian, 14 May.

40. Centre for Analysis of Social Media (2016) 'The Scale of Online Misogyny'.

41. NBC News (2013) '1 in 10 Twitter Accounts is Fake, Say Researchers', 25 November.

42. Danielle Citron is quoted in 'The Trolling Epidemic, Quantified' on the site Think Progress, 23 October 2014.

43. See, for example, a Dutch site educating young people about the risks of sexting.


45. Further statistics illustrating the risks in the USA are given in a report by a campaign called 'The National Campaign':
  • 21% of teen girls and 18% of teen boys have sent or posted nude or semi-nude images of themselves.
  • 61% of all sexters who have sent nude images admit they were pressured to do it at least once.
  • 15% of teens who have sent nude pictures of themselves send these images to people they have never met but know from the internet.
  • 17% of sexters share the messages they receive with others, and 55% of those share them with more than one person.

46. VKontakte article on LGBT hunters in Russia.

47. TellMAMA (2015) 'We Fear for Our Lives: Offline and Online Experiences of Anti-Muslim Hostility'.

Read or download the whole report: 'The Emerging Role of New Social Media in Enhancing Public Security: State of the Art Review'.

