Image courtesy of "Congress is investigating how Twitter bots may have influenced the US election", Quartz
In late 2017, the Twitter account of Groundviews was being trolled – or in other words, having bitter invective levelled against it in a sustained manner – in an entirely new way. This piqued our interest. Since its inception in 2006, Groundviews has generated all manner of violent, venomous pushback and responses to content it has produced, published and promoted. Over the years, this feedback has ranged from threats of bodily harm and worse to, far more often, the most virulent of expletives. On the other side of the discursive spectrum, the site has also been extremely fortunate to host and feature considered, civil engagement, including principled disagreement and coherently articulated alternative points of view.
With the first-of-its-kind comment and content moderation policy in Sri Lanka, the worst commentary generated on the website never gets published – including, in every instance, bitter invective directed at the Rajapaksas, among them the former President and his brother, the Secretary of Defence Gotabaya Rajapaksa, when and after they were in power. We have less control over social media, where, through individual or institutional accounts, content published on Groundviews is repeatedly reacted to and often reviled. This pushback is now expected, and also comes from known quarters depending on the issues we cover. By known, we do not always mean the real, physical identity or geo-location of an interlocutor, but their digital avatar, which over time comes to represent – almost like a signature or brand – a particular political ideology, worldview and bias. At Groundviews, the curators pay attention to this pushback because it is vital, as a media producer, to understand what triggers trolls and what their motivations are, to the extent they can be discerned from online interactions, content and commentary. Knowing how and from where the worst pushback is likely to come, over what issues and at what time, amongst other factors, is important when shaping a progressive content agenda and producing content that informs, influences and instigates democratic change, critique and contestation.
Trolls maketh a politician?
In the last quarter of 2017, pushback over Twitter to content Groundviews published on the same platform came from sources not encountered or interacted with before. This piqued the interest of the site’s founding editor, Sanjana Hattotuwa, for one key reason: all the accounts publishing content against Groundviews were overwhelmingly promoting, and partial to, Namal Rajapaksa, a Member of Parliament and the extremely (social) media savvy son of the former President Mahinda Rajapaksa.
The discovery also came at a time when Groundviews was researching the weaponisation of social media to game, and ultimately undermine, democratic electoral processes. As the Economist recently averred in an exhaustive article looking at the impact social media has on democracy and democratic institutions,
In 2010 Wael Ghonim, an entrepreneur and fellow at Harvard University, was one of the administrators of a Facebook page called “We are all Khaled Saeed”, which helped spark the Egyptian uprising centred on Tahrir Square. “We wanted democracy,” he says today, “but got mobocracy.” Fake news spread on social media is one of the “biggest political problems facing leaders around the world”, says Jim Messina, a political strategist who has advised several presidents and prime ministers. Governments simply do not know how to deal with this—except, that is, for those that embrace it.
This is now a well-studied phenomenon, even though there is no consensus as to what can be done about it. The issue is complex, involving governments, the UN and other international agencies, civil society, Silicon Valley companies, and a dark economy where actors, ranging from individuals to governments, are willing to pay whatever it takes to drown out, discredit, deny or decry anyone and any narrative that contests what they alone want kept alive online. Evidence of how social media was used to target and deviously influence voters in constituencies ranging from the US to the UK, France and Germany is a Google search away. What matters is not so much the technical detail of how social media is weaponised, but the fact that in a country like Sri Lanka – where there is very high adult literacy and yet extremely poor media and information literacy – what is promoted over social media is often what is trusted, shared widely and acted upon. This presents unique challenges for, amongst others, election monitoring bodies, which are traditionally geared to look at electoral malpractices at the point of exercising one’s franchise, violence that prevents or hinders this, and malpractices during the collection, counting or release of final results. The kind of threat posed by social media weaponised to promote a particular political ideology, idea, person, party or process is not something Sri Lanka’s government writ large – and in particular the Elections Department or any independent election violence monitoring body – has to date even imagined, let alone developed the technical capacity to monitor and address.
But how to make the connection between all this and the kind of Twitter accounts that were increasingly trolling Groundviews? We started with a very basic analysis of what could be discerned about these Twitter accounts, using nothing beyond tools and services openly available online. There was no hacking or doxxing (the often-vindictive publication of private information) involved. The date of creation of any public Twitter account can be gleaned from a number of web platforms. We used http://www.mytwitterbirthday.com. Accounts focussed on Twitter included,
Other accounts active at the time have since been deactivated or deleted. Some interesting patterns across these accounts emerge when their activity from inception is plotted, using the free http://www.tweetstats.com platform.
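As an aside, for readers curious about the mechanics: since late 2010, Twitter's tweet IDs have been "snowflake" values that embed their creation time, so a tweet's timestamp can be recovered offline from the ID alone – one reason services like those above can date public activity without special access. A minimal sketch (the example ID is hypothetical):

```python
from datetime import datetime, timezone

# Tweets since late 2010 carry "snowflake" IDs: the top bits are a
# millisecond offset from Twitter's epoch (1288834974657 ms after 1970).
TWITTER_EPOCH_MS = 1288834974657

def snowflake_to_datetime(snowflake_id: int) -> datetime:
    """Recover the UTC creation time embedded in a snowflake ID."""
    ms = (snowflake_id >> 22) + TWITTER_EPOCH_MS
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

# Hypothetical tweet ID, used only to demonstrate the decoding:
print(snowflake_to_datetime(875807601674018816))  # a mid-2017 timestamp
```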
All the accounts were created around the middle of 2015, and were rather active that year. The accounts then went mostly dormant over the whole of 2016. Almost all were re-activated around mid-2017. From then on, over the period they were monitored regularly till the end of 2017, all the accounts continued to publish content,
- against accounts that held the Rajapaksa family in general, and Namal Rajapaksa in particular, accountable for complicity in, amongst other things, corruption, violence, human rights violations
- originally tweeted or published by Namal Rajapaksa, basically acting to extend the reach and visibility of the original content by staggered re-publication
This was more than coincidental. It showed strategy and intent. This was a troll army – a group of individuals acting in concert to promote a single individual and, by extension, a version of events as well as a way of framing things. A troll army works in two ways, both of which were evident here. First, it works to suppress, by bombarding accounts with responses, content that places in the public domain perspectives, narratives, testimony or facts contrary to what is established as a mainstream narrative. Secondly, it seeks to promote a specific viewpoint, idea or narrative by giving the appearance, on social media, that there is wide recognition and acceptance of the original content, by virtue of it being repeatedly republished, referenced, quoted and otherwise highlighted.
The fact that these accounts were phony was cemented by a very simple investigation of profile images. Many take for granted that the profile image associated with an individual account on social media is in fact an image of the person who is the author of the content. This isn’t always so, as most famously demonstrated some years ago by Amina Arraf, a self-described Syrian-American lesbian who blogged as A Gay Girl in Damascus.
Take for example the account of @dhinukas on Twitter. The profile image for ‘Dhinuka Silva’, according to TinEye – a search engine that locates original and duplicate images on the web – is from 2013 and a rather dubious sounding site, http://www.chicfactorgazette.com. The @NinaNajumudeen account is registered under the name Wahab Najumudeen. However, the photo is really of a Pakistani male model, taken in 2008 and posted on Flickr. The @RamananKpradeep account is ostensibly registered to someone called Ram K Prathaap. The photo is really of the well-known Indian actor, Kamal Hassan. Then there is @KandasamyMyu, an account that later came to our attention. The registered name is Myu Kandasamy, but the image is a photo of, as noted on Facebook, a Sri Lankan named Akila Wijeratne, taken off Flickr, shot by the photographer Jeyakumaran Mayooresan and published in 2010. An account registered to Shiron Baskar, going as @BaskarShiron on Twitter, features a photo that is actually of Baskaran Selvaraj, an Indian listed on LinkedIn and also featured in an article published in The Hindu newspaper. He has no connection whatsoever to Sri Lanka.
This was a clear pattern. The troll army retweeting and promoting Namal Rajapaksa’s Twitter account was overwhelmingly anchored to profile photos that were fake, and registered to names that deviously sounded like they were from the Muslim, Tamil and Sinhala communities, but were also fake.
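For illustration, reused profile photos of the kind described above can also be caught without a reverse image search, by comparing perceptual hashes of the images. The sketch below shows the core of a "difference hash" (dHash) on a 9×8 grayscale grid; real code would first decode and resize each photo to such a grid (e.g. with Pillow), which is not shown here:

```python
# A "difference hash": one bit per horizontal neighbour comparison on a
# 9-wide grayscale grid, so each of the 8 rows yields 8 bits (64 total).
def dhash(pixels):
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Bits that differ between two hashes; 0 means identical images."""
    return bin(a ^ b).count("1")

# Synthetic 9x8 grids standing in for decoded profile photos:
grid = [[i * j for i in range(9)] for j in range(8)]           # brightens left to right
flipped = [[(8 - i) * j for i in range(9)] for j in range(8)]  # brightens right to left

print(hamming(dhash(grid), dhash(grid)))     # 0 – the same photo reused
print(hamming(dhash(grid), dhash(flipped)))  # 56 of 64 bits differ
```

A small Hamming distance between two hashes flags near-duplicate images, so the same stock photo registered to several "different" people stands out immediately.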
From the accounts they followed to the way they tweeted, from how they republished specific content to the way they got activated over certain periods of time, from the devices used to the times at which the tweets were posted, there are other data driven determinants around these accounts, and others not included in this article, that strongly suggest an element of collaboration and coordination.
The issue with all this is that there is no easy way, without comprehensive access to all of Namal Rajapaksa’s communications from 2015 to the end of 2017, to determine with any degree of accuracy collusion or collaboration between him and the trolls noted above. Without any data-driven evidence or technical proof, Namal Rajapaksa can justifiably claim he is entirely removed from this censorious concert of troll accounts working to quell dissenting, critical opinion and buttress his own content.
The story, at this time, was worth publishing if only to just get a response from Namal Rajapaksa. But we were sufficiently intrigued to dig deeper, especially since this level of sophistication around the strategic shaping of online commentary over and on social media was new and unprecedented. And that’s when things started to get even more interesting.
Betting on bots: Threats to democracy and timbre of public discourse
From the Philippines to India, from the US to Russia, troll armies are becoming an established feature of a censorious, authoritarian political culture. Troll armies often comprise state-sponsored individuals, in their hundreds or thousands, who pseudonymously or anonymously attack, decry and seek to devalue the opinions of opponents, civil society and critics. This is now coupled with automated agents on social media platforms like Twitter, called bots. A bot is not necessarily a harmful or malicious technology. However, as with any technology, the more dangerous use comes in the form of specialised or purpose-built bots around, for example, elections. The international media is rife with examples of how bots have negatively influenced elections in the West – the debate is about the degree of influence, not the indubitable impact bots have on voters. For example,
On 3 September, as German Chancellor Angela Merkel and her main opponent Martin Schulz faced off in an election debate that many viewers panned as more of a duet than a duel, a far livelier effort was underway on social media. People on Twitter started using the hashtag #verräterduell, which translates as “duel of traitors” and mirrors the claim by the right-wing Alternative für Deutschland party that both Merkel’s mainstream Christian Democrats and Schulz’s Social Democrats have “betrayed” the country.
Yet much of the venom may not have been fuelled by angry voters, researchers say. Instead it looks like the work of bots, or fake social media profiles that appear to be connected to human users, but are really driven by algorithms.
Keeping all this in mind, and using an online platform called https://www.exporttweet.com, which cost US$ 69.99 paid with a personal credit card, the authors were able to access and download all of Namal Rajapaksa’s tweets, from the time he opened his account on 16 April 2013 to the day the archive was generated, 16 October 2017. Download the entire tweet archive and the platform-generated reports here. Just the topline analysis of all of Namal Rajapaksa’s tweets, generated by the web platform itself, indicated very interesting trends in how this account was being used.
At the time of publication, Namal Rajapaksa has around 224,000 followers. At the time the research for this article was conducted, in late October 2017, he had around 199,600 followers.
At the outset, it is important to note clearly the difference between the troll / fake accounts noted above and Namal Rajapaksa’s own, personal, verified account (@RajapaksaNamal): whereas he can claim ignorance of the content and behaviour of external accounts, it is much harder if not downright impossible to say he is, to any degree, unaware of metrics around his own account, including the names and number of followers generated over a particular span of time. Any Twitter user, at any level of competency with the platform, new or seasoned, looks at and is, in fact, informed by the platform itself about new followers. It is not possible to manage a Twitter account without being aware of this. Furthermore, for any public personality or institution, and particularly any politician, adding followers is a key goal of being on Twitter.
Of many other possible vectors of analysis, two key graphs stood out in the Executive Summary. One, the year Namal Rajapaksa’s followers joined Twitter. The other, the activity of his followers.
Note the dramatic increase in followers from 2016 to 2017. Even for a popular politician, which Namal Rajapaksa no doubt is, this increase is almost impossible to generate organically. Tellingly, more than half of Namal Rajapaksa’s followers were dormant, meaning they hadn’t actively used their accounts over the past year. This in turn suggests a very high probability that these accounts are activated on demand. Another giveaway around the true nature of Namal Rajapaksa’s followers is how many followers they in turn have. At the time, @RajapaksaNamal had approximately 199,600 followers. Around 196,000 of these were themselves followed by fewer than 500 accounts, suggesting that they were not very active, lacked any organic appeal or originality, were recently created, or some combination of all this.
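The two heuristics used here – dormancy and low follower counts – are simple to apply mechanically across an export of follower records. A minimal sketch, where the field names (`last_tweet_date`, `followers_count`) are assumptions standing in for the export's actual schema:

```python
from datetime import date

# Field names below are illustrative, not the web platform's real schema.
def is_suspect(follower, today=date(2017, 10, 16)):
    """Flag accounts dormant for over a year, or followed by fewer than 500."""
    dormant = (follower["last_tweet_date"] is None
               or (today - follower["last_tweet_date"]).days > 365)
    low_reach = follower["followers_count"] < 500
    return dormant or low_reach

followers = [
    {"last_tweet_date": date(2015, 8, 1), "followers_count": 3},     # dormant
    {"last_tweet_date": date(2017, 10, 1), "followers_count": 1200},  # active
]
print(sum(is_suspect(f) for f in followers))  # 1
```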
Since Groundviews didn’t want to trust the topline report produced by the web platform, and now had thousands of tweets produced by @RajapaksaNamal to examine for anomalies and patterns, we reached out to and enlisted the help of Yudhanjaya Wijeratne, whose credentials at number crunching, including around elections, are well-known. Yudhanjaya’s analysis is comprehensive and follows. At his request, in order to conduct a more comprehensive analysis of the data, Sanjana used the same web service as previously noted to download another complete archive of Namal Rajapaksa’s tweets on 22 November 2017. The cost, the same as before, was also covered by personal credit card.
For the purposes of this analysis, we examined a dataset of 199,555 users – every single follower that Namal Rajapaksa had as of 09/2017. The dataset gave us each follower’s Twitter ID and URL, name, profile image URL, bio, and a few other variables indicating location, timezone, the date of account creation, the date of the last tweet, and the content of the last tweet.
The first order of business was to identify which parts of this data could prove useful. Location, which would have been ideal for pinpointing where the users were coming from, turned out to be a dud: less than a quarter of the users had location data, and even after using timezones to infer locations, we ended up with only 56,327 users – a significant reduction from our original data.
Furthermore, locations reported would range from country-level to village-level; Sri Lankan users would show up as variants of “Sri Lanka”, but also as “Colombo”, “thambuttegama,sri lanka”, “col-2”, “eswatte”, and other variants too numerous to accurately fold into a unified location within our timeframe. There’s even “Diagon Alley” and “India my love”.
For example, here are the most common locations reported by users:
Sri Lanka 22,415
(US / Canada) 1,784
Sri Jayawardenepura 1,336
sri lanka 857
Colombo, Sri Lanka 762
New Delhi 689
Sri lanka 426
Doha, Qatar 230
This is to be expected, and the alternate spellings confound the issue. We can say that at least half of the users with locations are from somewhere in Sri Lanka, but it’s not particularly useful. While I did build a map, it only represents some 11,000 users and is not to be considered representative; it’s more useful for seeing the spread of countries from which Namal Rajapaksa draws his following. Click here for live view or click on the image below for a higher resolution version.
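Folding the spelling variants into a single label is itself straightforward; the hard part, as noted, is enumerating the variants within the available timeframe. A sketch with illustrative normalisation rules (not the ones actually used for the map):

```python
# Illustrative normalisation rules – a real pass would need a much longer
# list of place names and spelling variants.
def normalise_location(raw):
    """Fold free-text location variants into a single country label."""
    text = raw.strip().lower()
    if "sri lanka" in text or text in {"colombo", "sri jayawardenepura",
                                       "col-2", "eswatte"}:
        return "Sri Lanka"
    return raw.strip() or None

counts = {}
for raw in ["Sri Lanka", "sri lanka", "Sri lanka", "Colombo, Sri Lanka",
            "thambuttegama,sri lanka", "Doha, Qatar"]:
    key = normalise_location(raw)
    counts[key] = counts.get(key, 0) + 1
print(counts)  # {'Sri Lanka': 5, 'Doha, Qatar': 1}
```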
And so we went back to square one and examined the other variables in the dataset. Interesting patterns show up in bios.
These one-word lists are literally the contents of bios. This is a dumb search (note that two variants of Students exist), and it is strange to see 370 strangers with an identical bio. This isn’t limited to one-word bios: there are ten bios that start with “Sports News Music Entertainment Lifestyle Technology”.
There are, overall, 1,269 bios that are “shared”. There are 6,920 users sharing them. While some bios seem natural (Hi?), some combinations are far-fetched, down to exact combinations of full stops and commas.
This seemed to indicate that at least 3.5% of Namal Rajapaksa’s Twitter following might be bots. This is not hard evidence of, say, bought followers; any Twitter account above a certain size tends to attract fake followers, and in Namal Rajapaksa’s case, this is a very small portion of his nearly 200,000-strong Twitter followership. Even if there is a large-scale bot operation at work here, it is a remarkably illiterate one that cannot randomise bios; there are simple programs to string together random combinations of words, and a competent botter should not be captured here at all.
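The 3.5% figure follows directly from the counts above: 6,920 of 199,555 followers hold a bio someone else also holds. A sketch of how such "shared" bios can be tallied, using illustrative bios rather than the real dataset:

```python
from collections import Counter

# Illustrative bios; ""/None stand in for accounts with no bio at all.
bios = ["Student", "Student", "Student", "Cricket fan", "", None,
        "Sports News Music Entertainment Lifestyle Technology",
        "Sports News Music Entertainment Lifestyle Technology"]

counts = Counter(b for b in bios if b)                  # ignore empty bios
shared = {bio: n for bio, n in counts.items() if n > 1}
sharing_users = sum(shared.values())                    # users holding a shared bio
print(sharing_users)  # 5

# The article's figure: 6,920 sharing users out of 199,555 followers.
print(round(6920 / 199555 * 100, 1))  # 3.5
```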
What was more interesting were the creation dates.
This is a visualisation of the dates of creation of all 199,555 accounts following him. As you can see, a large number of accounts were made in 2017. There are also large clusters of accounts made from 2014 to 2016, in a pattern that indicates a relatively steady series of creations throughout these years.
Click here for higher resolution version. 1,850 accounts were registered in 2009. 3,434 in 2010. 4,862 in 2011. 5,913 in 2012. 8,532 in 2013.
Namal Rajapaksa was elected to the Parliament in 2010, so this trend fits very well with his stay in office and growing power.
2014, however, sees 38,857 accounts made. 2015 sees 45,524. 2016 sees a lull – 25,354 accounts. 2017 sees a whopping 65,042 registered. This is unusual because it does not tie into the pattern of Namal Rajapaksa’s power in Parliament. Either Namal Rajapaksa is drawing tens of thousands of people onto Twitter – something that the platform should thank him for – or he may be plagued by some campaign that produces such fake accounts.
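Tallying creation dates by year, as in the visualisation above, is a one-line aggregation once the export is parsed. A sketch with illustrative dates standing in for the dataset's account-creation column:

```python
from collections import Counter
from datetime import date

# Illustrative creation dates, not the real 199,555-row dataset.
created_dates = [date(2014, 3, 1), date(2014, 9, 12), date(2015, 5, 4),
                 date(2017, 7, 20), date(2017, 8, 2), date(2017, 11, 30)]

by_year = Counter(d.year for d in created_dates)
print(sorted(by_year.items()))  # [(2014, 2), (2015, 1), (2017, 3)]
```

The same `Counter` over `(d.year, d.month)` or full dates gives the monthly and daily breakdowns used in the following sections.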
For the sake of comparison, we compiled a list of when Namal Rajapaksa made the news over 2017, off Google News. Access the document here. Clearly, the times at which Namal Rajapaksa featured in the headlines bear no correlation with the increase in followers on his account. Thus, we broke it down by months:
After 2015/02, the pattern of follower creation becomes remarkably predictable. Almost all months are in the 3,500 range, give or take a few hundred: the mean value is 3,619 accounts created a month. This holds for an entire year, to 2016/02. Then there is a dip; for seven months the figures enter the 2,000s range, and immediately afterwards ramp up to the same pattern. Then there is an explosion from 2017/07 onwards, where the numbers triple.
The pattern becomes even more obvious when we examine it by days. These are the results for 09/2017:
This is a snapshot of January 2016:
This is a snapshot of 07/2015:
The consistency is remarkable. These months are random picks. Each demonstrates a daily number of followers created. This figure rarely deviates wildly from the monthly average. It does not correspond to newsworthy events around Namal Rajapaksa (public appearances, controversies etc).
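The consistency itself can be quantified: the coefficient of variation (standard deviation divided by mean) of daily sign-up counts is one simple measure, and organic growth is typically far spikier than the flat pattern seen here. A sketch with illustrative counts:

```python
from statistics import mean, pstdev

# Illustrative daily sign-up counts for one month; the real figures come
# from grouping account-creation dates by day, as described above.
daily_signups = [118, 121, 115, 120, 119, 122, 117, 120, 118, 121]

cv = pstdev(daily_signups) / mean(daily_signups)
print(round(cv, 3))  # well under 0.05 – suspiciously steady for organic growth
```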
Since these aren’t people following Namal Rajapaksa, but actual accounts being created, there are only a few conclusions one can draw:
- Someone pointed to Namal Rajapaksa’s Twitter account and demanded a certain number of followers a month, checking back once a month or so to set that figure.
- This someone may be working for or against Namal Rajapaksa – sending fake followers is also a takedown tactic. That said, it is close to impossible that this sustained and significant increase in followers, and their dubious nature, would have gone unnoticed by Namal Rajapaksa over a period of at least one year.
- Namal Rajapaksa’s followers are so incredibly co-ordinated that they sign up for Twitter on some sort of quota system – only possible in a country with a reported 15.1% Internet penetration if someone sits down in front of a computer, logs in, and the line moves up one.
In any case, the data clearly suggests Namal Rajapaksa drawing a highly predictable number of followers on to Twitter every day. At this point in Twitter’s growth, this could be considered a public service.
Implications for democracy, electoral processes and public discourse
Jokes aside, in a Sri Lankan context, there is something very new about all this for politics, very interesting for the researcher and very disturbing for the health of electoral democracy, with just this one account. There may be other examples.
Twitter, in general, has a problem with bots. As this article on Mashable notes,
Twitter bots can be thought of as autonomous programs or entities that generate social content. Some of this content is harmless, like sports updates, and some of it intentionally malicious and polarizing — like the over 1,600 known bots that tweeted extremist right-wing views during the polarizing 2016 campaign, explored in a recent report from Bloomberg.
The influence of bots is strong, and much of this strength comes from sheer numbers. Earlier this year, researchers from the University of Southern California and Indiana University suggested that between nine and 15 percent of Twitter users are actually bots. Twitter has around 328 million users globally, so even if the low estimate is taken, that’s 30 million bots.
In September 2017, Twitter itself published a detailed note on how bots, misinformation and state-level interference, by Russia, adversely impacted the US Presidential Election. In the note, Twitter avers that they,
…engage with national election commissions regularly and consistently bolster our security and agent review coverage during key moments of election cycles around the world. We will continue to do this, and expand our outreach so that we’ve got strong, clear escalation processes in place for all potentialities during major elections and global political events.
It may well be the case that Namal Rajapaksa is simply a leading magnet in Sri Lanka for Twitter bots. This, however, is highly unlikely, given the data-driven analysis above, which shows intent from the account holder and owner, Namal Rajapaksa, to buttress followers in a planned, strategic manner. It is also highly unlikely this is the only example in Sri Lanka’s rich, varied and rapidly growing social media landscape, which includes platforms outside of Twitter like Facebook, Instagram and, increasingly, instant messaging apps like WhatsApp and Viber. Both authors, independently of each other, followed and studied the #genelecsl hashtag on Twitter primarily during the General Election of August 2015, and came up with similar findings on the weaponisation of accounts to promote misinformation, rumour, half-truths and divisive propaganda (read Sri Lanka Parliamentary Election 2015: How did Social Media make a difference? by Nalaka Gunewardene and Archives of General Election 2015: #SLGE15 & #GenElecSL to access all the tweets). Clearly, three years hence, what analysts observed then as embryonic strategies to influence voters and shape electoral outcomes would now be more widespread and normalised.
What’s interesting for social media research is the manner in which the @RajapaksaNamal account on Twitter is used, or arguably, abused. It reflects a new appetite for social media strategies specifically engineered for electoral gain amongst all politicians, and not just the Rajapaksas and Joint Opposition, involving human trolls as well as automated bots. The intent is clear – to influence voter perceptions and public discourse, over and beyond social media.
This is also what makes this account a possible harbinger of things to come, hitherto unseen in our country’s electoral framework and related public discourse. The media strategy is obviously anchored to young, impressionable, new and social media savvy voters. First time voters in February 2018 will number around 900,000, according to the Department of Elections. From the Presidential Election in January 2010 to the local government election next month, around 15% of the total number of eligible voters in Sri Lanka will be between the ages of 18 and 34. How this demographic generates political opinion, shares it, seeks to influence peers and pegs trust in ideas as well as personalities, including politicians, is fundamentally different to an older, less new media savvy demographic. This millennial bulge – if one can call it that – requires political parties to adopt new strategies, and adapt older propaganda, in the capture of votes. It is clear that Namal Rajapaksa’s Twitter account, and by extension his (social) media strategy, is ahead of everyone else in this regard.
Finally, as noted at the start of this article, the danger around the weaponisation of social media around electoral processes is that neither government nor civil society is prepared to deal with it. They do not have the imagination, technical expertise, trusted advice, and in the case of civil society, funding, to tackle these new, potentially outcome-shaping technologies and their use as well as misuse.
The problem with bots is that they create false impressions around trends, and ultimately, truth. Bots can serve to undermine core issues, undermine opponents, amplify partisan content and propaganda, and coupled with human actors (troll armies) become online what a gang of thugs often do to unarmed supporters of a rival political party – or in other words, the violent suppression of political dissent.
As Twitter is only now admitting, the problem with bots around the US Presidential Election was significant.
The company also said there was way more Russia-linked bot activity during the election, too. Previously, Twitter reported that some 36,000 bots were sending out automated tweets in the short period of time it analysed surrounding the election, but it now says that number was a low-ball. Twitter found an additional 13,500 bots, kicking that number to 50,258. That’s a lot of bots. Twitter’s notification efforts don’t include those who may have interacted with these accounts—a number that’s likely far higher than 677,775.
This is not just something that is of relevance to a programmer, coder or geek. The way social media is now weaponised impacts, and matters to, all citizens – those connected, those not connected, first time voters and older voters. This goes to the heart of any electoral process, in any country. As Wired magazine noted in 2017,
All this bot activity could be changing your perception of the election. And that’s the point. This is a propaganda war.
The same article went on to note that,
There’s virtually no way to figure out who creates these bots, says Philip Howard, another of the Oxford University bot researchers. That’s the whole point of bots—the actors responsible want to spread a message broadly, but don’t want that message to be traceable to an identifiable source. “There’s some evidence that the political action groups are behind some of the bots—we know that they spend money in the direction of the candidates they’re supporting,” says Howard. “But Twitter bots are also unique in that it’s possible for pretty average users to generate them.” Content and advertising shops have long used bots, Howard says, and there are many vendors that sell them cheaply by the thousands to any buyer who wants to set them loose. And this is all legit because currently, there are no rules for bot activity overseen by the Federal Elections Commission, or any other government agency.
Emphasis ours. The US Elections Commission wasn’t prepared for the sophistication of Twitter bots and the sheer volume of content they generated, which had an impact on the electoral process. It is clear that the astronomic growth of Namal Rajapaksa’s Twitter account alone over 2017, populated by bots, is indicative of how the account (no doubt in concert with others, and over other social media platforms too) will be leveraged, at some point, to influence electoral outcomes in 2018 and beyond. As far back as 2014, politicians like Wimal Weerawansa in Sri Lanka were observed artificially inflating their followers on social media. Whereas at the time this would have been done merely to generate numbers overshadowing the social media accounts of political rivals, the danger now is that these followers (in the form of bots and trolls) can also be strategically leveraged to quell dissent, shape narratives, highlight propaganda, spread misinformation, drown out critical voices, bully, act as echo chambers and shape social media discourse.
We already have indications of how all this will evolve. Namal Rajapaksa’s approach to public discourse and accountability mirrors the Rajapaksa family’s approach writ large to dealing with the past – delete when inconvenient and carry on as if nothing happened. The most recent and blatant display of this was the deletion of an inconvenient tweet objectifying women, after it was flagged by Groundviews in response to a tweet pontificating on the need to deal with violence against women.
Typical. After pontificating on violence against women, @RajapaksaNamal, as we suspected, has just deleted denigrating tweet from 2014 we flagged this morning. Yet another example of Rajapaksa family's approach to dealing with past! #lka #srilanka #vaw pic.twitter.com/T25MfdwwRv
— Groundviews (@groundviews) January 20, 2018
This attitude and behaviour are clear indications of Namal Rajapaksa’s mentality. By extension, it gives an insight into how over 224,000 followers, at the time of writing this piece, can be employed and deployed around elections, starting with February’s local government election.
Without sounding alarmist, Sri Lanka has already entered a new online political dynamic, in which the discursive landscape is governed by agents of censorship, manipulation and control outside the parameters of traditional observation and analysis. This isn’t just a technocratic concern. As the United Kingdom’s Electoral Commission flagged in late 2017, there is a need for a more robust conversation around transparency and accountability in the use of social media platforms for election campaigns.
It is unclear, at the time of writing, whether Sri Lanka’s Elections Commission, its constituent members and affiliated election violence monitoring bodies understand the importance of these developments. And if they don’t, the awareness raising and education (young) voters need in order to not be taken in by sophisticated online propaganda cannot take place. Over time, and not just for those directly connected to social media, this places Sri Lanka’s entire electoral system at risk of being gamed by a few, for parochial, partisan gain. Ultimately, it is not about Namal Rajapaksa or social media – it is about the health of our democracy and the integrity of our electoral systems. Protecting and strengthening both requires us all to engage with – as policymakers, politicians, leaders and citizens – the ways through which content seemingly the least harmful and most trustworthy can hold our democratic potential hostage, without us even realising it.
 Today’s bots can help us order food, shop for clothes, save money and find restaurants. For example, Digit helps you manage your money by showing your bank balance, upcoming bills and helping you save money through text messages. The Hi Poncho chatbot available in Facebook Messenger tells you the weather around you. Via What is a bot? Here’s everything you need to know, https://www.cnet.com/how-to/what-is-a-bot/, published on CNET.
 Social media ‘bots’ tried to influence the U.S. election. Germany may be next, http://www.sciencemag.org/news/2017/09/social-media-bots-tried-influence-us-election-germany-may-be-next
The first responses from the fake Twitter accounts identified in this piece strengthened what we have highlighted. See larger version of image embedded in the tweet below here.
Fascinating first responses from #troll army aligned to @RajapaksaNamal around our exposé. From 12:02 to 12:21, all tweets from web client, in quick succession, over multiple accounts. Classic Twitter troll behaviour. @yudhanjaya #lka #srilanka pic.twitter.com/q0tsWgs9Wr
— Groundviews (@groundviews) January 24, 2018