The hype around social bots: Is it justified?

“Most bots are still relatively stupid.”

Jörg Strebel from the startup BotStop does not believe that voters’ decisions in Germany are significantly influenced by social bots. According to him, the actual risks can be found elsewhere.

Social bots were a ‘hot topic’ before the parliamentary elections.

However, very few people have a clear understanding of the phenomenon.

Could you explain exactly what bots are?

A bot is a piece of software that can independently carry out tasks on the Internet. Most commonly, bots are associated with actions that would be frowned upon if a human did them in real life.

For example, the programs automatically click on online advertisements. Or they bombard servers with so many requests that the servers become overloaded and normal users can no longer access them.

And yet bots can also do meaningful things. In London, for example, there are bots that automatically help people contest parking tickets that were issued wrongly. They practically replace a lawyer.

The current public debate, however, mostly concerns social bots. These are automatically and centrally managed agents within social networks that publish or share masses of posts. In general, they do not appear individually but as part of a botnet, an entire network of such bots.

The cost of creating a bot is low, which is why it is easy to create entire bot armies. Our own analyses show that between 7% and 15% of the users on Twitter are bots.
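How such estimates are produced is not disclosed by BotStop; the following is only a minimal sketch of the kind of rule-based heuristic a bot detector might apply, with invented feature names and thresholds:

```python
# Minimal sketch of a heuristic bot score; NOT BotStop's actual algorithm.
# Feature names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float   # average posting frequency
    retweet_ratio: float    # share of posts that are retweets (0..1)
    followers: int
    following: int

def bot_score(acc: Account) -> float:
    """Return a score in [0, 1]; higher means more bot-like."""
    score = 0.0
    if acc.tweets_per_day > 50:          # humans rarely sustain this rate
        score += 0.4
    if acc.retweet_ratio > 0.9:          # almost never writes original text
        score += 0.3
    if acc.following > 10 * max(acc.followers, 1):  # mass-follows, few followers
        score += 0.3
    return score

suspect = Account(tweets_per_day=120, retweet_ratio=0.97, followers=12, following=4800)
print(f"bot score: {bot_score(suspect):.1f}")  # -> bot score: 1.0
```

Real detectors combine many more signals, but the rule-based core is this simple.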

What are the objectives of people who create bots?

Currently, most bots are used for advertising purposes. In these cases, they are very simple programs which, for example, have the task of spreading advertising slogans using specific hashtags. If the hashtag trends, everyone reading the associated tweets also sees the advertisement, even though it might have nothing to do with the topic of the hashtag.
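As a rough illustration of that mechanism: a trend-hijacking advertising bot does little more than glue its slogan onto whatever is currently trending. The function and data below are hypothetical placeholders:

```python
# Sketch of the trend-hijacking mechanism described above.
# get_trending_hashtags() stands in for a real trends API call.
def get_trending_hashtags() -> list[str]:
    return ["#Wahlkampf", "#Tatort"]  # stubbed example data

AD_SLOGAN = "Best coffee in town - order now at example.com"

def compose_spam_tweets(slogan: str) -> list[str]:
    # Attach the slogan to every trending hashtag so it shows up in those feeds.
    return [f"{slogan} {tag}" for tag in get_trending_hashtags()]

for tweet in compose_spam_tweets(AD_SLOGAN):
    print(tweet)
```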

In addition, click fraud is widespread: bots repeatedly click on online advertisements until the company’s advertising budget is exhausted. Since companies pay for every click, this practice leaves them paying for advertisements without gaining any real customers.
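The economics are easy to simulate; the budget and cost-per-click figures below are invented for illustration:

```python
# Toy simulation of the click-fraud economics described above (numbers invented).
budget = 1000.00        # advertiser's daily budget in EUR
cost_per_click = 0.50   # what the advertiser pays per click
fraud_clicks = 0
while budget >= cost_per_click:
    budget -= cost_per_click   # each bot click drains the budget
    fraud_clicks += 1
print(f"budget exhausted after {fraud_clicks} bot clicks")  # -> 2000 bot clicks
```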

In the U.S., many social bots have political objectives. In the months before and during the presidential election, there were many automated accounts whose only task was to search Twitter for specific keywords expressing criticism of Trump and to retweet the matching posts.
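Such a retweet bot needs only a few lines of code. A sketch assuming the tweepy library (v4) with placeholder credentials and a placeholder search query:

```python
# Sketch of a keyword-retweet bot. Assumes tweepy v4 and valid Twitter API
# credentials; all credential strings and the query are placeholders.
import tweepy

auth = tweepy.OAuth1UserHandler(
    "CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET"
)
api = tweepy.API(auth)

# Find recent tweets matching the target keywords and retweet each one.
for tweet in api.search_tweets(q="Trump criticism", count=100):
    try:
        api.retweet(tweet.id)
    except tweepy.TweepyException:
        pass  # e.g. tweet was already retweeted
```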

What are your observations shortly after the parliamentary election in Germany? Were such activities equally common here?

There were really masses of tweets about Trump. We could not detect similar trends for Germany, even though the technical capabilities are the same.

We analysed hashtags concerning the parliamentary election and found that the AfD was definitely a topic that generated a lot of attention. Hashtags about topics close to the AfD were indeed frequently used. However, when we looked at those tweets more closely, we noticed that many of them were official announcements by the party itself, about its electoral programme or upcoming events. Personally, I did not have the impression that automation played a particularly important role here.

Michael Fries and Jörg Strebel from BotStop

Does this mean that social bots do not have a significant societal impact here in Germany?

At the moment, everything is still rather harmless regarding politics, even though some social media posts can be very extreme. I cannot imagine anybody voting for the AfD just because they saw bot-posted tweets that all point in one specific political direction.

I think what happens instead is a reinforcing cycle: somebody already holds certain political views, and due to the algorithms on Twitter or Facebook, they then only get to see posts that further reinforce that opinion. This can have an isolating effect, strengthening the formation of a specific opinion.

This does not mean, however, that bots aren’t a danger. They can generate highly polarised discussions. Certain opinions or even fake news can multiply and be further disseminated through them.

As a consequence, filter bubbles are created, and we only see content confirming our own worldview. The automated sharing of posts, in turn, influences what news social networks show us. And they have an effect on us: If we see the same message over and over again, we are more likely to believe it, especially if a different “person” brings us the news each time.

It may also happen that one day we will only communicate with prefabricated text blocks and messages on social media. We will then no longer be able to tell who is a bot and who is a real person.

How can bots be recognised?

At the moment, most bots are still relatively stupid. Technically, they are simple; they follow rule-based algorithms of the form: if this happens, do that. When I tried to talk to some of them, I barely got any reaction.

One of them sent me a ‘like’ when I messaged it the first time. The second time, it automatically followed me. Yet it did not answer my question about how it was feeling. Such reactions make no sense in terms of content; these bots are not yet skilled at pretending to be alive.
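The behaviour Strebel describes fits a tiny rule table. A sketch with invented event and action names:

```python
# Sketch of the rule-based behaviour described above: fixed event -> action
# rules, with no understanding of content. Event/action names are invented.
def handle_event(state: dict, event: str) -> str:
    state["messages"] = state.get("messages", 0)
    if event == "direct_message":
        state["messages"] += 1
        if state["messages"] == 1:
            return "like"          # first contact: send a like
        return "follow"            # later contact: follow back
    return "ignore"                # anything else (e.g. a question): no answer

state = {}
print(handle_event(state, "direct_message"))  # like
print(handle_event(state, "direct_message"))  # follow
print(handle_event(state, "direct_message"))  # follow (never a real reply)
```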

We can see, though, that artificial intelligence and language processing are continuously improving. In the future, the field of social bots will grow significantly. It will become dangerous when bots start to produce and publish original content and can hold real conversations. Bots will then not only speak like people; they will also behave like them. Even now, some of them mimic human sleep rhythms.
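Mimicking a sleep rhythm is equally simple; in the sketch below, the activity window and the posting probability are assumptions:

```python
# Sketch of how a bot can mimic a human sleep rhythm: only act during
# plausible waking hours, with random jitter. Hours/probability are assumed.
import random
from datetime import datetime

WAKE_HOUR, SLEEP_HOUR = 7, 23  # assumed "human" activity window

def should_post_now(now: datetime | None = None) -> bool:
    now = now or datetime.now()
    if not (WAKE_HOUR <= now.hour < SLEEP_HOUR):
        return False               # "asleep": stay silent at night
    return random.random() < 0.3   # irregular activity even while "awake"

if should_post_now():
    print("post something")
```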

The startup BotStop has developed an algorithm for bot detection and analyses bot data.
