Twitter has begun rolling out more developed prompts asking users to consider whether they really want to tweet potentially abusive content, after announcing some encouraging results from a test that began nine months ago.
In August 2020, the social media site started detecting tweets that contained potentially “harmful or offensive language”, prompting the user to consider editing or deleting their tweet. Users were also given a “Did we get this wrong?” option if they believed Twitter was mistaken in flagging up their tweet as offensive.
According to a blog post yesterday called ‘Tweeting with consideration’, over a third (34%) of users either edited or deleted their tweet when prompted. It also seems that some users with a tendency to offend took the prompt as a warning that Twitter had its eye on them, with 11% posting fewer harmful tweets after just one prompt. The prompts also had a beneficial effect on offensive tweeters themselves, who were less likely to receive replies in a similar tone to their own (although Twitter has not put a figure on this).
Twitter now says it is incorporating the learnings from this trial into prompts being rolled out on Android and iOS. Among these is greater consideration of the relationship between two users – for example, if two accounts follow each other and interact regularly, any tweet flagged as “offensive” is more likely to be sarcasm or friendly banter.
For anyone, including businesses, who may have experienced abuse or trolling on Twitter, this will be welcome news. It’s a timely intervention too, coming on the back of sport’s social media boycott last weekend and the news that perhaps the most infamous banned Twitter user, former US President Donald Trump, has set up his own communications platform that only he can use.
Some critics will see this only as a small step in the right direction, however. Unbelievably, even during the boycott, there were still reports of players being racially abused on social media.
Another matter is how well Twitter will be able to put these “harmful and offensive” tweets into context. One of the most notorious examples of computers failing to grasp the subtleties of language is the “Scunthorpe problem”, where innocent words are blocked simply because they contain a string of letters that matches an offensive term. Should tweets be flagged clumsily, the feature risks stifling interaction and frustrating users rather than encouraging abusive tweeters to reconsider their actions.
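To illustrate the point, here is a minimal sketch of the kind of crude substring matching that produces the Scunthorpe problem. This is purely a hypothetical example for illustration – the function name, the blocked-word list and the logic are assumptions for the sketch, not Twitter’s actual moderation system, which will be far more sophisticated.

```python
def naive_offence_filter(text: str, blocked: list[str]) -> bool:
    """Return True if any blocked term appears anywhere in the text.

    This is the kind of blunt substring match behind the "Scunthorpe
    problem": innocent words get flagged because they happen to
    contain an offensive string of letters.
    """
    lowered = text.lower()
    return any(term in lowered for term in blocked)


# A deliberately mild illustration: blocking the word "ass" also
# catches perfectly innocent words that merely contain those letters.
blocked_terms = ["ass"]
print(naive_offence_filter("What a classy performance!", blocked_terms))  # True – false positive
print(naive_offence_filter("Lovely weather today", blocked_terms))        # False
```

Avoiding false positives like the first one requires looking at whole words and, ideally, the surrounding context – exactly the kind of nuance Twitter says its prompts are now trying to account for.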
It is to be hoped that Twitter’s learnings can help this well-intentioned change run smoothly. If you have suffered abuse or cyberbullying on Twitter’s rival platform Facebook, we have an eBook on tracing a fake profile that may help you. Alternatively, for any questions you have about social media management, the Engage Web team is here to help.