
Gettin' Heated


A new feature on Twitter will warn users when a conversation is about to become "heated".


Twitter has recently announced plans for a new feature that will warn users when a conversation is about to become “heated” or “intense”. They’re testing a prompt that will pop up in the conversation to warn of the intensity, remind people of the humans behind the screen, and note that “diverse opinions have value”.





Many factors will come into play with this new feature: not just the coding and logistics behind it, but of course the users themselves. Twitter has been experimenting with new ways to reduce toxicity on its app since at least last year, but this could be an entirely different ballgame.


AI at Twitter has already been learning and growing on the subject of toxicity. In an interview from early 2020, Kayvon Beykpour (Twitter’s head of product) said that the most important thing for the AI to learn when sorting through “toxic” tweets is strict adherence to Twitter's policies and regulations.


“Basically we’re trying to predict the tweets that are likely to violate our rules. And that’s just one form of what people might consider abusive, because something that you might consider abusive may not be against our policies, and that’s where it gets tricky.” - Kayvon Beykpour [ x ]

His last line really stands out: how does one begin to train a machine to find offensive tweets if the definition is ever-changing and, at times, completely subjective?
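As a toy illustration only (nothing like Twitter's actual models, and all tweets and labels here are made up), here's a sketch of why that subjectivity matters: the same trivial word-scoring classifier, trained on two annotators' conflicting labels for the same tweets, flags the same message differently.

```python
# Toy sketch: subjective labels produce inconsistent "toxicity" classifiers.
# This is an illustrative word-counting model, not Twitter's actual system.

from collections import Counter

def train_word_scores(tweets, labels):
    """Count how often each word appears in tweets labeled toxic vs. not."""
    toxic, clean = Counter(), Counter()
    for text, is_toxic in zip(tweets, labels):
        (toxic if is_toxic else clean).update(text.lower().split())
    # Each word's score is its net lean toward "toxic" in the training data.
    return {w: toxic[w] - clean[w] for w in toxic | clean}

def predict(scores, tweet):
    """Flag a tweet as heated if its words lean toxic overall."""
    return sum(scores.get(w, 0) for w in tweet.lower().split()) > 0

tweets = ["you are wrong", "great point",
          "this take is garbage", "you are wrong and dumb"]
annotator_a = [False, False, True, True]  # lenient: blunt disagreement is fine
annotator_b = [True,  False, True, True]  # strict: "you are wrong" is hostile

model_a = train_word_scores(tweets, annotator_a)
model_b = train_word_scores(tweets, annotator_b)

print(predict(model_a, "you are wrong"))  # False: lenient labels
print(predict(model_b, "you are wrong"))  # True: strict labels
```

Same tweet, same algorithm, two opposite answers, purely because the humans doing the labeling disagreed about where "abusive" begins.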




What constitutes a “healthy” conversation, and can a machine truly be in charge of that for the entire platform?



Some people think the feature itself is infantilizing its users, and further dividing society by creating constraints on what can and cannot be said.


It’s right back to the eternal debate of: are we too sensitive or not?


Personally, this writer doesn’t see an issue with the feature, as long as it works properly. It is almost like counting to 10 before saying or doing something. It does seem like a huge undertaking for an AI to learn what a healthy conversation is, but even if a post is flagged as such, it doesn't stop users from engaging. Having the reminder that the person behind the other screen has their own thoughts, feelings and opinions isn’t a bad thing. We’ve all gotten heated, and that moment’s pause could be all we need to properly collect our thoughts.


Conversations online can easily spiral: like I mentioned last time, you miss many conversational cues and nuances when you’re simply sitting behind a screen. Twitter is a major player in this, with simple tweets spiraling into full-blown arguments in a matter of hours. In 2019, there was even a trending hashtag, #StartAnArgumentInFourWords. While I’m sure it was meant as a harmless joke, how many arguments did it actually start?


Things like Twitter’s new feature, or Facebook’s fact-checking pop-up, can, if used properly, be a nice reminder to all of us to reconnect with real communication. Instead of firing off a round of angry tweets at someone you don’t know, without knowing all of the information, take that second to reevaluate.


Continued open conversation and allowing ourselves to be authentic (without malice towards others) is what will help us grow as a society. If the repercussions of a potentially heavy or important conversation make you nervous, the Z Form can help.
