Preventing the creation of hateful or explicit usernames
I would like to propose that Twitch implement much stricter rules surrounding username creation and the terms allowed in usernames. Users can still create usernames that are hateful, sexually explicit and degrading, despite this behaviour being against the Twitch Community Guidelines.
I appreciate that moderators and streamers are able to ban and report these accounts, but stopping this behaviour should NOT be the sole responsibility of content creators and their team. Furthermore, this method merely addresses the problem AFTER it has already happened. We need Twitch to implement a system which can identify harmful usernames and prevent them from being created in the first place.
I am proposing this out of a need to protect both streamers and their communities. I was targeted by numerous accounts, with usernames including the statements 'all lives matter' and 'no consent', as well as phrases relating to graphic ****** acts and anatomy. Although my mods were able to ban and report these individuals, the usernames were still visible to my community and myself, and affected me deeply. Twitch needs to do more to protect content creators, and the simplest way to do this is by addressing the problem BEFORE it happens - by preventing the creation of these harmful usernames.
I would like to remind people of the Twitch Community Guidelines, specifically those relating to Hateful Conduct and Harassment, a couple of which were clearly violated in my own experience:
Content that is prohibited under our Hateful Conduct policy—regardless of whether it is intended to be hateful—includes, but is not limited to the following:
- Content that encourages or supports the political or economic dominance of any race, ethnicity, or religious group, including support for white supremacist/nationalist ideologies
The following categories of behaviors are considered to be sexually harassing, and are prohibited on Twitch:
- Making unsolicited objectifying statements relating to the body parts or practices of another person
- Making unsolicited statements in reference to performing graphic acts on another person
(And perhaps most relevant:)
- Creating accounts dedicated to harassment or hate, such as through abusive usernames
I understand that this may be very difficult to implement, however it would greatly improve the experience for both content creators and their communities, and foster a much safer environment. I believe that actively ensuring hateful behaviour has no place on Twitch is worth the challenges that might arise.
One way Twitch could manage this without needing a human to do it would be fairly simple -
Apply context based on the origins of the name. Someone from India, for example, would be allowed names like anushgarg ...
Likewise, a simple read-through of the name: are letters being replaced with numbers, such as 'All' becoming '4ll', an 'i' replaced by a '1' as in 'l1ves', or a '3' used instead of an 'e'? These are simple substitutions to detect. If a letter or letters are being replaced by numbers, mapping the numbers back to letters and then comparing the result against a list of banned terms should cover that.
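The substitution check described above could be sketched roughly as follows. This is only an illustrative sketch, not Twitch's actual system; the character map and the banned-term list are placeholder assumptions.

```python
# Sketch of a leetspeak-substitution filter: map common number/symbol
# substitutions back to letters, then check for banned substrings.
# LEET_MAP and BANNED_TERMS are illustrative placeholders only.

LEET_MAP = str.maketrans({
    "4": "a", "@": "a",
    "3": "e",
    "1": "i", "!": "i",
    "0": "o",
    "5": "s", "$": "s",
    "7": "t",
})

BANNED_TERMS = {"badword", "slur"}  # placeholder examples

def is_blocked(username: str) -> bool:
    """Return True if the normalized username contains a banned term."""
    normalized = username.lower().translate(LEET_MAP)
    return any(term in normalized for term in BANNED_TERMS)
```

With this sketch, a name like `B4dw0rd_fan` normalizes to `badword_fan` and is caught, while an unremarkable name passes through. A real system would need far more than this (homoglyphs, spacing tricks, multiple languages), but it shows the basic comparator idea.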
I agree it's not going to be easy, but Twitch has got away for YEARS with not doing the safeguarding work. It's time to hold them accountable.
I agree that this may be very difficult to implement, however it is not our problem, as content creators, to solve - it's Twitch's. Twitch is a multi-billion dollar company, and if other companies like Activision and Nintendo can implement safeguarding for usernames, so can Twitch. It's not our duty to think of a solution, it's time for them to step up. Even an imperfect system is better than no system at all.
Seems almost impossible to implement. The only way to have an effective filter is to have humans check the names, which just isn't going to happen. The next option would be an algorithm, but how would an algorithm know what is offensive?
Sure, exact name matches could be blocked. But that leaves you with the variations. You could try to filter those out with cleverer pattern matching, but that will also block names that aren't offensive.
As @ashmiaou already wrote: offensive terms can be cultural or geographically defined which makes the whole thing even harder.
I agree that there is a problem with offensive names. The solution however is quite difficult.
This NEEDS to be implemented, but at the same time it needs to be decoupled from western-centric views on what is and isn't an acceptable name. I know that twitch.tv/anushagarg had issues with her name being censored for many streamers due to the first 4 letters, even though it is a common Indian name, and she had to get in direct contact with Twitch for these issues to be resolved.
A potential way to solve this could be a whitelist of legitimate names that would otherwise get flagged as "inappropriate" by the system?
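The whitelist idea could be layered on top of a substring blocklist - check explicitly approved names first, so legitimate names are never falsely flagged. A hypothetical sketch, where both lists are placeholder assumptions:

```python
# Hypothetical sketch: consult an allowlist of verified-legitimate names
# before applying the substring blocklist, so a common name like
# "anushgarg" is not rejected for containing a flagged substring.
# ALLOWLIST and BANNED_TERMS are illustrative placeholders only.

ALLOWLIST = {"anushgarg"}   # names verified as legitimate
BANNED_TERMS = {"anus"}     # placeholder banned substrings

def username_allowed(username: str) -> bool:
    """Return True if the username should be permitted."""
    name = username.lower()
    if name in ALLOWLIST:
        return True  # explicitly approved names always pass
    return not any(term in name for term in BANNED_TERMS)
```

Here the allowlist rescues the legitimate name even though it contains a flagged substring, while a name built around the banned term alone is still rejected.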
I am so fed up of seeing accounts with extremely graphic names that are clearly used to troll/hate raid people.
Not sure how this isn't already a thing!! Like seriously twitch WHAT ARE YOU DOING?
You can make silly commands but not protect your creators
It'd be relatively easy to implement this sort of thing, actually, although it'd be difficult to implement it entirely autonomously without impeding creation of normal usernames. That being said, I think this is a wonderful idea.
This seems like such a simple and needed concept, but if it can't be fine-tuned, maybe an automod for usernames - only approved names are allowed to follow and chat, with approval handled by mods?