Preventing the creation of hateful or explicit usernames
I would like to propose that Twitch implement much stricter rules around username creation and the terms allowed in usernames. Users can still create usernames that are hateful, sexually explicit, and degrading, despite this behaviour being against the Twitch Community Guidelines.
I appreciate that moderators and streamers are able to ban and report these accounts, but stopping this behaviour should NOT be the sole responsibility of content creators and their team. Furthermore, this method merely addresses the problem AFTER it has already happened. We need Twitch to implement a system which can identify harmful usernames and prevent them from being created in the first place.
I am proposing this out of a need to protect both streamers and their communities. I was targeted by numerous accounts with usernames including the statements 'all lives matter' and 'no consent', as well as phrases relating to graphic ****** acts and anatomy. Although my mods were able to ban and report these individuals, the usernames were still visible to my community and myself, and affected me deeply. Twitch needs to do more to protect content creators, and the simplest way to do this is by addressing the problem BEFORE it happens - by preventing the creation of these harmful usernames.
I would like to remind people of the Twitch Community Guidelines, specifically those relating to Hateful Conduct and Harassment, including a couple of guidelines that were clearly violated in my own experience:
Content that is prohibited under our Hateful Conduct policy—regardless of whether it is intended to be hateful—includes, but is not limited to the following:
- Content that encourages or supports the political or economic dominance of any race, ethnicity, or religious group, including support for white supremacist/nationalist ideologies
The following categories of behaviors are considered to be sexually harassing, and are prohibited on Twitch:
- Making unsolicited objectifying statements relating to the ****** body parts or practices of another person
- Making unsolicited statements in reference to performing graphic ****** acts on another person
(And perhaps most relevant:)
- Creating accounts dedicated to harassment or hate, such as through abusive usernames
I understand that this may be very difficult to implement, however it would greatly improve the experience for both content creators and their communities, and foster a much safer environment. I believe that actively ensuring hateful behaviour has no place on Twitch is worth the challenges that might arise.
I think the rules on usernames should indeed be very restrictive. Anything that can be offensive at all should be restricted. Even if the name was not made with the intention of hurting someone, even if it means one thing in one language and another in a different one, if it has the potential to hurt someone, then just pick a new name.
A great idea, if implemented correctly.
Please do not automod anything server-side (Twitch)! It is so annoying when words that are vulgar in English are non-vulgar in other languages and get restricted this way. For example, the simple Hungarian word "vagy" (meaning "or" in English) has been auto-modded in predictions since the beginning. I cannot write "win or not" (Nyerés vagy nem) because the system thinks I wrote vag...vagina.
And the guidelines are not strictly enforced by Twitch (which is somehow not fair). A hot tub streamer doing Just Chatting with big boobs filling 70% of the camera is against the guideline rules, yet some of them got short bans and some got none at all.
This whole concept seems like a no brainer. However, if it is too difficult to carry out, at least allowing an automod for usernames that are allowed to follow and chat which can be approved by mods would be very beneficial for all streamers.
Censorship is bad. Just block people if their name offends your feelings.
I think that one way to implement this is to defer username visibility if part or all of it creates a conflict with Terms of Service.
So for example, an offensive username like come_on_my_t*** could be automatically filtered and instead displayed as Lee_roy_Jen_kins: because one of the items separated by underscores is flagged, it also invalidates 'come', 'on', and 'my'. Moderation view will show the hidden username so that moderation actions apply to the correct account.
Note that for the purposes of filtering, the underscores can be any character or no character at all, so cometonamyot*** and comeonamyt*** would also be filtered.
The reverse can also be true, if a username would usually be flagged but the context validates it. For example, comet_watcher might be flagged because it contains t_wat but the system will accept it because both comet and watcher are acceptable terms.
Deferring full visibility is great because it means the system does not have to approve the username in real time (or close to real time). Given the number of possible permutations of terms that fit into a valid length, and the possibility that attackers might exploit username creation as a denial-of-service vector, a sensible use of resources is to appear to allow a username, filter part or all of it pending approval, and later require the user to alter it or contact support to get it approved.
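The filtering described above (collapse separators before matching, but accept names that split cleanly into known-good words) could be sketched roughly like this. This is a minimal illustration only; the word lists, function names, and matching rules are all assumptions, not Twitch's actual system:

```python
import re

# Hypothetical word lists, for illustration only.
BANNED = {"twat"}
ALLOWED_WORDS = {"comet", "watcher"}

def collapse(name: str) -> str:
    """Lowercase and strip underscores, digits, and other separators,
    since for filtering purposes a separator can be any character."""
    return re.sub(r"[^a-z]", "", name.lower())

def is_flagged(name: str) -> bool:
    # Context check first: if the name splits cleanly into allowed
    # words (comet_watcher -> comet + watcher), accept it even though
    # a banned term appears across the word boundary.
    parts = [p for p in re.split(r"[^a-z]+", name.lower()) if p]
    if parts and all(p in ALLOWED_WORDS for p in parts):
        return False
    # Otherwise, match banned terms against the collapsed name so
    # that t_w_a_t and t.w.a.t are caught like twat.
    return any(term in collapse(name) for term in BANNED)

print(is_flagged("comet_watcher"))  # False: comet + watcher are acceptable terms
print(is_flagged("t_w_a_t"))        # True: separators removed, banned match
```

A real system would need much larger dictionaries and smarter word segmentation, but the split-then-collapse order shown here is what lets comet_watcher through while still catching separator tricks.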
One way they could manage this without needing a human to do it would be a simple thing -
Have context based on the origins of the name. Someone from India, for example, would be allowed things like anushgarg ...
Likewise, a simple read-through of the name: are there numbers standing in for letters, like 'All' becoming '4ll', an 'i' replaced by a '1' as in 'l1ves', or a '3' used instead of an 'e'? These are simple checks. If letters are being replaced by numbers, mapping the numbers back to the letters they resemble and then comparing against banned terms should cover that.
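The number-substitution check above could look something like the sketch below. The substitution table and banned list are hypothetical examples, not an actual blocklist:

```python
# Hypothetical leet-speak normaliser: map common digit substitutions
# back to the letters they resemble before comparing against banned terms.
LEET_MAP = str.maketrans({"4": "a", "1": "i", "3": "e", "0": "o", "5": "s", "7": "t"})

BANNED = {"alllivesmatter"}  # illustration only

def normalise(name: str) -> str:
    """Lowercase the name and undo digit-for-letter substitutions."""
    return name.lower().translate(LEET_MAP)

def matches_banned(name: str) -> bool:
    # Drop underscores too, so 4ll_l1ves_m4tt3r reduces to alllivesmatter.
    flat = normalise(name).replace("_", "")
    return any(term in flat for term in BANNED)

print(matches_banned("4ll_l1ves_m4tt3r"))  # True after normalisation
print(matches_banned("comet_watcher"))     # False
```

Mapping digits back to letters (rather than letters to digits) means one normalised form can be checked against a single banned-term list, however many substitutions the creator mixed in.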
I agree it's not going to be easy, but Twitch has got away for YEARS with not doing the safeguarding work. It's time to hold them accountable
I agree that this may be very difficult to implement, however it is not our problem, as content creators, to solve - it's Twitch's. Twitch is a multi-billion dollar company, and if other companies like Activision and Nintendo can implement safeguarding for usernames, so can Twitch. It's not our duty to think of a solution, it's time for them to step up. Even an imperfect system is better than no system at all.
Seems almost impossible to implement. The only way to have an effective filter is to have humans check the names, which just isn't going to happen. The next solution would be an algorithm, but how would an algorithm know what is offensive?
Sure, exact name matches could be blocked. But that leaves you with the variations. You could try to filter those out by using the World's Best Algorithms by Oliver (ISBN 0-131-00413-1), but that will also block names that aren't offensive.
As @ashmiaou already wrote: offensive terms can be cultural or geographically defined which makes the whole thing even harder.
I agree that there is a problem with offensive names. The solution however is quite difficult.
This NEEDS to be implemented but at the same time it needs to be decoupled from western-centric views on what is and isn't an acceptable name. I know that twitch.tv/anushagarg had issues with her name being censored for many streamers due to the first 4 letters even though it is a common Indian name, and had to get in direct contact with Twitch for these issues to be resolved.
A potential way to solve this could be a whitelist of names that may get picked up for "inappropriate" content by the system?
I am so fed up of seeing accounts with extremely graphic names that are clearly used to troll/hate raid people.
Not sure how this isn't already a thing!! Like seriously twitch WHAT ARE YOU DOING?
You can make silly commands but not protect your creators
It'd be relatively easy to implement this sort of thing, actually, although it'd be difficult to implement it entirely autonomously without impeding creation of normal usernames. That being said, I think this is a wonderful idea.
This seems like such a simple and needed concept, but if it can't be fine-tuned, maybe an automod for usernames that are allowed to follow and chat, which can be approved by mods?