Basic chat protection features
Back in the golden days of IRC, in the late 90s and early 2000s, it was common sense for both IRC network operators and the custom bots run to moderate channels to have built-in protections against hateful and disruptive attacks and other common abuse of the service.
These kinds of protections included mitigations against:
Mass creation of accounts
Trying to connect to the network too many times from one IP
Too many joins/messages/similar to a single channel
Mass spam from multiple automated or manually operated actors
Twitch should look into these already widely adopted best practices and implement similar measures itself to:
Limit the number of accounts that can be easily created. This means REQUIRING email verification rather than letting users skip it, and keeping a score for each account: if the user behaves like a reasonably good human being and verifies their identity in a UNIQUE and properly identifiable way (email addresses can easily be created in bulk, e.g. using catch-all addresses for a domain), award points; give additional points for subbing or buying bits; deduct points for bans or automatic flags that make the account suspicious. Shadowban and require review for any account flagged as suspicious. There should be enough time-consuming hoops to jump through before an account is fully verified that creating and abusing hateful bot armies at a rapid pace becomes difficult and unreasonable.
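The point system described above could be sketched roughly as follows. The point values, the penalty per flag, and the "fully trusted" threshold are all hypothetical placeholders that a real implementation would have to tune:

```python
# Hypothetical point values for the trust score described above.
TRUST_POINTS = {
    "email_verified": 1,
    "unique_id_verified": 5,   # e.g. phone number or payment method
    "subscribed": 2,
    "bought_bits": 2,
}
FLAG_PENALTY = 3  # hypothetical cost of each automatic suspicion flag

def trust_score(verifications: set[str], flags: int) -> int:
    """Positive signals add points; suspicion flags subtract them."""
    return sum(TRUST_POINTS[v] for v in verifications) - flags * FLAG_PENALTY

def needs_review(verifications: set[str], flags: int) -> bool:
    # A negative score means: shadowban and queue for manual review.
    return trust_score(verifications, flags) < 0
```

A fresh, unverified account that trips even one automatic flag immediately goes negative, while an account with a unique identity verification can absorb a false flag or two without being shadowbanned.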
Automatically flag usernames with too much entropy as highly suspicious.
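One simple way to implement this heuristic is Shannon entropy over the characters of the username; randomly generated names score high, human-chosen names score lower. The length cutoff and entropy threshold below are hypothetical tuning parameters, not values Twitch uses:

```python
import math
from collections import Counter

def shannon_entropy(name: str) -> float:
    """Shannon entropy of the username, in bits per character."""
    counts = Counter(name)
    total = len(name)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_machine_generated(name: str, threshold: float = 3.5) -> bool:
    # Hypothetical threshold: random alphanumeric strings like
    # "xK7q2Vbn91ZpQw" score above it, names like "coolstreamfan" below.
    return len(name) >= 8 and shannon_entropy(name) > threshold
```

Entropy alone would produce false positives, so this would be one suspicion signal feeding the account score, not a ban trigger by itself.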
Automatically disable all notifications for followers, raids, hosts, etc. when they seem abusive or suspicious in nature. Automatically flag the accounts participating in these events, instead of requiring users to manually transcribe the usernames involved and risk getting the wrong accounts banned.
When more than a handful of accounts are connected from a single IP, or accounts are being accessed from suspicious IP ranges, require strong ID verification before those accounts are allowed to interact in any channel.
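A minimal sketch of such a gate, assuming a hypothetical per-IP account limit and a manually curated set of suspicious ranges (here crudely grouped as /24 prefixes for illustration):

```python
from collections import defaultdict

MAX_ACCOUNTS_PER_IP = 5  # hypothetical default, "a handful"

class IpGate:
    def __init__(self, limit: int = MAX_ACCOUNTS_PER_IP):
        self.limit = limit
        self.accounts_by_ip: dict[str, set[str]] = defaultdict(set)
        self.suspicious_prefixes: set[str] = set()  # e.g. known VPN/proxy ranges

    def register(self, ip: str, account: str) -> None:
        self.accounts_by_ip[ip].add(account)

    def needs_strong_verification(self, ip: str) -> bool:
        # Crude /24 grouping for the sketch; a real system would use
        # proper CIDR matching and reputation data for ranges.
        prefix = ".".join(ip.split(".")[:3])
        return (len(self.accounts_by_ip[ip]) > self.limit
                or prefix in self.suspicious_prefixes)
```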
Count each report / ban / timeout / block against the score of the user and of the associated IP block, phone number, email address/domain, payment method, etc., so that after a certain amount of "social credit" is lost to hateful conduct, the accounts are mass-shadowbanned and the attacker has to work harder to rebuild their bot army.
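The key idea here is that penalties accrue against every identity signal at once, so burning one account does not reset the attacker's slate. A sketch, with hypothetical penalty weights and shadowban threshold:

```python
from collections import defaultdict

# Hypothetical penalty weights; real values would need tuning.
PENALTIES = {"report": 1, "timeout": 2, "block": 2, "ban": 5}
SHADOWBAN_THRESHOLD = 10

class ReputationLedger:
    def __init__(self):
        # Penalties are tracked per (signal kind, value) pair, e.g.
        # ("ip", "198.51.100.7") or ("email_domain", "spam.example"),
        # not just per account name.
        self.penalties: dict[tuple[str, str], int] = defaultdict(int)

    def record(self, event: str, signals: dict[str, str]) -> None:
        """Apply the event's weight to every identity signal of the offender."""
        weight = PENALTIES[event]
        for kind, value in signals.items():
            self.penalties[(kind, value)] += weight

    def is_shadowbanned(self, signals: dict[str, str]) -> bool:
        # Any single over-threshold signal taints every account sharing it.
        return any(self.penalties[(k, v)] >= SHADOWBAN_THRESHOLD
                   for k, v in signals.items())
```

A fresh bot account created from an already-penalized IP or email domain would start out shadowbanned, which is exactly the "rebuild cost" the proposal aims for.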
Give streamers control over the rate limits, but default them to reasonably sensitive low values so that most people are protected from the most hateful events without needing to jump through hoops.
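Such rate limits are classically implemented as a token bucket: a steady refill rate plus a small burst allowance. The defaults below are hypothetical placeholders standing in for the "reasonably sensitive low limits"; a streamer override would just construct the limiter with different values:

```python
import time

class ChannelRateLimiter:
    """Token-bucket limiter for per-channel events (messages, joins, follows)."""
    DEFAULT_RATE = 1.0   # hypothetical: refill 1 token per second
    DEFAULT_BURST = 5    # hypothetical: allow bursts of up to 5 events

    def __init__(self, rate: float = DEFAULT_RATE, burst: int = DEFAULT_BURST):
        self.rate, self.burst = rate, burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens for the time elapsed, capped at the burst size,
        # then spend one token if available.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Events that exceed the limit would not be dropped silently but would feed the suspicion flags described earlier.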
Allow streamers to override these settings at will, or temporarily when protection appears to have been triggered in error - e.g. show a chat message to the streamer and mods saying "Automatic hate raid protection enabled - if you believe this is in error: [Disable protection for 10 minutes]".