I'll grant that there's no acceptable way to programmatically evaluate some text and infer from the text alone whether it's hate speech.
That's why I stick to a manual evaluation process. For example, if enough people report you for misgendering others and you do not adjust your behaviour, it eventually becomes hate speech. A human has to go and analyze this; it is difficult, but that doesn't mean we shouldn't do it.
But your argument is that it's impossible, and I just illustrated that it isn't. I do agree that it's hard. But that's just life for you. Nuance takes time and effort, as most worthwhile things do.
This is not how proof by contradiction works, though I'm not versed enough in the subject of proofs to explain how it does.
It's not the subjective experience of the offended that makes it hate speech, but the perceived intention of the offender.
You haven't answered any of my questions, friend.