So many problems with this. I assume they’re thinking of using an LLM, since it would need to read language. It would need to adapt to our ever-moving Internet culture and figure out what intent is actually meant.
How well does it know irony? Slang? Taboo topics? Fresh new gen-z TikTok language?
“He should step on lego… in a video game…” — no way it will work at this early stage of AI.
I think AI could be useful for flagging activity so that actual human moderators can THEN determine whether it’s bad or not. But that’s only doing part of the work.
I think manual reports from users go a long way on their own.