Yup.
It’s a traumatic job/task that gets farmed out to the cheapest supplier, which is extremely unlikely to have suitable safeguards and care for its employees.
If I were implementing this, I would use a safer/stricter model with a human-backed appeal system.
I would then use some metrics to generate an account reputation (verified ID, interaction with friends network, previous posts/moderation/appeals), and use that to decide how AI actions are handled: auto-approved with no appeal (low rep); auto-approved with human appeal (moderate rep); or requiring human approval before taking effect (high rep).
This way, high reputation accounts can still discuss & raise awareness of potentially moderatable topics as quickly as they happen (think breaking news kinda thing). Moderate reputation accounts can argue their case (in case of false positives). Low reputation accounts don’t traumatize the moderators.
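The tiering above could be sketched roughly like this. All names, weights, and thresholds here are made-up illustrations of the idea, not a real moderation API:

```python
# Hypothetical sketch of the tiered moderation flow described above.
# Signals, weights, and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Account:
    verified_id: bool
    friend_interactions: int   # engagement with a real friends network
    upheld_appeals: int        # past appeals decided in the account's favour
    prior_violations: int      # past moderation actions that stuck

def reputation(acct: Account) -> int:
    """Combine the signals into a single score (weights are invented)."""
    score = 30 if acct.verified_id else 0
    score += min(acct.friend_interactions, 40)   # cap to limit gaming
    score += 10 * acct.upheld_appeals
    score -= 20 * acct.prior_violations
    return score

def route_ai_flag(acct: Account) -> str:
    """Decide what happens when the AI model flags a post."""
    rep = reputation(acct)
    if rep < 25:        # low rep: AI action is final, no human ever sees it
        return "auto-remove, no appeal"
    elif rep < 60:      # moderate rep: AI acts, human reviews only on appeal
        return "auto-remove, human appeal available"
    else:               # high rep: a human must confirm before removal
        return "queue for human approval"
```

The point of the cap and the negative weight is that humans only ever get exposed to content from accounts that have already demonstrated some legitimacy.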
Honestly, I’ve always thought the best use case for AI is moderating NSFL content online. No one should have to see that horrific shit.
Bsky already does that.
Agreed. These jobs are overwhelmingly concentrated in developing nations and pay pathetic wages, too.
What about false positives? Or a process to challenge them?
But yes, I agree with the general idea.
They will probably use the YouTube model - “you’re wrong and that’s it”.
😂😂😂😔
Not suitable for Lemmy?
Not sufficiently fascist leaning. It’s coming, Palantir’s just waiting for the go-ahead…