As far as I know, ChatGPT (3) will not output anything that might offend. Is that the start of benevolence?
But who decides what might offend? And doesn't that put us on the slippery slope to censorship? You can't avoid offending anyone; the only way to do that is to say nothing.
Usually not. Usually it's some self-appointed guardian deciding that other people will be offended. Who decides? The people deciding what is and isn't allowed (in what's supposed to be a free-speech society, at least where I live). See the self-appointed guardians above. I'm offended. But this is rapidly getting too specific and heading toward debate-room material. I think we both know where we stand on the issues.
What an interesting question. I'm thinking not allowing offensive output is not the same as benevolence. The former relies on a system of checks, but the latter relies on the capacity for doing a kindness - it's more active.
I think most of us would recognize hateful, threatening or offensive content if we saw it. If we can save human social media moderators from the distressing job of identifying it by outsourcing it to AI, I think that's a good thing?
Twitter, YouTube, Facebook and Reddit all use AI to help identify hateful content. Is it as good as human moderation? Probably not. I read about one online community that banned mentions of Trump, but the community adapted to dodge the AI by renaming Trump "Drumf".
There speaks someone who’s not been a moderator… this forum is fairly tame, but we still recently had to amend the rules to specifically outlaw graphic descriptions of necrophilia… We hadn’t previously thought it necessary to specify that a piece about someone’s love of sex with corpses was outwith acceptable conduct. You couldn’t pay me enough to moderate some of the shit you find on Facebook or Twitter, so if AI can take some of the weight then I’m for it… although the issue for me would be false positives, like AIs not allowing you to write Scunthorpe and the like.
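The Scunthorpe problem mentioned above is easy to demonstrate: a naive filter that blocks any message containing a banned substring also blocks innocent place names that happen to contain it. A minimal sketch (the function names and blocklist entry here are hypothetical, not any real platform's filter):

```python
import re

BANNED = {"cunt"}  # hypothetical blocklist entry

def naive_filter(text: str) -> bool:
    """Flag a message if any banned substring appears anywhere in it."""
    lowered = text.lower()
    return any(word in lowered for word in BANNED)

def word_boundary_filter(text: str) -> bool:
    """Flag only whole-word matches, avoiding the substring false positive."""
    lowered = text.lower()
    return any(re.search(rf"\b{re.escape(word)}\b", lowered) for word in BANNED)

print(naive_filter("I grew up in Scunthorpe"))          # True  (false positive)
print(word_boundary_filter("I grew up in Scunthorpe"))  # False (correctly allowed)
```

Word-boundary matching fixes this particular case, though real moderation systems still have to cope with deliberate evasions (like the "Drumf" renaming mentioned earlier), which is exactly why simple filters keep failing in both directions.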
Not for 8 hours a day, every day. The mental toll is real. Some actually develop PTSD. Facebook moderators sign a disclaimer before they start work accepting that their job could lead to PTSD and mental stress. Facebook moderator: ‘Every day was a nightmare’
There have also been cases of Facebook moderators being radicalised by constant exposure to extremist propaganda, and of them developing sexual problems from viewing far too much deeply unpleasant pornography. We don’t have the same scale of problem, but we did recently have an issue where one member sent another a picture of his dick… I had to look to make sure it really was a dick pic and not a sausage or a mole rat or whatever before banning his sick ass. I didn’t particularly want to view a picture of someone’s genitalia, but moderators look at that shit so the rest of the membership don’t have to. No one loves doing that side of the job unless there is something wrong with them (in which case they probably shouldn’t be moderating).
I don’t know, hard for me to speak super confidently since I’m not on any of those sites, let alone see the worst of the worst every day. Here is what I wanted to say: Descriptions of bad stuff monte has seen IRL, somewhat ironically, deleted by moderator Though upon some more reflection, maybe seeing the worst of the worst of humanity 8 hours a day, 252 days a year is worse than all of that. I’m not being sarcastic. I guess it depends on what type of content they are looking at. If it is pictures of kiddie porn and human slaughter, then yeah. But if most of it is misinformation and bigoted comments, then I’d probably stand by my initial reaction.
It's true. Psychiatrists and therapists of all kinds frequently have nervous breakdowns from dealing with other people's issues. To some it might be fun to see it, if you're not the one who has to deal with it, and if it isn't all day, every day. But we need to be careful about who is programming their idea of morality into our new overlords, and what their purpose is. It can too easily be used to shut down opposing viewpoints or try to shape society to some bizarre ideology. Machines can't think; they don't know what's actually being said, they're programmed to respond to certain flag words and phrases.
I guess the more I think about it, the less confident I am that I have a strong opinion. If these moderators are constantly seeing kiddie porn and human slaughter, yeah, I can see the PTSD angle. If it’s dick pics and bigoted comments, I’m more inclined to stick with my initial reaction.
Facebook is more the former - we can't give a list of all the things they see without violating our own rules. Even at the tamer end, though, few people want to look at dick pics for eight hours a day, or for that matter referee he said/she said arguments. A moderator on a forum like this sees that stuff once or twice a month at most... and that's spread out between five or six of us. A social media moderator sees that stuff every day, multiple times a day, and has very little in the way of peer support... I wouldn't have thought you'd get PTSD from it, but I could certainly see it being frustrating, irritating, and stressful. This is a fairly typical account of what it's like: https://www.washingtonpost.com/technology/2019/07/25/social-media-companies-are-outsourcing-their-dirty-work-philippines-generation-workers-is-paying-price/
I definitely have a strong opinion, but the answer isn't simple, and needs a lot of discussion and a lot of thinking. GLITCH IN THE MATRIX! GLITCH IN THE MATRIX!! (It needs to always be said twice) Didn't you already say this in your last response?
@big soft moose it is probably the quotas that do them in. “Congrats on your promotion to Content Supervisor Lead. You must now eliminate 120 dick pics an hour to maintain your new wage of $8.50/hr”
He did, but I deleted that post [ironic, I know] while I edited it, then restored it. I imagine the repost happened while I was doing that.
*Starts drinking two liters of tequila a day rather than just one* "Ah yes kids, this is success! (in W C Fields voice)"
Going in the other direction, the EU have told Twitter to hire more human moderators to comply with the EU Digital Services Act, which comes into force next year, and to reduce their reliance on AI: https://www.reuters.com/technology/eu-tells-elon-musk-hire-more-staff-moderate-twitter-ft-2023-03-07/