nappy-doo 5 days ago

I worked at FB for almost 2 years. (I left as soon as I could; I knew it wasn't a good fit for me.)

I took an Uber from campus one day, and my driver, a twenty-something girl, was asking how to become a moderator. I told her, "No amount of money would be enough for me to do that job. Don't do it."

I don't know if she eventually got the job, but I hope she didn't.

  • narrator 5 days ago

    Yes, these jobs are horrible. However, I do know from accidentally encountering bad stuff on the internet that you want to be as far away from a modern battlefield as possible.

    It's just kind of ridiculous how people think war is like Call of Duty. One minute you're sitting in a trench, the next you're a pile of undifferentiated blood and guts. Same goes for car accidents and stuff. People really underestimate how fragile we are as human beings. Becoming aware of this is super damaging to our concept of normal life.

    • noduerme 5 days ago

      Watching someone you love die of cancer is also super damaging to one's concept of normal life. Getting a diagnosis, or being in a bad car accident, or the victim of a violent assault is, too. I think a personal sense of normality is nothing more than the state of mind where we can blissfully (and temporarily) forget about our own mortality. Obviously, marinating yourself in all the horrible stuff makes it really hard to maintain that state of mind.

      On the other hand, never seeing or reckoning with or preparing for how brutal reality actually is can lead to a pretty bad shock once something bad happens around you. And maybe worse, can lead you to under-appreciate how fantastic and beautiful the quotidian moments of your normal life actually are. I think it's important to develop a concept of normal life that doesn't completely ignore that really bad things happen all around us, all the time.

      • karlgkk 5 days ago

        Frankly, there’s a difference between a one- or two- or even ten-off exposure to the brutality of life, where various people in your life will support you and help you acclimate to it, versus straight up mainlining it for 8 hours a day.

      • stephenitis 5 days ago

        hey kid, hope you're having a good life. I'll look at a screen full of the worst that the internet and humanity have produced for eight hours.

        I get your idea, but in the context of this topic I think you're overreaching

      • Der_Einzige 5 days ago

        Actually reckoning with this stuff leads people into believing in anti-natalism, negative utilitarianism, Schopenhauer/Philipp Mainländer (Mainländer, btw, was not just pro-suicide, he actually killed himself!), and the voluntary extinction movement. This terrified other philosophers like Nietzsche, who spends most of his work defending reality even if it's absolute shit: "Amor Fati", "Eternal Recurrence", the "Übermensch" vs the literal "Last Man". "WALL-E" of all films was the quintessential modern Nietzschean fable, with maybe "Children of Men" being the previous good one before that.

        You're literally not allowed to acknowledge that this stuff is bad and adopt one of the religions that see it and try to remove suffering - e.g. Jainism - because, at least historically, doing so meant you couldn't use violence in any circumstances, which also meant that your neighbor would murder you. There's a reason the Jain population is in the low millions.

        Reality is actually bad, and this should be far more intuitive to folks. The fact that positive experience is felt "quickly" and negative experience is felt "slowly" was all the evidence I needed: I wouldn't just press the "instantly and painlessly and without warning destroy reality" (benevolent world-exploder) button, I'd smash it!

        • bowsamic 4 days ago

          I felt this way for the first 30 years of my life. Then I received treatment for depression (psychoanalysis) and finally tasted joy for the first time in my entire life. Now I love life. YMMV

          EDIT: If you’re interested, what actually happened is that I was missing the prerequisite early childhood experience that enables one to feel secure in reality. If you check, all the people who have this feeling of philosophical/ontological pessimism have a missing or damaged relationship with the mother in the first year or so. For them, not even Buddhism can help, since even the abstract idea of anything good, even if it requires transcendence, is a joke.

          • Der_Einzige 4 days ago

            But psychoanalysis is literally pseudoscientific nonsense. You got spooked.

            • motorest 4 days ago

              > But psychoanalysis is literally pseudoscientific nonsense. You got spooked.

              OP got spooked to stop suffering and love life instead? Is that your cautionary tale?

              • lazide 4 days ago

                ‘Warning! You may end up irrationally happy and fulfilled!’

            • bowsamic 4 days ago

              No it isn’t; it’s empirically justified - look it up. That's why the state insurance here in Germany is willing to pay for me to go three times a week. It works

            • BriggyDwiggs42 4 days ago

              Wow, thanks for your valuable contribution to the conversation!

          • keybored 4 days ago

            [deleted]

            • bowsamic 4 days ago

              I'm not sure what point you are trying to make. I don't look up to Freud and psychoanalysis doesn't work for everyone! I don't even necessarily recommend it. It just worked for me and I realised that in my case the depression was a confused outlook conditioned by a certain situation.

              My point really is that you can feel one way for your entire life and then suddenly feel a different way. I'm not suggesting psychoanalysis specifically. Perhaps for others, CBT or religion or just a change in life circumstances will be enough.

              The fact that these philosophies are dependent on one's life situation is, to me, a reason to be a little sceptical of their universality. In my personal experience, in those 30 years of my life, I thought everyone thought the way I did - that reality was painful and a chore and dark and dim. Psychoanalysis helped me realise that other people actually were happy to be alive, and to understand why I had not been my entire life.

              YMMV = not everyone hates life

              • keybored 4 days ago

                > I'm not sure what point you are trying to make.

                I’m not sure why people act coy when a straightforward mirroring of their own comment is presented. “What could this mean?” Maybe the hope is that the other person will bore the audience by explaining the joke?

                > I don't look up to Freud and psychoanalysis doesn't work for everyone! I don't even necessarily recommend it.

                Talking about your infant parental relationship as the be-all and end-all looks indistinguishable from that.

                > > If you check, all the people who have this feeling of philosophical/ontological pessimism have a missing or damaged relationship with the mother in the first year or so.

                .

                > I'm not suggesting psychoanalysis specifically. Perhaps for others, CBT or religion or just a change in life circumstances will be enough.

                Except for people who have “this feeling of philosophical/ontological pessimism”.

                > > For them, not even Buddhism can help, since even the abstract idea of anything good, even if it requires transcendence, is a joke

                Which must paint everyone who defends “suffering” in the Vedic sense with the same brush, since that was what you were replying to. (Saying that reality is suffering on the whole is not the same as “I’m depressed [, and please give me anecdotes about how you overcame it]”.)

                > > The fact that these philosophies are dependent on the life situation to me is a reason to be a little sceptical of their universality. In my personal experience, in those 30 years of my life, I thought everyone thought the way I did, that reality was painful and a chore and dark and dim. Psychoanalysis helped me realise that other people actually were happy to be alive, and understand why I have not been my entire life.

                I don’t know how broad your brush is. But believing in the originally Vedic (Schopenhauer was inspired by Eastern religions, maybe Buddhism in particular) concept of “suffering” is not such a fragile intellectual framework that it collapses once you heal from the trauma when your mother scolded you while potty training at a crucial point in your Anal Stage of development.

                > YMMV = not everyone hates life

                Besides any point whatever.

                • bowsamic 4 days ago

                  Worth noting that I trained formally in Buddhism under a teacher for a few years. I’m not unaware of all this.

                  And the Vedic version of suffering is full of love for reality, not wanting to delete it by smashing a button.

                  • keybored 4 days ago

                    > Worth noting that I trained formally in Buddhism under a teacher for a few years. I’m not unaware of all this

                    You trained personally for a few years and yet you make such sweeping statements/strokes that a neophyte is prompted to point out basic facts about this practice (apparently an adequate retelling since you don’t bother to correct me)? You might think this bolsters something (?) but I think the case is the opposite.

                    It helps to point out exactly what part you are talking about (apparently not the Vedic gang). In fact, this initial reply (just the above paragraph, before the edit) seemed so out of place. Okay, so what are they talking about?

                    > And the Vedic version of suffering is all full of love for reality, not wanting to delete it by smashing a button

                    Oh, so it’s about the small wish to commit biocide.

                    It’s a clear category error to talk about love/want/hate when it comes to that statement. Because that’s beside the point. The point is clearly the wrongheaded, materialistic assumption that suffering will end if all life would end by the press of a button. And if you think that life on the whole is suffering? Then pressing the button is morally permissible.

                    It’s got nothing to do with hate.

                    It seemed interesting to me that someone would have such a “Schopenhauer” (not that I have read him) view of existence. You don’t see that every day.

                    • bowsamic 4 days ago

                      I don’t really know what you’re talking about, sorry. This is coming off as incoherent rambling to me

                      • keybored 4 days ago

                        My comment was saying that this part was about ending suffering, not about wishing ill-will. I don’t understand what’s unclear.

                        > > Reality is actually bad, and it should be far more intuitive to folks. The fact that positive experience is felt "quickly" and negative experience is felt "slowly" was all the evidence I needed that I wouldn't just press the "instantly and painlessly and without warning destroy reality" (benevolent world-exploder) button, I'd smash it!

                        > This is coming off as incoherent rambling to me

                        You do like to gesture vaguely and tell me that "I don’t know what this is". Meanwhile, I have pointed out at least one instance where you flat out contradicted yourself on psychoanalysis. Or "incoherent rambling" (on psychoanalysis), if you will.

        • keybored 4 days ago

          Interesting to see this perspective here. You’re not wrong.

          > There's a reason that Jain's population are in the low millions

          The two largest Vedic religions both have hundreds of millions of followers. Is Jainism that different from them in this regard? I know Jainism is very pacifist, but I mean specifically on the question of suffering.

      • keybored 5 days ago

        ... okay.

        Emergency personnel might need to brace themselves for car accidents every day. That Kenyans need to be traumatized by Internet Content in order to make a living is just silly and unnecessary.

        • portaouflop 5 days ago

          Car “accidents” are also completely unnecessary.

          Even the wording is wrong - those aren’t accidents, it is something we accept as byproduct of a car-centric culture.

          People feel it is acceptable that thousands of people die on the road so we can go places faster. Similarly they feel it’s acceptable to traumatise some foreigners to keep social media running.

          • keybored 4 days ago

            Nitpick that irrelevant example if you want.

        • ashoeafoot 5 days ago

          ISISomalia loves that recruitment pool though

    • FireBeyond 5 days ago

      Speaking as a paramedic, two things come to mind:

      1) I don't have squeamishness about trauma. In the end, we are all blood and tissue. The calls that get to me are the emotionally traumatic ones: the child abuse, domestic violence, elder abuse (which of course often have a physical component too, but it's the emotional part for me), the tragic, often preventable accidents.

      2) There are many people, and I get the curiosity, that will ask "what's the worst call you've been on?" - one, you don't really want to hear it, and two, "Hey, person I may barely know, do you think you can revisit something traumatic for my benefit/curiosity?"

      • Modified3019 5 days ago

        That’s an excellent way to put it, resonates with my (non medical) experience. It’s the emotional stuff that will try to follow me around and be intrusive.

        I won’t watch most movies or TV because they are just some sort of tragedy porn.

        • coliveira 5 days ago

          > movies or TV because they are just some sort of tragedy porn

          100% agree. Most TV series nowadays are basically violence porn, now that real porn is not allowed for all kinds of reasons.

      • ocschwar 5 days ago

        I'd be asking "how bad is the fentanyl situation in your area?"

        • FireBeyond 5 days ago

          Relatively speaking, not particularly.

          What's interesting now is how many patients will say "You're not going to give me fentanyl are you? That's really dangerous stuff", etc.

          It's their perfect right, of course, but it is sad that that's the public perception - it's extremely effective, and quite safe when used properly (for one, we're obviously only giving it from pharma sources, with properly dosed solutions for IV).

      • BobaFloutist 5 days ago

        It's also super easy to come up with better questions: "What's the funniest call you've ever been on?" "What call do you feel you made the biggest difference on?" "What's the best story you have?"

    • andrepd 5 days ago

      I'm pretty sure watching videos on /r/watchpeopledie or rekt threads on 4chan has been a net positive for me. I'm keenly aware of how dangerous cars are, that wars (including narcowars) are hell, that I should never stand close to a bus or truck as a pedestrian or cyclist, that I should never get into a bar fight... And that I'm very, very lucky that I was not born in the 3rd world.

      • gosub100 5 days ago

        I get more upset watching people lightly smack and yell at each other on public freakout than I do watching people die. It's not that I don't care about the dead either, I watched wpd and similar sites for years. I didn't enjoy watching it, but I liked knowing the reality of what was going on in the world, and how each one of us has the capacity to commit these atrocities. I'm still doing a lousy job at describing why I like to watch it. But I do.

        • mdanger007 5 days ago

          Street fight videos, where the guy recording is hooting and egging people on, are disgusting.

    • doublerabbit 5 days ago

      One does not fully experience life until you encounter the death of something you care about. Be it a pet or a person, nothing gives you that real sense of reality until your true feelings are challenged.

      I used to live in the Disney headspace until my dog had to be put down. Now, with my parents in their seventies and me in my thirties, I fear losing them the most, as the feeling of losing my dog was hard enough.

      • batch12 5 days ago

        That's the tragic consequence of being human. Either the people you care about leave first or you do, but in the end, everyone goes. We are blessed and cursed with the knowledge to understand this. We should try to maximize the time we spend with those that are important to us.

      • wholinator2 5 days ago

        Well, I think it goes to a point. I'd imagine there's some goldilocks zone of time spent with the animal, care experienced from the animal, dependence on the animal, and manner/speed of death/time spent watching the thing die.

        I say animal to explicitly include humans. Finding my hamster dead in fifth grade did change me. But watching my mother slowly die a horrible, haunting death didn't make me a better person. I'm just saying that there's a spectrum that goes something like: easy to forget about, I'm able to not worry, sometimes I think about it when I don't want to, often I think about it, often it bothers me, and so on. You can probably imagine the cycle of obsession and stress.

        This really goes for all traumatic experiences. There's a point where they can make you a better person, but there's a cliff after which you have no guarantees that they won't just start obliterating you and your life. It's still a kind of perspective. But can you have too much perspective? Lots of times I feel like I do

    • h_tbob 9 hours ago

      I concluded that we really should set the speed limit at 45 mph on highways. Then a death on the road would be so rare it would be newsworthy.

    • int_19h 5 days ago

      It's not that we're particularly fragile, given the kind of physical trauma human beings can survive and recover from.

      It's that we have technologically engineered things that are destructive enough to get even past that threshold. Modern warfare in particular is insanely energetic in the most literal, physical way - when you measure the energy output of weapons in joules. Partly because we're just that good at making things explode, and partly because improvements in metallurgy and electronics made it possible over time to locate targets with extreme precision in real time and then concentrate a lot of firepower directly on them. This, in particular, is why the most intense battlefields in Ukraine often look worse than WW1 and WW2 battles of similar intensity (e.g. Mariupol had more buildings destroyed than Stalingrad).

      But even our small arms deliver much more energy to the target than their historical equivalents. Bows and arrows pack ~150 J at close range, rapidly diminishing with distance. Crossbows can increase this to ~400 J. For comparison, an AK-47 firing standard issue military ammo is ~2000 J.
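
      (A rough sanity check on that last figure, assuming the standard 7.62×39mm round: muzzle energy is just kinetic energy, KE = ½mv², so an ~8 g bullet at ~715 m/s gives 0.5 × 0.008 × 715² ≈ 2,000 J.)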

      • Dalewyn 5 days ago

        >Crossbows can increase this to ~400 J.

        Funny you mention crossbows; the Church at one point in time tried to ban them because they democratized violence to a truly trivial degree. They were the nuclear bombs and assault rifles of medieval times.

        Also, I will take this moment to mention that the "problem" with weapons always seems to be how quickly they can kill rather than the killing itself. Kind of takes away from the discussion once that is realized.

      • llm_trw 5 days ago

        Watch how a group of wild dogs kills its prey, then realise that for millennia human-like apes were part of their diet. Even the modern battlefield is more humane than the African savannah.

        • keybored 4 days ago

          > Even the modern battlefield is more humane than the African savannah.

          On behalf of dead WWI soldiers I find this offensive.

        • BehindBlueEyes 3 days ago

          Yeah, I tracked a lost dog and found the place it was caught by wolves and eventually eaten. Terrible way to go. I get now why the owner was so desperate to find it, even without any hope of the dog surviving - I'd want to end it quicker for my dogs too if this happened to them.

      • newsclues 5 days ago

        Humans can render other humans unrecognizable with a rock.

        Brutal murder is low tech.

        • graemep 5 days ago

          > Humans can render other humans unrecognizable with a rock.

          They are much less likely to.

          We have an instinctive repulsion to violence, especially to prolonging it (e.g. if the rock does not kill at the first blow).

          It is much easier to kill with a gun (and even then people need training to be willing to do it), and easier still to fire a missile at people you cannot even see.

          • meiraleal 5 days ago

            Than throwing a punch or a rock? You should check public schools.

            • ternnoburn 5 days ago

              Than killing with bare hands or a rock, which I believe is still pretty uncommon in schools.

              • meiraleal 5 days ago

                GP didn't talk about killing

                • graemep 5 days ago

                  Extreme violence then? With rocks, clubs or bare hands? I was responding to "render other humans unrecognizable with a rock", which I am pretty sure is uncommon in schools.

                • ternnoburn 5 days ago

                  Render unrecognizable? Yeah, I guess that could be survivable, but it's definitely lethal intent.

                  • meiraleal 4 days ago

                    That's possible with just a well placed punch to the nose or to one of the eyes. I've seen and done that, in public schools.

                    • ternnoburn 4 days ago

                      Uh... sure, maybe during the initial swelling. But that's temporarily rendered unrecognizable.

                      If you caused enough damage such that someone could not later be recognized, I wonder why you are not in prison.

            • graemep 5 days ago

              Not in public schools in the British sense. I assume it varies in public schools in the American sense, and I am guessing violence sufficient to render someone unrecognisable is pretty rare even in the worst of them.

        • int_19h 5 days ago

          Not at scale.

          • newsclues 5 days ago

            Armies scale up.

            It’s like the original massive scale organization.

            • paulryanrogers 5 days ago

              Scaling an army of rock swingers is a lot more work than giving one person an AK47 (when all who would oppose them have rocks).

              (Thankfully in the US we worship the 2A and its most twisted interpretation. So our toddlers do shooter drills. /s)

              • newsclues 5 days ago

                You are discounting the complexity of the logistics required for an AK47 army. You need ammo, spare parts, lubricant and cleaning tools. You need a factory to build the weapon, and churn out ammunition.

                Or: gather a group of people, tell them to find a rock, and go bash the other side's heads in.

                • int_19h 5 days ago

                  Complexity of logistics applies to any large army. The single biggest limiting factor for most of history has been the need to either carry your own food, or find it in the field. This is why large-scale military violence requires states.

                • com2kid 5 days ago

                  > You need ammo, spare parts, lubricant and cleaning tools.

                  The ak-47 famously only needs the first item in that list.

                  That being the key to its popularity.

                  • int_19h 5 days ago

                    It should be noted that the purported advantages of AK action over its competitors in this regard are rather drastically overstated in popular culture. E.g. take a look at these two vids showing how AK vs AR-15 handle lots of mud:

                    https://www.youtube.com/watch?v=DX73uXs3xGU

                    https://www.youtube.com/watch?v=YAneTFiz5WU

                    As far as cleaning, AK, like many guns of that era, carries its own cleaning & maintenance toolkit inside the gun. Although it is a bit unusual in that regard in that this kit is, in fact, sufficient to remove any part of the gun that is not permanently attached. Which is to say, AK can be serviced in the field, without an armory, to a greater extent than most other options.

                    But the main reason it's so popular isn't so much any of that; rather, it's very cheap to produce at scale, and China especially has been producing millions of AKs specifically to dump them in Africa, the Middle East, etc. But where large quantities of other firearms are available for whatever reason, you see them used just as much - e.g. the Taliban has been rocking a lot of M4s and M16s since the US left a lot of stocks behind.

                    • newsclues 4 days ago

                      The main advantage for AKs in the Ukraine conflict is ammo availability.

                      • int_19h a day ago

                        The only small arms cartridge plant that Ukraine had originally was in Luhansk, so it got captured even before 2022. It's only this year that they've got a new plant operational, but it produces both 5.45 and 5.56.

                        And Western supplies are mostly 5.56 for obvious reasons, although there are some exceptions - mostly countries that have switched fairly late and still have substantial stocks of 5.45, such as Bulgaria. But those are also limited in quantity.

                        So in practice it's not quite so simple, and Ukraine seems to be aiming for 5.56 as their primary cartridge long-term, specifically so that it's easier for Western countries to supply them with guns and ammo.

                  • newsclues 4 days ago

                    If you think the AKs in use in Russia and Ukraine aren’t getting regular maintenance, cleaning and spare parts, I don’t think you’re watching enough of the content coming out of the war zone.

                    Soldiering isn’t sexy, it’s digging trenches, cleaning kit, and eating concussive blasts waiting to fight or die.

                    You don’t sit in a bunker all day waiting to defend a trench and not clean your gun.

                    • com2kid 17 hours ago

                      It was largely a joke, but even so, famously, many of the AKs used in various other conflicts were buried in backyards between wars.

    • HPsquared 5 days ago

      I'm no longer interested in getting a motorcycle, for similar reasons.

      • spacechild1 5 days ago

        I spent my civil service as a paramedic assistant in the countryside, close to a mountain road that was very popular with bikers. I was never interested in motorbikes in the first place, but the gruesome accidents I witnessed turned me off them for good.

        • ocschwar 5 days ago

          The Venn diagram of EMTs, paramedics, and motorbike riders has no overlap.

      • zmgsabst 5 days ago

        You’re only about 20x as likely to die on a motorcycle as in a car.

        What can I say? People like to live dangerously.

        • alamortsubite 5 days ago

          Yes, but you're also far less likely to kill other people on a motorcycle than in a car (and less likely still than in an SUV or pick-up truck). So some people live much less dangerously with respect to the people around them.

        • HPsquared 5 days ago

          I suppose 20x a low number is still pretty low, especially given that number includes the squid factor.

    • nradov 5 days ago

      I don't mean to trivialize traumatic experiences but I think many modern people, especially the pampered members of the professional-managerial class, have become too disconnected from reality. Anyone who has hunted or butchered animals is well aware of the fragility of life. This doesn't damage our concept of normal life.

      • paganel 5 days ago

        My brother, an Eastern European part-time farmer and full-time lorry driver, texted me just a couple of hours ago (I had told him I would call him within the hour) that he might have his hands full of meat by then, as “we’ve just butchered our pig Ghitza” (those sausages and piftii aren’t going to make themselves).

        Now, ask a laptop worker to butcher an animal that used to have a name and to literally turn its meat into sausages, and see what said worker’s reaction would be.

        • apelapan 4 days ago

          Laptop worker here. I have participated in/been present at the butchering of sheep and pigs, and helped out making sausages a couple of times. It was fine. An interesting experience.

          There is a lot of skill that goes into it, so I couldn't do it myself. You need the guidance of someone who is knowledgeable and has the proper tools and facilities for the job.

      • Eisenstein 5 days ago

        What is it about partaking in or witnessing the killing of animals or humans that makes one more connected to reality?

        • AnarchismIsCool 5 days ago

          Lots of people who spend time working with livestock on a farm describe a certain acceptance and understanding of death that most modern people have lost.

          • Eisenstein 5 days ago

            [flagged]

            • s1artibartfast 5 days ago

              I don't have any data, but my anecdotal experience is a yes to those questions.

              >Are there other ways we can get a sense of how a more healthy acceptance of mortality would manifest?

              In concept, yes. I think a family death at home can also have a similar impact. It is not very common in the US, but 50 years ago elders would typically die at home with family. There are cultures today, even materially advanced ones, where people spend time with the freshly dead body of loved ones instead of running from it and compartmentalizing it.

            • Nasrudith 4 days ago

              It isn't farmers, but oncologists are notably more likely to choose hospice for themselves when it comes to cancer care. It's a similar story for familiarity with death and end-of-life decisions.

              Of course that case is probably related to knowing the actual probabilities and the suffering involved. Medicine isn't just "drink a potion and be instantly cured or instantly die", it is a long painful process.

            • sabbaticaldev 5 days ago

              [flagged]

              • Eisenstein 5 days ago

                Socratic questioning is not cluelessness and your inability to answer does not bolster your position.

                • zmgsabst 5 days ago

                  Socratic questioning requires the asker to have a deeper understanding whereby they guide with their questions.

                  Do you think that’s what people see in yours?

                  • Eisenstein 5 days ago

                    I don't know what people are seeing in my questions, but apparently they don't like answering them, because no one has.

                    I'm trying to understand what people mean by 'detachment from reality' and how such a thing is related to 'understanding of mortality', and how a deeper understanding of mortality and acceptance of death would manifest in ways that can be seen.

                    If 'acceptance of death' does not actually mean that they are more comfortable talking about death, or allowing people to choose their own deaths, or accepting their loved ones' deaths with more ease, then what does it mean? Is it something else? Why can't anyone say what it is?

                    Why is it so obvious to the people stating that it happens, and yet no one can explain why the questions I asked are wrong or unanswerable?

                    If this is some basic conflict of frameworks, wherein I am making assumptions that make no sense to the people making the assertions I am questioning, then what am I missing here?

                    • zmgsabst 4 days ago

                      > I don't know what people are seeing in my questions, but apparently they don't like answering them

                      > I'm trying to understand

                      Wouldn’t people respond poorly to your questions because they seem facetious, when that’s precisely what people mean - and obviously so?

                      Eg, my niece dealt better with pets dying than other kids her age I’ve known since her family regularly slaughters chickens.

                      > people mean by 'detachment from reality' and how such a thing is related to 'understanding of mortality'

                      This too is so obvious that people think you’re responding in poor faith — eg, the professional managerial class has destroyed multiple cities by being so detached from reality that they no longer imprison career criminals, resulting in social breakdown not even seen in many third world countries.

                      That’s why I don’t think it’s Socratic questioning: just you not understanding the basic implications or yourself being unaware of reality.

                      > no one can explain why the questions I asked are not being answered or are wrong?

                      I did say why: you come across as arrogant and ignorant by asking seemingly facetious questions about obvious implications — then comparing yourself to Socrates for doing so.

        • Dalewyn 5 days ago

          In Japan, some sushi bars keep live fish in tanks that you can order to have served to you as sushi/sashimi.

          The chefs butcher and serve the fish right in front of you, and because it was alive merely seconds ago the meat will still be twitching when you get it. If they also serve the rest of the fish as decoration, the fish might still be gasping for oxygen.

          The Japanese don't really think much of it; they're used to it, and they acknowledge the fleeting nature of life and that eating something means taking another life to sustain your own.

          The same environment will likely leave most westerners squeamish or perhaps even gagging, simply because the west goes out of its way to hide where food comes from, even though that simply is the reality we all live in.

          Personally, I enjoy meat while respecting and appreciating the fact that the steak or sashimi or whatever in front of me was a live animal at one point, just like me. Salads too; those vegetables were (are?) just as alive as I am.

          • caymanjim 5 days ago

            Plenty of westerners are not as sheltered from their food as you. Have you never gone fishing and watched your catch die? Have you never boiled a live crab or lobster? You've clearly never gone hunting.

            Not to mention the millions of Americans working in the livestock and agriculture business who see up close every day how food comes to be.

            A significant portion of the American population engages directly with their food and the death process. Citing one gimmicky example of Asian culture where squirmy seafood is part of the show doesn't say anything about the culture of entire nations. That is not how the majority of Japanese consume seafood. It's just as anomalous there. You only know about it because it's unusual enough to get reported.

            You can pick your lobster out of the tank and eat it at American restaurants too. Oysters and clams on the half-shell are still alive when we eat them.

            • Dalewyn 5 days ago

              >Plenty of westerners are not as sheltered from their food as you. ... You only know about it because it's unusual enough to get reported.

              In case you missed it, you're talking to a Japanese person.

              Some restaurants go a step further by letting customers literally fish for their dinner out of a pool. Granted, those restaurants are a niche; that's their whole selling point to customers looking for something different.

              Most sushi bars have a tank holding live fish and other seafood of the day, though. It's a pretty mundane thing.

          • Eisenstein 5 days ago

            If I were to cook a pork chop in the kitchen of some of my middle eastern relatives they would feel sick and would probably throw out the pan I cooked it with (and me from their house as well).

            Isn't this similar to why people unfamiliar with that style of seafood would feel sick -- cultural views on what is and is not normal food -- and not because of their view of mortality?

            • Dalewyn 5 days ago

              You're not grasping the point, though I don't necessarily blame you.

              Imagine that to cook that pork chop, the chef starts by butchering a live pig. Also imagine that he does that in view of everyone in the restaurant rather than in the "backyard" kitchen let alone a separate butchering facility hundreds of miles away.

              That's the sushi chef butchering and serving a live fish he grabbed from the tank behind him.

              When you can actually see where your food is coming from and what "food" truly even is, that gives you a better grasp on reality and life.

              It's also the true meaning behind the often used joke that goes: "You don't want to see how sausages are made."

              • Eisenstein 5 days ago

                I grasp the point just fine, but you haven't convinced me that it is correct.

                The issue most people would have with seeing the sausage being made isn't necessarily watching the slaughtering process but with seeing pieces of the animal used for food that they would not want to eat.

                • Dalewyn 5 days ago

                  But isn't that the point? If someone is fine eating something so long as he is ignorant or naive, doesn't that point to a detachment from reality?

                  • Eisenstein 5 days ago

                    I wouldn't want to eat a cockroach regardless of whether I saw it being prepared or not. The point I am making is that 'feeling sick' and not wanting to eat something isn't about being disconnected from the food. Few people would care if you cut off a piece of steak from a hanging slab and grilled it in front of them, but would find it gross to pick up all the little pieces of gristle and organ meat that fell onto the floor, grind it all up, shove it into an intestine, and cook it.

                    • ImPostingOnHN 5 days ago

                      > Few people would care if you cut off a piece of steak from a hanging slab

                      The analogy here would be watching a live cow get slaughtered and then butchered from scratch in front of you, which I think most Western audiences (more than a few) might not like.

                      • Eisenstein 5 days ago

                        A cow walks into the kitchen; it gets a captive bolt shoved into its brain by a person holding a compressed air tank. Its hide is ripped off and it is cut into two pieces, with all of its guts on the ground, and the flesh and bones now hang as slabs.

                        I am asserting that you could do all of that in front of a random assortment of modern Americans, and then cut steaks off of it and grill them and serve them to half of the crowd, and most of those people would not have a problem eating those steaks.

                        Then, if you were to scoop up all the leftover, non-steak bits from the ground with shovels, throw them into a giant meat grinder, take the intestines from a pig, remove the feces from them, fill them with the output of the grinder, cook that, and serve it to the other half of the crowd, a statistically larger proportion of that crowd would not want to eat it compared to the ones who ate the steak.

                        • ImPostingOnHN 5 days ago

                          > I am asserting that you could do all of that in front of a random assortment of modern Americans, and then cut steaks off of it and grill them and serve them to half of the crowd, and most of those people would not have an problem eating those steaks.

                          I am asserting that the majority of western audiences, including Americans, would dislike being present for the slaughtering and butchering portion of the experience you describe.

                        • BehindBlueEyes 3 days ago

                          I'm 100% sure none of my colleagues would eat the steak if they could see the live cow get killed and skinned first. They wouldn't go to that restaurant to begin with, and they'd lose their appetite entirely if they somehow made it there.

                          I probably also wouldn't want to eat that, but more because that steak will taste bad without being aged properly.

                        • kenjackson 5 days ago

                          You’re just going down the list of things that sound disgusting. The second sounds worse than the first but both sound horrible.

                          • Eisenstein 4 days ago

                            Sorry I got a bit too involved in the discussion and just should have let it go a long time ago.

                      • abduhl 5 days ago

                        Most audiences wouldn’t like freshly butchered cow anyway - freshly butchered meat is tough and not very flavorful; it needs to be aged to tenderize and develop.

                        • ImPostingOnHN 5 days ago

                          The point is that most Western audiences would likely find it unpleasant to be there for the slaughtering and butchering from scratch.

                          • Dalewyn 5 days ago

                            That the point is being repeated to no effect ironically illustrates how most modern people (westerners?) are detached from reality with regards to food.

                            • Eisenstein 5 days ago

                              To me, the logical conclusion is that they don't agree with your example and think that you are making connections that aren't evidenced from it.

                              I think you are doing the same exact thing with the above statement as well.

                              • Dalewyn 5 days ago

                                  In the modern era, most of the things the commons come across have been "sanitized"; we do a really good job of hiding all the unpleasant things. Of course, this means the modern-day commons have a fairly skewed, "sanitized" impression of reality, and will get shocked awake if or when they see what is usually hidden (eg: the butchering of food animals).

                                That you insist on contriving one zany situation after another instead of just admitting that people today are detached from reality illustrates my point rather ironically.

                                Whether it's butchering animals or mining rare earths or whatever else, there's a lot of disturbing facets to reality that most people are blissfully unaware of. Ignorance is bliss.

                                • abduhl 5 days ago

                                  To be blunt, the way you express yourself on this topic comes off as very "enlightened intellectual." It's clear that you think that your views/assumptions are the correct view and any other view is one held by the "commons"; one which you can change simply by providing the poor stupid commons with your enlightened knowledge.

                                  Recall that this whole thread started with your proposition that seeing live fish prepared in front of someone "will likely leave most westerners squeamish or perhaps even gag simply because the west goes out of its way to hide where food comes from, even though that simply is the reality we all live in." You had no basis for this as far as I can tell, it's just a random musing by you. A number of folks responded disagreeing with you, but you dismissed their anecdotal comments as being wrong because it doesn't comport with your view of the unwashed masses who are, obviously, feeble minded sheep who couldn't possibly cope with the realities of modern food production in an enlightened way like you have whereby you "enjoy meats respecting and appreciating the fact that the steak or sashimi or whatever in front of me was a live animal at one point just like me." How noble of you. Nobody (and I mean this in the figurative sense not the literal sense) is confused that the slab of meat in front of them was at one point alive.

                                  Then you have the audacity to accuse someone of coming up with "zany" situations? You're the one that started the whole zany discussion in the first place with your own zany musings about how "western" "commons" think!

                  • sensanaty 5 days ago

                    I grew up with my farmer grandpa who was a butcher, and I've seen him butcher lots of animals. I always have and probably always will find tongues & brains disgusting, even though I'm used to seeing how the sausage is made (literally).

                    Some things just tickle the brain in a bad way. I've killed plenty of fish myself, but I still wouldn't want to eat one that's still moving in my mouth, not because of ickiness or whatever, but just because the concept is unappealing. I don't think this is anywhere near as binary as you make it seem, really.

        • shadowerm 2 days ago

          No irony in this comment lol.

    • Shorel 5 days ago

      > Becoming aware of this is super damaging to our concept of normal life.

      Not being aware of this is also a cause of traffic accidents. People should be more careful driving.

      • BehindBlueEyes 3 days ago

        You can be aware without having to see the most gruesome parts of it to a point where it is traumatizing and damaging.

        I've seen the crumpled metal of a car, I don't need to see the people inside to know it is not good.

    • sandworm101 5 days ago

      >> ridiculous how people think war is like Call of Duty.

      It is also ridiculous how people think every soldier's experience is like Band of Brothers or Full Metal Jacket. I remember an interview with a WWII vet who had been on Omaha Beach: "I don't remember anything happening in slow motion ... I do remember eating a lot of sand." The reality of war is often just not visually interesting enough to put on the screen.

    • portaouflop 5 days ago

      Normal does not exist - it’s just the setting on your washing machine.

  • LeftHandPath 5 days ago

    Earlier this year, I was at ground zero of the Super Bowl parade shooting. I didn’t ever dream about it, but I spent the following 3-4 days constantly replaying it in my head in my waking hours.

    Later in the year I moved to Florida, just in time for Helene and Milton. I didn’t spend much time thinking about either of them (aside from during prep and cleanup and volunteering a few weeks after). But I had frequent dreams of catastrophic storms and floods.

    Different stressors affect people (even me) differently. Thankfully I’ve never had a major or long-term problem, but my reactions to major life stressors have never seemed to have any rhyme or reason.

    I can imagine many people might’ve been through a few things that made them confident they’d be alright with the job, only to find out that dealing with that stuff 8 hours a day, 40 hours a week is a whole different ball game.

    • sandworm101 5 days ago

      A parade shooting is bad, very bad, but is still tame compared to the sorts of things to which website moderators are exposed on a daily/hourly basis. Footage of people being shot is actually allowed on many platforms. Just think of all the war footage that is so common these days. The dark stuff that moderators see is way way worse.

      • wkat4242 5 days ago

        > Footage of people being shot is actually allowed on many platforms.

        It's also part of almost every American cop and military show and movie. Of course it's not real but it looks the same.

        • guerrilla 5 days ago

          > Of course it's not real but it looks the same.

          I beg to differ. TV shows and movies are silly. Action movies are just tough-guy dancing.

          • jnwatson 5 days ago

            "Tough guy dancing" is such an apt phrase.

            The organizer is even called a "fight choreographer".

          • wkat4242 5 days ago

            I mean more the gory parts. Blood, decomposed bodies everywhere etc.

            And I wasn't talking about action hero movies.

        • 7bit 4 days ago

          It absolutely does not look the same. You instinctively know that what you see is just acting. I somehow don't believe that you have seen a real video of a person getting shot or beheaded or sucked into a lathe. Seeing a life getting wiped out is emotionally completely different, because it's more than a picture you are processing. It only looks the same if you have zero empathy or are a psychopath.

  • consumer451 5 days ago

    I have often wondered what would happen if social product orgs required all dev and product team members to temporarily rotate through moderation a couple times a year.

    • alex-korr 5 days ago

      I can tell you that back when I worked as a dev in the department building order-fulfillment software at a dotcom, my perspective on my own product changed drastically after I spent a month at a warehouse shipping the orders that came out of the software we wrote. Eating my own dog food was not pretty.

    • Teever 5 days ago

      Yeah I've wondered the same thing about jobs in general too.

      Society would be a very different place if everyone had to do customer service or janitorial work one weekend a month.

      • ANewFormation 5 days ago

        Many (all?) Japanese schools don't have janitors. Instead, students clean on rotation. I've never been much into Japanese stuff, but I absolutely admire this about their culture, and imagine it's part of the reason Japan is such a clean and, at least superficially, respectful society.

        Living in other Asian nations, where there are often de facto invisible caste systems, can be nauseating at times - you have parents who won't allow their children to participate in clean-up efforts because their child is 'above handling trash.' That's gonna be one well-adjusted adult...

1vuio0pswjnm7 5 days ago

Perhaps this is what happens when someone creates a mega-sized website comprising hundreds of millions of pages using other people's submitted material, effectively creating a website that is too large to "moderate". By letting the public publish their material on someone else's mega-sized website instead of hosting their own, perhaps it concentrates the web audience, making it more suitable for advertising. Perhaps if the PTSD-causing material were published by its authors on the authors' own websites, the audience would be small and not suitable for advertising. A return to less centralised web publishing would perhaps be bad for the so-called "ad ecosystem" created by so-called "tech" company intermediaries. To be sure, it would also mean no one in Kenya would be intentionally subjected to PTSD-causing material in the name of fulfilling the so-called "tech" industry's only viable "business model": surveillance, data collection and online ad services.

  • coryrc 5 days ago

    It's a problem when you don't verify the identity of your users and hold them responsible for illegal things. If Facebook verified you were John D, SSN 123-45-6789, they could report you for uploading CSAM and otherwise permanently block you from using the site for uploading objectionable material; meaning exposure to horrific things is only necessary once per banned user. I would expect that to be orders of magnitude less than what they deal with today.

    • renewiltord 5 days ago

      [flagged]

      • DaSHacka 5 days ago

        You can thank IRL privacy activists for the lack of cameras in every room in each house; Just imagine how much faster domestic disputes could be resolved!

        • renewiltord 5 days ago

          Sure, there’s a cost-benefit to it. We think that privacy is more important than rapid resolution of domestic disputes and we think that privacy is more important than stopping child porn. That’s fine as a statement.

      • IshKebab 5 days ago

        Rubbish. The reason Facebook doesn't want to demand ID for most users is that it adds friction to using their product, which means fewer users and less profit.

        • stickfigure 4 days ago

          Sounds like Facebook and I have the same interest, then.

      • kelseyfrog 5 days ago

        Unsurprising lack of response to this statement. It's 100% true, and any cost-benefit calculation of privacy should account for it.

  • croissants 5 days ago

    A return to less centralized web publishing would also be bad for the many creators who lack the technical expertise or interest to jump through all the hoops required to build and host their own website. Maybe this seems like a pretty small friction to the median HN user, but I don't think it's true for creators in general, as evidenced by the enormous increase in both the number and sophistication of online creators over the past couple of decades.

    Is that increase worth traumatizing moderators? I have no idea. But I frequently see this sentiment on HN about the old internet being better, framed as criticism of big internet companies, when it really seems to be at least in part criticism of how the median internet user has changed -- and the solution, coincidentally, would at least partially reverse that change.

    • apitman 5 days ago

      Content hosting for creators can be commoditized.

      Content discovery may even be able to remain centralized.

      No idea if there's a way for it to work out economically without ads, but ads are also unhealthy so maybe that's ok.

      • lalalali 5 days ago

        Introducing a free unlimited hosting service where you could only upload pictures, text or video. There’s a public page to see that content among ads, plus links to your friends' free hosting service pages. The TOS is a give-give: you give them the right to extract all the aggregated stats they want and display the ads; they give you the service for free, so you own your content (and are legally responsible for it).

    • moomin 5 days ago

      I mean, the technical expertise thing is solvable; it’s just that no one wants to solve it, because SaaS is extremely lucrative.

yodsanklai 5 days ago

I'm wondering if there are precedents in other domains. There are other jobs where you do see disturbing things as part of your duty. E.g. doctors, cops, first responders, prison guards and so on...

What makes moderation different? And how should it be handled so as to reduce harm and risk? Surely banning social media or not moderating content aren't options. AI helps to some extent but doesn't solve the issue entirely.

  • sd9 5 days ago

    I don’t have any experience with this, so take this with a pinch of salt.

    What seems novel about moderation is the frequency with which you confront disturbing things. I imagine companies like Meta have such good automated moderation that what remains to be viewed by a human is practically a firehose of almost certainly disturbing shit. And as soon as you’re done with one post, the next is right there. I doubt moderators spend more than 30 seconds on the average image, which is an awful lot of stuff to see in one day.
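
    (Back of the envelope, assuming that 30-second guess holds: 3,600 s ÷ 30 s = 120 items per hour, or close to 1,000 items per 8-hour shift.)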

    A doctor just isn’t exposed to that sort of imagery at the same rate.

    • tossandthrow 5 days ago

      > I imagine companies like Meta have such good automated moderation that what remains to be viewed by a human is practically a firehose of almost certainly disturbing shit.

      On the contrary, I would expect that it would be the edge cases that they were shown - why loop in a content moderator if you can be sure that it is prohibited on the platform without exposing one?

      In this light, it might make sense why they sue: they are there more as a political org, so that Facebook can say "We employ 140 moderators in Kenya alone!" while they do indifferent work that Facebook can already filter out.

      • Retric 5 days ago

        Even if 1% of images are disturbing, that’s multiple per hour, let alone across months.

        US workers’ comp covers PTSD acquired on the job, and these kinds of jobs are rife with it.

      • pizza 5 days ago

        > They are there more as a political org, so that Facebook can say "We employ 140 moderators in Kenya alone!", while they do indifferent work that Facebook can already filter out.

        Why assume they're just token diversity hires who don't do useful work..?

        Have you ever built an automated content moderation system? Let me tell you something about them if not: no matter how good your automated moderation tool is, it is pretty much always trivial for someone familiar with its inputs and outputs to come up with an input it mis-predicts embarrassingly badly. And you know what makes the biggest difference? Humans specifying the labels.

        • tossandthrow 4 days ago

          I don't assume diversity hires, I assume that these people work for the Kenyan part of Facebook and that Facebook employs an equivalent workforce elsewhere.

          I am also not saying that content moderation should catch everything.

          What I am saying is that the content moderation teams should ideally decide on the edge cases, as those are hard for automated systems.

          In turn, that also means these people ought not to be exposed to the most hardcore material, as that is easier to classify automatically.

          Lastly I say that if that is not the case - then they are probably not there to carry out a function but to fill a political role.

      • aoanevdus 5 days ago

        Content moderation also involves reading text, so you’d imagine that there’s a benefit to having people who can label data and provide ground truth in any language you’re moderating.

        Even with images, you can have different policies in different places or the cultural context can be relevant somehow (eg. some country makes you ban blasphemy).

        Also, I have heard of outsourcing to Kenya just to save cost. Living is cheaper there so you can hire a desk worker for less. Don’t know where the insistence you’d only hire Kenyans for political reasons comes from.

    • lm28469 5 days ago

      Also a doctor is paid $$$$$ and it mostly is a vocational job

      Content moderator is a min wage job with bad working hours, no psychological support, and you spend your day looking at rape, child porn, torture and executions.

      • gruez 5 days ago

        >Also a doctor is paid $$$$$

        >Content moderator is a min wage job

        So it's purely a monetary dispute?

        >bad working hours, no psychological support, and you spend your day looking at rape, child porn, torture and executions.

        Many other jobs have the same issues, though admittedly with less frequency, but where do you draw the line?

        • diggan 5 days ago

          > but where do you draw the line?

          How about grouping the jobs into two categories: A) Causes PTSD and B) Doesn't cause PTSD

          If a job has a consistently high percentage of people ending up with PTSD, then those people aren't being equipped well enough to handle it by the company that employs them.

          • gruez 5 days ago

            >How about grouping the jobs into two categories: A) Causes PTSD and B) Doesn't cause PTSD

            I fail to see how this addresses my previous questions of "it's purely a monetary dispute?" and "where do you draw the line?". If a job "Causes PTSD" (whatever that means), then what? Are you entitled to hazard pay? Does this work out in the end to a higher minimum wage for certain jobs? Moreover, we don't have similar classifications for other hazards, some of which are arguably worse. For instance, dying is probably worse than getting PTSD, but the most dangerous jobs have pay that's well below the national median wage[1][2]. Should workers in those jobs be able to sue for redress as well?

            [1] https://www.ishn.com/articles/112748-top-25-most-dangerous-j...

            [2] https://www.bls.gov/oes/current/oes_nat.htm

        • bloppe 5 days ago

          > So it's purely a monetary dispute?

          I wouldn't say purely, but substantially, yes. PTSD has costs. The article lays some out: therapy, medication, and mental, physical, and social health issues. Some of these money can directly cover, whereas others can only be kinda sorta justified with high enough pay.

          I think a sustainable moderation industry would try hard to attract the kinds of people who are able to perform this job without too much negative impact, quickly relieve those who try but are not well suited, and pay for some therapy.

    • ajb 5 days ago

      Also doctors are very frequently able to do something about it. Being powerless is a huge factor in mental illness.

    • prng2021 5 days ago

      “I would imagine that companies like Meta have such good automated moderation that what remains to be viewed by a human is practically a firehose of almost certainly disturbing shit.”

      This doesn’t make sense to me. Their automated content moderation is so good that it’s unable to detect “almost certainly disturbing shit”? What kind of amazing automation only works with subtleties but not certainties?

      • sd9 5 days ago

        I assumed that, at the margin, Meta would prioritise not wrongly blocking legitimate content. In other words, they would prefer that as many legitimate posts are published as possible.

        So the things that are flagged for human review would be on the boundary, but trend more towards disturbing than legitimate, on the grounds that the human in the loop is there to try and publish as many posts as possible, which means sifting through a lot of disturbing stuff that the AI is not sure about.

        There’s also the question of training the models - the classifiers may need labelled disturbing data. But possibly not these days.

        However, yes, I expect the absolute most disturbing shit to never be seen by a human.

        Again, literally no experience, just a guy on the internet pressing buttons on a keyboard.
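
        For what it’s worth, here is a minimal sketch of the triage I’m picturing (the thresholds and names are my guesses, not anything Meta has confirmed):

          # Hypothetical triage: a classifier score decides what a human ever sees.
          def route(p_bad: float) -> str:
              # p_bad: model's estimated probability that a post violates policy
              if p_bad >= 0.99:
                  return "auto_remove"    # near-certain violations never reach a human
              if p_bad <= 0.05:
                  return "auto_publish"   # near-certain legitimate posts
              return "human_review"       # the ambiguous middle, which skews disturbing

        Tightening the automatic thresholds shrinks the human queue but makes more wrong calls automatically; loosening them floods the reviewers. Either way, the queue a human sees is the ambiguous slice, not a random sample.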

        • gruez 5 days ago

          >In other words, they would prefer that as many legitimate posts are published as possible.

          They'd prefer that as many posts are published as possible, but they probably also don't mind some posts being removed if it means saving a buck. When Canada and Australia implemented a "link tax", they were happy to ban all news content to avoid paying it.

          • sd9 5 days ago

            Yes, Meta are economically incentivised to reduce the number of human reviews (assuming the cost of improving the model is worthwhile).

            This probably means fewer human reviewers reviewing a firehose, not the same number of human reviewers reviewing content at a slower rate.

    • itake 5 days ago

      I’d think the higher density/frequency of disturbing content would cause people to be desensitized.

      I’ve never seen blood or gore in my life and find seeing it shocking.

      But I’d imagine gore is a weekly situation for surgeons.

      • 0_____0 5 days ago

        I watch surgery videos sometimes, out of fascination. It's not gore to me - sure it's flesh and blood but there is a person whose life is going to be probably significantly better afterwards. They are also not in pain.

        I exposed myself to actual gore vids in the aughts and teens... That stuff still sticks with me in a bad way.

        Context matters a lot.

        • itake 5 days ago

          > They are also not in pain.

          My understanding is that during surgery, your body is most definitely in pain. Your body still reacts as it would to any damage, but anesthetics block the pain signals from reaching the brain.

          • mewpmewp2 5 days ago

            But there is a difference between someone making an effort to heal someone else vs content implying that something really disturbing happened, the kind of thing that makes you lose faith in humanity.

            • itake 4 days ago

              Should the Uvalde police sue the school for putting them through that trauma?

      • sd9 5 days ago

        I agree. But that might be comorbid with PTSD. It’s probably not good for you to be _that_ desensitised to this sort of thing.

        I also feel like there’s something intangible regarding intent that makes moderation different from being a doctor. It’s hard for me to put into words, but doctors see gore because they can hopefully do something to help the individual involved. Moderators see gore but are powerless to help the individual, they can only prevent others from seeing the gore.

        • diggan 5 days ago

          It's also the type of gore that matters. Some of the worst stuff I've seen wasn't the worst because of the visuals, but because of the audio. Hearing people begging for their life while being executed surely would feel different to even a surgeon who might be used to digging around in people's bodies.

        • itake 5 days ago

          There are many common situations where professionals are helpless, like the people who need to clean up dead bodies after an accident.

          • sangnoir 5 days ago

            Imagine if this becomes a specialized, remote job where one tele-operates the brain-and-blood-scrubbing robot all workday long, accident after accident after accident. I am sure they'd get PTSD too; granted, sometimes it's just oil and coolant, but there's still a lot of body tissue involved.

            • itake 4 days ago

              I'd really like to see more data on this. I really think (most) people would become desensitized rather than hypersensitive to this content.

              Another facet to this is that the moderators willingly agreed to review this content and had full autonomy to leave the job at any point.

      • crawfordcomeaux 5 days ago

        Desensitization is only one stage of it. It's not permanent & requires dissociation from reality/humanity on some level. But that stuff is likely to come back and haunt one in some way. If not, it's likely a symptom of something deeper going on.

        My guess is that's why it's after bulldozing hundreds of Palestinians, instead of 1 or 10s of them, that Israeli soldiers report PTSD.

        If you haven't watched enough videos of the ongoing genocides in the world to realize this, it'll be a challenge to have a realistic take on this article.

    • mrweasel 4 days ago

      > I imagine companies like Meta have such good automated moderation

      I imagine that they have a system that is somewhere between shitty and non-functional. This is the company that will, more often than not, flag marketplace posts as "Selling animal", either completely at random or because the pretty obvious phrase "from an animal-free home" is used.

      If they can't get this basic text parsing correct, how can you expect them to correctly flag images with any real sense of accuracy?

  • fcmgr 5 days ago

    A friend's friend is a paramedic, and as far as I remember they can take the rest of the day off after witnessing a death on duty, and there's an obligatory consultation with a mental healthcare specialist. From reading the article, it seems like those moderators are seeing horrific things almost constantly throughout the day.

    • _qua 5 days ago

      I've never heard of a policy like that for physicians and doubt it's common for paramedics. I work in an ICU and a typical day involves a death or resuscitation. We would run out of staff with that policy.

      • wongarsu 5 days ago

        Maybe it's different in the US where ambulances cost money, but here in Germany the typical paramedic will see a wide variety of cases, with the vast majority of patients surviving the encounter. Giving your paramedic a day off after witnessing a death wouldn't break the bank. In the ICU or emergency room it would be a different story.

        • _qua 5 days ago

          Ambulances cost money everywhere, it's just a matter of who is paying. Do we think paramedics in Germany are more susceptible to PTSD when patients die than ICU or ER staff, or paramedics anywhere?

          • wongarsu 5 days ago

            > Ambulances cost money everywhere

            Not in the sense that matters here: the caller doesn't pay (unless the call is frivolous), leading to more calls that are preemptive, overly cautious or for non-life-threatening cases. That behind the scenes people and equipment are paid for, and that a whole structure exists to do so, isn't really relevant here.

            > Do we think paramedics in Germany are more susceptible to PTSD

            No, we think that there are far more paramedics than ICU or ER staff, and helping them in small ways is pretty easy. For ICU and ER staff you would obviously need other measures, like staffing those places with people less likely to get PTSD or giving them regular counseling by a staff therapist (I don't know how this is actually handled, just that the problem is very different than the issue of paramedics)

      • kranke155 5 days ago

        Maybe a different country than yours ?

      • fcmgr 4 days ago

        I might have misremembered that, but I remember hearing the story. Now that I think about it, I believe the policy applied only after unsuccessful CPR attempts.

    • magicalhippo 5 days ago

      My friend has repeatedly mentioned his dad became an alcoholic due to what he saw as a paramedic. This was back in the late 80s, early 90s so not sure they got any mental health help.

    • Ray20 5 days ago

      Sounds crazy. Just imagine dying because the paramedic responsible for your survival wanted to end his day early.

  • smackay 5 days ago

    I expect first responders rarely have to deal with the level of depravity mentioned in this Wired article from 2014, https://www.wired.com/2014/10/content-moderation/

    You probably DO NOT want to read it.

    There's a very good reason moderators are employed in far-away countries, where people are unlikely to have the resources to gain redress for the problems they have to deal with as a result.

  • gcr 5 days ago

    Burnout, PTSD, and high turnover are also hallmarks of suicide hotline operators.

    The difference? The reputable hotlines care a lot more about their employees' mental health, with mandatory breaks, free counseling, full healthcare benefits (including provisions for preventative mental health care like talk therapy).

    Another important difference is that suicide hotlines are decoupled from the profit motive. As more and more users sign up to use a social network, it gets more profitable and more and more load needs to be borne by the human moderation team. But suicide and mental health risk is (roughly) constant (or slowly increasing with societal trends, not product trends).

    There's also less of an incentive to minimize human moderation cost. In large companies, some directors view mod teams as a cost center that takes away from other ventures. In an organization dedicated only to suicide hotline response, a large share of the income (typically fundraising or donations) goes directly into the service itself.

  • Spooky23 5 days ago

    In many states, pension systems give police and fire service sworn members a 20 year retirement option. The military has similar arrangements.

    Doctors and lawyers can’t afford that sort of option, but they tend to embrace alcoholism at higher rates and collect ex-wives.

    Moderation may be worse in some ways. All day, every day, you see depravity at scale. You see things that shouldn’t be seen. Some of it you can stop, some you cannot due to the nature of the rules.

    I think banning social media isn’t an answer, but demanding changes to the algorithms to reduce engagement with high-risk content is key.

  • athrowaway3z 5 days ago

    I'm not sure your comparisons are close enough to be considered precedents.

    My guess is that even standing at the ambulance drive-in of a big hospital, you'll not see as many horrors in a day as these people see in 30 minutes.

    • s1artibartfast 5 days ago

      My friends who are paramedics have seen some horrific scenes. They have also been shot, stabbed, and suffered lifelong injuries.

      They are obviously not identical scenarios. They have similarities and they also have differences.

    • lazide 5 days ago

      Outside of some specific cities, I can guarantee it. Even a busy Emergency Dept on Halloween night had only a small handful of bloody patients/trauma cases, and nothing truly horrific when I did my EMT rotation.

  • danielheath 4 days ago

    Trauma isn’t just a function of what you’ve experienced, but also of what control you had over the situation and whether you got enough sleep.

    Being a doctor and helping people through horrific things is unlike helplessly watching them happen.

    IIRC, PTSD is far more common among people with sleep disorders, and it’s believed that the lack of good sleep prevents upsetting memories from being processed.

  • insane_dreamer 5 days ago

    at least in the US, those jobs - doctors, cops, firefighters, first responders - are well compensated (not sure about prison guards), certainly compared to content moderators who are at the bottom of the totem pole in an org like FB

    • caymanjim 5 days ago

      What does compensation have to do with it? Is someone who stares at thousands of traumatizing, violent images every day going to be less traumatized if they're getting paid more?

      • t-3 5 days ago

        Yes, they will be much more able to deal with the consequences of that trauma than someone who gets a pittance to do the same thing. A low-wage peon won't even be able to afford therapy if they need it.

      • unaindz 5 days ago

        At least they can pay for therapy and afford to stop working or find another job

    • BoxFour 5 days ago

      Shamefully, first responders are not well compensated - usually it's ~$20 an hour.

      • kevin_thibedeau 5 days ago

        I've lived places where the cops make $100k+. It all depends on location.

        • BoxFour 5 days ago

          Sorry - I'm specifically referring to EMTs and Paramedics, who usually make somewhere in the realm of $18-25 an hour.

    • stickfigure 4 days ago

      In my district, all the firefighters are volunteers (including me). Yeah, we deal with some crappy medical calls and sometimes deaths. It's nowhere near as dramatic as the non-first-responders in this thread seem to think.

  • Nasrudith 4 days ago

    I suspect that what makes it different is the concentrated exposure to nothing but flagged content; that is what makes it especially traumatizing. Of course there is probably a bell curve of sorts for "experiences" vs "level of personal trauma". One incident might be enough for someone "weaker" to develop PTSD. Not a slight on the afflicted, just how things are.

    Casual Facebook viewers may stumble across something disturbing, but they certainly don't get PTSD at the rate of the poor moderators. Likewise, the professionals usually have their own levels of exposure to messed-up stuff. Meanwhile, the child pornography investigation departments that have to catalogue evidence are notorious for suffering poor mental health even with heavy protective measures.

    There is already the 'blacklist hash' approach to known bad images, which can help reduce exposure: not everyone needs to see, say, the same brutal cartel execution video; the bot takes care of it. I don't know anything about Facebook's internal practices, but given their engineering base I would presume they are already doing this or something similar.
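
    As a rough illustration of the idea (exact hashing only, and the names and flow here are made up; real systems use perceptual hashes such as Microsoft's PhotoDNA or Facebook's open-source PDQ so that re-encoded or lightly edited copies still match):

      import hashlib

      KNOWN_BAD_DIGESTS: set[str] = set()  # hypothetical store of already classified images

      def handle_upload(image_bytes: bytes) -> str:
          # Hash the upload and check it against the blacklist before any
          # human has to look at it.
          digest = hashlib.sha256(image_bytes).hexdigest()
          if digest in KNOWN_BAD_DIGESTS:
              return "auto_remove"       # known-bad: no moderator exposure needed
          return "queue_for_review"      # unknown: may still need human eyes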

    Dilution is likely the answer for how to make the job more palatable and less traumatizing: keep exposure to the really bad stuff at proportions similar to what other careers experience. Not having 'report button' and 'checking popular content' as separate tasks and teams would probably help a little bit; I suspect the moderators wouldn't be as traumatized if they also got to click through ordinary trending posts all day. A dilution approach would still have to deal with the logistical trade-offs of what is viable: increasing the moderation payroll a hundred-fold and having them work at effectively 1% efficiency would make for better moderator experiences, but Facebook would be understandably reluctant to go from spending 5% of revenue on content moderation to 50%.

  • whaleofatw2022 5 days ago

    From those I know that worked in the industry, contractor systems are frequently abused to avoid providing the right level of counseling/support to moderators.

  • evertedsphere 5 days ago

    I think part of it is the disconnection from the things you're experiencing. A paramedic or firefighter is there, acting in the world, with a chance to do good and some understanding of how things can go wrong. A content moderator is getting images beamed into their brain that they have no preparation for, of situations that they have no connection to or power over.

    • rrr_oh_man 5 days ago

      > A paramedic or firefighter is there, acting in the world, with a chance to do good and some understanding of how things can go wrong.

      That's bullshit. Ever talked to a paramedic or firefighter?

  • hnlmorg 4 days ago

    Frequency, plus lack of post-traumatic support.

    A content moderator for Facebook will invariably see more depravity, and see it more frequently, than a doctor or police officer, and will likely receive far less support from their employer to deal with it emotionally.

    The result is that employees have neither the time nor the tools to process what they see.

  • siliconc0w 5 days ago

    ER docs definitely get PTSD. Cops too.

  • croes 5 days ago

    Doctors, cops, first responders, prison guards see different horrible things.

    Content moderators see all of that.

  • sangnoir 5 days ago

    As other sibling comments noted: most other jobs don't have the same frequent exposure to disturbing content. The closest are perhaps combat medics in an active warzone, but even they usually get some respite by being rotated.

  • DocTomoe 5 days ago

    Doctors, cops, first responders, prison guards, soldiers etc also just so happen to be the most likely groups of people to develop PTSD.

  • amelius 5 days ago

    Don't forget judges, especially the ones in this case ...

    And it used to be priests who had to deal with all the nasty confessions.

    • Cumpiler69 4 days ago

      Judges get loads of compensation and perks.

  • apitman 5 days ago

    > surely banning social media or not moderating content aren't options

    Why not? What good has social media done that can't be accomplished in some other way, when weighed against the clear downsides?

    That's an honest question, I'm probably missing lots.

    • Nasrudith 4 days ago

      Would we really be better served by media returning to being un-interactive and unresponsive? Where just getting on TV was something of note, instead of everyone being on the internet. Where there was a widespread, downright cultish obsession with celebrities. The "We interrupt this news live from Iraq for celebrity-getting-out-of-prison news" era.

      I think not. The gatekeepers of the old media pretty much died for a reason: they seriously sucked at their job. Open social media and everyone having a camera in their pocket is what allowed us to basically disprove UFO sightings and prove routine police abuse of power.

    • yodsanklai 5 days ago

      Billions of people use them daily (facebook, instagram, X, youtube, tiktok...). Surely we could live without them like we did not long ago, but there's so much interest at play here that I don't see how they could be banned. It's akin to shutting down internet.

quesomaster9000 4 days ago

The Kenyan moderators' PTSD reveals the fundamental paradox of content moderation: we've created an enterprise-grade trauma processing system that requires concentrated psychological harm to function, then act surprised when it causes trauma. The knee-jerk reaction of suggesting AI as the solution is, IMO, just wishful thinking - it's trying to technologically optimize away the inherent contradiction of bureaucratized thought control. The human cost isn't a bug that better process or technology can fix - it's the inevitable result of trying to impose pre-internet regulatory frameworks on post-internet human communication, frameworks that large segments of the population may simply be incompatible with.

  • gadflyinyoureye 4 days ago

    Any idea what our next steps are? It seems like we either stop the experiment of mass communication, try to figure out a less damaging knowledge-based filtering mechanism (presently executed by humans), or throw open the flood gates to all manner of trauma-inducing content and let the viewer beware.

    • noch 4 days ago

      > Any idea what our next steps are? [..] try to figure out a less damaging knowledge-based filtering mechanism [..]

      It should cost some amount of money to post anything online on any social media platform: pay to post a tweet, article, image, comment, message, reply.

      (Incidentally, crypto social networks have this by default simply due to constraints in how blockchains work.)

      • BriggyDwiggs42 4 days ago

        This is a great idea to prevent bots, but that’s not who posts the bad stuff this thread is talking about. Wherever you set the threshold, it will determine a point of wealth where someone can no longer afford to speak on these platforms, and that inevitably will prevent change, which tends to come from the people not well-served by the system as it is, i.e. poor people. Is that your goal?

        • noch 4 days ago

          > a point of wealth where someone can no longer afford to speak on these platforms, and that inevitably will prevent change, which tends to come from the people not well-served by the system as it is, i.e. poor people.

          "Change" in itself is not a virtue. What I think you want is good or beneficial change? That said, what evidence do you have that poor people specifically are catalysing positive change online?

          > This is a great idea to prevent bots, but that’s not who posts the bad stuff this thread is talking about.

          There is no difference between a bot and a human as far as a network is concerned. After all, bots are run by humans.

          The article specifically says that: "The images and videos including necrophilia, bestiality and self-harm caused some moderators to faint, vomit, scream and run away from their desks, the filings allege."

          > Is that your goal?

          Simply making it cost something to post online will mean that people who want to post spam can directly pay for the mental healthcare of moderators who remove their content.

          If it turns out that you can find a group of people so poor that they simultaneously have valuable things to say online yet can't afford to post, then you can start a non-profit or foundation to subsidize "poor people online". (Hilariously, the crypto-bros do this when they're trying to incentivize use of their products: they set aside funds to "sponsor" thousands of users to the tune of tens of millions of dollars a year in gas refunds, airdrops, rebates and so forth.)

          • BriggyDwiggs42 4 days ago

            > "Change" in itself is not a virtue. What I think you want is good or beneficial change? That said, what evidence do you have that poor people specifically are catalysing positive change online?

            We would probably disagree on what change we think is beneficial, but in terms of catalyzing the changes I find appealing, I see plenty of it myself. I'm not sure how I could dig up a study on something like this, but I'm operating on the assumption that more poor people would advocate the changes I'm interested in than rich, because the changes I want would largely be intended to benefit the former, potentially at the expense of the latter. I see this assumption largely confirmed in the world. That's why I find the prospect of making posting expensive threatening to society's capacity for beneficial change. The effect depends on what model you use to price social media use, how high you set the prices, how you regulate the revenue, etc, but I think the effect needs to be mitigated. In essence, my primary concern with this idea is that it may come from an antidemocratic impulse, not a will to protect moderators. If you don't possess that impulse, then I'm sorry to be accusing you of motives you don't possess, and I'll largely focus on the implementation details that would best protect the moderators while mitigating the suppression of discourse.

            >you can start a non-profit or foundation to subsidize "poor people online".

            Where are all the foundations helping provide moderator mental health treatment? This is a pretty widely reported issue; I'd expect to see wealthy benefactors trying to solve it, yet the problem remains unsolved. The issue, I think, is that there isn't enough money or awareness to go around to solve all niche financially-addressable problems. Issues have to have certain human-interest characteristics, then be carefully and effectively framed, to attract contributions from regular people. As such, I wouldn't want to artificially create a new problem, where poverty takes away basically the only meaningful voice a regular person has in the modern age, then expect somebody to come along and solve it with a charitable foundation. Again, if charity is this effective, then let's just start a foundation to provide pay and care to moderators. Would it attract contributions?

            >the crypto-bros do this when they're trying to incentivize use of their products

            The crypto-bros trying to incentivize use of their products have a financial incentive to do so. They're not motivated by the kindness of their own hearts. Where's the financial incentive to pay for poor people to post online?

            >There is no difference between a bot and a human as far as a network is concerned. After all, bots are run by humans.

            Most implementations of this policy would largely impact bot farms. If posts cost money, then there's a very big difference in the cost of a botnet and a normal account. Costs would be massively higher for a bot farm runner, and relatively insubstantial for a normal user. Such a policy would then most effectively suppress bots, and maybe the most extreme of spammers.

            What I don't understand, then, is the association between bots/spammers and the shock garbage harming moderators. From what I know, bots aren't typically trying to post abuse, but to scam or propagandize, since they're run by actors either looking for a financial return or to push an agenda. If the issue is spammers, then I'd question whether that's the cause of moderator harm; I'd figure as soon as a moderator sees a single gore post, the account would get nuked. We should expect then that the harm is proportionate to the number of accounts, not posts.

            If the issue is harmful accounts in large quantity, and easy account creation, then to be effective at reducing moderator harm, wouldn't you want to charge a large, one-time fee at account creation? If it costs ten dollars to make an account, bad actors would (theoretically) be very hesitant to get banned (even though in practice this seems inadequate to, e.g., suppress cheating in online games). I'd also be relatively fine with such a policy; nearly anyone could afford a single 5-10 usd fee for indefinite use, but repeat account creators would be suppressed.

            >Simply making it cost something to post online will mean that people who want to post spam can directly pay for the mental healthcare of moderators who remove their content.

            I don't think that adding a cost to the posts will end up paying for mental healthcare without careful regulation. The current poor treatment of moderators is a supply-demand issue, it's a relatively low-skill job and people are hungry, so you can treat them pretty bad and still have a sufficient workforce. They are also, if I'm correct, largely outsourced from places with worse labor protections. This gives the social media companies very little incentive to pay them more or treat them better.

            An approach that might help is something like this: Require companies to charge a very small set amount to make each individual post, such that a normal user may pay in the realm of 5 usd in a month of use, but a spammer or bot farm would have to spend vastly more. Furthermore, but very important, require that this additional revenue be spent directly on the pay or healthcare of the moderation team.
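
            To make the asymmetry concrete (the per-post rate is made up; only the ~5 usd/month figure comes from the proposal above):

              at $0.05 per post:
                normal user:       100 posts/month  ->      $5/month
                bot farm:    1,000,000 posts/month  -> $50,000/month

            The same fee a regular person barely notices becomes ruinous at spam volume, which is the whole point of a flat per-post price.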

            In reality, though, I'd be very worried that this secondary regulation wouldn't enter or make it through a legislature. I'm also concerned that the social media companies would be the ones setting the prices. If such a cost became the norm, I expect that these companies would implement the cost-to-post as a subscription to the platform rather than a per-post price. They would immediately begin to inflate their prices as every subscription-based company currently does to demonstrate growth to shareholders. Finally, they'd pocket the gains rather than paying more to the moderators, since they have absolutely zero incentive to do anything else. I think this would cause the antidemocratic outcomes I'm concerned with.

            My question for you, then, is whether you'd be interested in government regulation that implements a flat per-post or per-account-creation fee, not much more than 5usd monthly or 10usd on creation, not adjustable by the companies, and with the requirement that its revenue be spent on healthcare and pay for the moderation team?

            • noch 4 days ago

              Your reply is rather long so I'll only respond to 2 sections to avoid us speculating randomly without actually referring to data or running actual experiments.

              To clarify:

              > That's why I find the prospect of making posting expensive threatening to society's capacity for beneficial change.

              I suggested making it cost something. "Expensive" is a relative term and for some reason you unjustifiably assumed that I'm proposing "expensive", however defined. Incentive design is about the marginal cost of using a resource, as you later observed when you suggested $5.

              We often observe in real life (swimming pools, clubs, public toilets, hiking trails, camping grounds) that introducing a trivial marginal cost often deters bad actors and free-loaders[^0]. It's what's referred to in ideas such as "the tragedy of the commons".

              > An approach that might help is something like this: Require companies to charge a very small set amount to make each individual post

              Yes that's a marginal cost, which is what I suggested. So basically, we agree. The rest is implementation details that will depend on jurisdiction, companies, platforms and so forth.

              > I don't think that adding a cost to the posts will end up paying for mental healthcare without careful regulation.

              Without data or case studies to reference, I can't speculate about that and the other points that are your opinions, but thank you for thinking about the proposal and responding.

              > Where are all the foundations helping provide moderator mental health treatment? This is a pretty widely reported issue; I'd expect to see wealthy benefactors trying to solve it, yet the problem remains unsolved.

              I don't mean to sound rude but have you tried to solve the problem and start a foundation? Why is it some mysterious wealthy benefactor or other people who should solve it rather than you who cares about the problem? Why do you expect to see others and not yourself, solving it?

              Raising funds from wealthy people for causes is much easier than people imagine.

              ---

              [^0]: https://en.wikipedia.org/wiki/Free-rider_problem

      • gadflyinyoureye 4 days ago

        But this would necessarily block out the poorest voices. While one might say that it is fine to block neonazi rednecks, there are other poor people out there voicing valid claims.

      • smokel 4 days ago

        How will this help?

        • noch 4 days ago

          The article says:

          > The images and videos including necrophilia, bestiality and self-harm caused some moderators to faint, vomit, scream and run away from their desks, the filings allege.

          You might have heard the saying, common in policy and mechanism design: "Show me the incentives, and I'll show you the outcome."

          If you want to reduce spam, you increase the marginal cost of posting spam until it stops. In general if you introduce a small cost to any activity or service, the mere existence of the cost is often a sufficient disincentive to misuse.

          But, you can think through implications for yourself, no? You don't need me to explain how to think about cause and effect? You can, say, think about examples in real life, or in your own town or building, where a service is free to use compared to one that has a small fee attached, and look at who uses what and how.

      • bdangubic 4 days ago

        assumption here is the people posting vile shit are also broke&homeless?

  • makeitdouble 4 days ago

    Reducing the sheer volume is still a critically important step.

    You're right that fundamentally there's an imbalance between the absolute mass of people producing the garbage, and the few moderators dealing with it. But we also don't have an option to just cut everyone's internet.

    Designing platforms and business models that inherently produce less of the nasty stuff could help a lot. But even if/when we get there, we'll need automated mechanisms to ask people whether they really want to be jerks or absolutely need to send their dick pics, and to let people deal with sheer crime pics without having to look at them for more than two seconds.

  • sharken 4 days ago

    The proper knee-jerk reaction would be to ban this kind of work, but that would also mean disallowing content sharing on Facebook.

    That is why this type of work will not go away.

    And AI is just not good enough to do this, I fully agree.

    • quesomaster9000 4 days ago

      One of the unfortunate realities is that sometimes you need to be exposed to how grim reality can be, as the alternative is living in a delusional bubble. However, one of the underlying points I was getting at is that what counts as 'acceptable exposure' is often highly politicized, simply because the control being attempted is absolute and all-encompassing. To me, it comes across as overtly paternalistic, especially when you start looking at the contradictions of 'good bad' vs 'bad bad' and why it is the way it is. I find it disappointing that we aren't allowed to self-censor, and that even if we wanted to, the tools necessary for empowering people to make their own decisions at the point of consumption simply don't exist. Instead we employ filtering at the point of distribution, which shifts the burden of decisions onto the platform, and we enact laws that concentrate that power into an even more limited number of hands.

pluc 5 days ago

Worked at PornHub's parent company for a bit and the moderation floor had a noticeable depressive vibe. Huge turnover. Can't imagine what these people were subjected to.

  • HenryBemis 5 days ago

    You don't mention the year(s), but I recently listened to Jordan Peterson's podcast episode 503. One Woman’s War on P*rnhub | Laila Mickelwait.

    I will go ahead and assume that during the wild/carefree era of PornHub, when anyone could upload anything and everything, from what that lady said, the numbers of pedophilia videos, bestiality, etc. were rampant.

    • chimeracoder 5 days ago

      > You don't mention the year(s), but I recently listened to Jordan Peterson's podcast episode 503. One Woman’s War on P*rnhub | Laila Mickelwait.

      Laila Mickelwait is a director at Exodus Cry, formerly known as Morality in Media (yes, that's their original name). Exodus Cry/Morality in Media is an explicitly Christian organization that openly seeks to outlaw all forms of pornography, in addition to outlawing abortion and many gay rights including marriage. Their funding comes largely from right-wing Christian fundamentalist and fundamentalist-aligned groups.

      Aside from the fact that she has an axe to grind, both she (as an individual) and the organization she represents have a long history of misrepresenting facts or outright lying to support their agenda. They also intentionally and openly refer to all forms of sex work (from consensual pornography to stripping to sexual intercourse) as "trafficking", against the wishes of survivors of actual sex trafficking, who have extensively documented why Exodus Cry actually perpetuates harm against sex trafficking victims.

      > everything, from what that lady said, the numbers of pedophilia videos, bestiality, etc. were rampant.

      This was disproven long ago. Pornhub was actually quite good about proactively flagging and blocking CSAM and other objectionable content. Ironically (although not surprisingly, if you're familiar with the industry), Facebook was two to three orders of magnitude worse than Pornhub.

      But of course, Facebook is not targeted by Exodus Cry because their mission - as you can tell by their original name of "Morality in Media" - is to ban pornography on the Internet, and going after Facebook doesn't fit into that mission, even though Facebook is actually way worse for victims of CSAM and trafficking.

      • whacko_quacko 5 days ago

        Sure, but who did the proactive flagging back then? Probably moderators. Seems like a shitty job nonetheless

      • bigfatkitten 5 days ago

        As far as I can tell, Facebook is still terrible.

        I have a throwaway Facebook account. In the absence of any other information as to my interests, Facebook thinks I want to see flat earth conspiracy theories and CSAM.

        When I report the CSAM, I usually get a response that says "we've taken a look and found that this content doesn't go against our Community Standards."

    • pluc 5 days ago

      Yeah, it was during that time, before the great purge. It's not just sexual depravity, people used that site to host all kinds of videos that would get auto-flagged anywhere else (including, the least of it, full movies).

azinman2 5 days ago

> The moderators from Kenya and other African countries were tasked from 2019 to 2023 with checking posts emanating from Africa and in their own languages but were paid eight times less than their counterparts in the US, according to the claim documents

Why would pay in different countries be equivalent? Pretty sure FB doesn’t even pay the same to their engineers depending on where in the US they are, let alone which country. Cost of living dramatically differs.

  • abdullahkhalids 4 days ago

    Some products have factories in multiple countries. For example, Teslas are produced in both the US and China. The cars produced in the two countries are more or less identical in quality. But do you ever see the market price of the product differ depending on the country of manufacture?

    If the moderators in Kenya are providing the same quality labor as those from the US, why the difference in price of their labor?

    I have a friend who worked for FAANG and had to temporarily move from US to Canada due to visa issues, while continuing to work for the same team. They were paid less in Canada. There is no justification for this except that the company has price setting power and uses it to exploit the sellers of labor.

    • azinman2 4 days ago

      A million things factor into market dynamics. I don’t know why this is such a shocking or foreign concept. Why is a waitress in Alabama paid less than in San Francisco for the same work? It’s a silly question because the answers are both obvious and complex.

  • meiraleal 5 days ago

    > Why would pay in different countries be equivalent?

    Why 8 times less?

    • tyre 5 days ago

      GDP per capita in Kenya is a little less than $2k. In the United States, it’s a bit over $81k.

      Median US salary is about $59k. Gross national income (not an identical measure, but close) in Kenya is about $2.1k.

      1/8th is disproportionately in favor of the contractors, relative to market.
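
      Rough arithmetic with those figures:

        US : Kenya income ratio    ~$59k : ~$2.1k  =  ~28 : 1
        reported pay ratio                             8 : 1

      So 1/8th of a US-level salary is ~$7.4k, roughly 3.5x Kenya's per-capita income.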

    • renewiltord 5 days ago

      Because that’s the only reason why anyone would hire them. If you’ve ever worked with this kind of contract workforce they aren’t really worth it without massive cost-per-unit-work savings. I suppose one could argue it’s better that they be unemployed than work in this job but they always choose otherwise when given the choice.

    • apitman 5 days ago

      Because people chose to take the jobs, so presumably they thought it was fair compensation compared to alternatives. Unless there's evidence they were coerced in some way?

      Note that I'm equating all jobs here. No amount of compensation makes it worth seeing horrible things. They are separate variables.

      • fluoridation 5 days ago

        No amount? So you wouldn't accept a job to moderate Facebook for a million dollars a day? If you would, then surely you would also do it for a lower number. There is an equilibrium point.

        • apitman 5 days ago

          > So you wouldn't accept a job to moderate Facebook for a million dollars a day?

          I would hope not.

          • fluoridation 5 days ago

            Sorry, but I don't believe you. You could work for a month or two and retire. Or hell, just do it for one day and then return to your old job. That's a cool one mill in the bank.

            My point is, job shittiness can be priced in.

            • lalalali 4 days ago

              > work for a month or two and retire --> This is a dream of many, but there exists a set of people who really like their job and have no intention of retiring.

              > just do it for one day and then return to your old job. --> A cool mill in the bank, and dreadful images in your head. Perhaps Apitman feels he has enough cash and won't be happier with a million (more?).

              Also, your point is true, but it leaves out that Facebook has no interest in raising that number. I guess it was more a theoretical reflection than an argument about the concrete economics.

    • wyager 5 days ago

      Because prices are determined by supply and demand

      • meiraleal 5 days ago

          The same is true of poverty and the poor, who will work for any amount: the cheap labor the rich need to make their riches.

  • guerrilla 5 days ago

    > Why would pay in different countries be equivalent?

    Because it's exploitative otherwise. It's just exploiting the fact that they're imprisoned within borders.

    • gruez 5 days ago

      You haven't actually explained why it's bad, only slapped an evil sounding label on it. What's "exploitative" in this case and why is it morally wrong?

      >they're imprisoned within borders

      What's the implication of this then? That we remove all migration controls?

      • guerrilla 5 days ago

        Of course. Not all at once, but gradually over time like the EU has begun to do. If capital and goods are free to move, then so must labor be. The labor market is very far from free if you think about it.

    • mcntsh 5 days ago

      Interesting perspective. I wonder if you yourself take part in the exploitation by purchasing things made/grown in poor countries due to cost.

      • numpad0 5 days ago

        vegans die of malnutrition.

      • throwie21873 5 days ago

        There's no ethical consumption under capitalism.

        • fluoridation 5 days ago

          If that's the case then there can also be no ethical employment, either, both for employer and for employee. So that would seem to average out to neutrality.

          • mock-possum 4 days ago

            This is precisely the sort of situation where taking the average is an awful way to ignore injustice - the poor get much poorer and the rich get much richer but everything is ‘neutral’ on average.

            “There is no ethical X under capitalism” is not license to stick our heads in the sand and continue to consume without a second thought for those who are being exploited. It’s a reminder that things need to change, not only in all the little tiny drop-in-a-bucket ways that individuals can afford to contribute.

            • guerrilla 4 days ago

              Exactly. It means that we must continue to act ethically within the system that is the way it is now, which we must accept, while at the same time doing our best to change that system for the better. It's a "why not both" situation.

            • fluoridation 3 days ago

              >This is precisely the sort of situation where taking the average is an awful way to ignore injustice - the poor get much poorer and the rich get much richer but everything is ‘neutral’ on average.

              That has nothing to do with the ethics of capitalism, though. The poor becoming poorer and the rich becoming richer is not a foregone conclusion of a capitalist society, nor is it guaranteed not to happen in a non-capitalist society.

    • MacsHeadroom 5 days ago

      Paying local market rates is not exploitative.

      • guerrilla 5 days ago

        Artificially creating local market rates by trapping people is.

        • lvzw 5 days ago

          In what sense were these local rates "created artificially"? Are you suggesting that these people are being forced to work against their will?

          • guerrilla 4 days ago

            In the sense that I named twice above. ;)

          • mock-possum 4 days ago

            Nah bro you’re misunderstanding me, because if the worker said "no" then the answer obviously is "no,”

            But the thing is she's not gonna say "no", she would never say "no" because of the implication.

    • jnwatson 5 days ago

      It is also exploiting the fact that humans need food and shelter to live and money is used to acquire those things.

      • guerrilla 5 days ago

        That's only exploitation if you combine it with the fact of the enclosure of the commons and that all land and productive equipment on Earth is private or state property and that it's virtually impossible to just go farm or hunt for yourself without being fucked with anymore, let alone do anything more advanced without being shut down violently.

        • gruez 5 days ago

          >the enclosure of the commons and that all land and productive equipment on Earth is private or state property and that it's virtually impossible to just go farm or hunt for yourself without being fucked with anymore, let alone do anything more advanced without being shut down violently.

          How would land allocation work without "enclosure of the commons"? Does it just become a free-for-all? What happens if you want to use the land for grazing but someone else wants it for growing crops? "enclosure of the commons" conveniently solves all these issues by giving exclusive control to one person.

          • guerrilla 5 days ago

            Elinor Ostrom covered this extensively in her Nobel Prize-winning work if you are genuinely interested. Enclosure of the commons is not the only solution to the problems.

            • gruez 5 days ago

              [flagged]

        • apitman 5 days ago

          That's actually an interesting question. I would love to see some data on whether it really is impossible for the average person to live off the land if they wanted to.

          An adjacent question is whether there are too many people on the planet for that to be an option anymore even if it were legal.

          • fluoridation 5 days ago

            >An adjacent question is whether there are too many people on the planet for that to be an option anymore even if it were legal.

            Do you mean for everyone to be hunter-gatherers? Yes, that would be impossible. If you mean for a smaller number then it depends on the number.

            • apitman 5 days ago

              Yeah I think it would be interesting to know how far over the line we are.

              • fluoridation 5 days ago

                Probably way, way over the line. Population sizes exploded after the agricultural revolution. I wouldn't be surprised if the maximum is like 0.1-1% of the current population. If we're talking about strictly eating what's available without any cultivation at all, nature is really inefficient at providing for us.

oefrha 5 days ago

They should probably hire more part time people working one hour a day?

Btw, it’s probably a different team handling copyright claims, but my run-in with Meta’s moderation gives me the impression that they’re horrifically understaffed. I was helping a Chinese content creator friend take down Instagram, YouTube and TikTok accounts that were re-uploading her content and/or impersonating her (she doesn’t have any presence on these platforms and doesn’t intend to). I reported to TikTok twice: once it was handled within a few hours (I was impressed), once within three days. I reported to YouTube once and it was handled five or six days later. No further action was needed from me after submitting the initial form in either case. Instagram was something else entirely; they use Facebook’s reporting system, and the reporting form was the worst: it asked for very little information upfront but kept sending me emails afterwards asking for more, then eventually went radio silent. I sent follow-ups asking about progress; again, radio silence. The impersonating account with outright stolen content is still up to this day.

throwaway48476 5 days ago

When people are protected from the horrors of the world they tend to develop luxury beliefs which leads them to create more suffering in the world.

  • kelseyfrog 5 days ago

    Conversely, those who are subjected to harsh conditions often develop a cynical view of humanity, one lacking empathy, which also perpetuates the same harsh conditions. It's almost like protection and subjection aren't the salient dimensions, but rather there is some other perspective that better explains the phenomenon.

  • s1artibartfast 5 days ago

    I tend to agree with growth through realism, but people often have the means and ability to protect themselves from these horrors. I'm not sure you can systemically prevent this without resorting to Big Brother shoving propaganda in front of people and forcing them to consume it.

    • throwaway48476 4 days ago

      I don't think it needs to be forced, just don't censor so much.

      • s1artibartfast 4 days ago

        Isn't that forcing? Who decides how much censorship people can voluntarily opt into?

        If given control, I think many/most people would opt into a significant amount of censorship.

  • kdmtctl 4 days ago

    Just scrolled a lot to find this. I do believe that moderators in a not-so-safe country have seen a lot in their lives. That should make them less vulnerable to this kind of exposure, and yet it looks like it does not.

    • throwaway48476 4 days ago

      Seeing too much does cause PTSD. All I'm arguing is that some people live in a fantasy world where bad things don't happen, so they end up voting for ridiculous things.

      • kdmtctl 4 days ago

        Isn't this a reason for Meta to outsource to countries where people are somewhat immune to first-world problems? Aside from corruption and workforce neglect?

        • throwaway48476 3 days ago

          Meta outsources because it's cheaper.

          • kdmtctl 3 days ago

            Exactly. Everything I mentioned is decreasing the price.

para_parolu 5 days ago

One of the few fields where AI is very welcome.

  • hinkley 5 days ago

    I’m wondering if, like peeking out from behind a blanket at horror movies, getting a moderately blurred copy of images would reduce the emotional punch of highly inappropriate pictures. Or just a scaled-down tiny one.

    If it’s already bad when blurred or viewed as a thumbnail, don’t click through to the real thing.
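
    A minimal sketch of what I mean, using Pillow (the blur radius and thumbnail size are guesses at what blunts the detail while leaving "is this obviously bad?" answerable):

      from PIL import Image, ImageFilter  # assumes Pillow is installed

      def soften_for_review(path: str, radius: int = 12) -> Image.Image:
          # Shrink first, then blur, so a moderator can triage without
          # absorbing the full-resolution image.
          img = Image.open(path)
          img.thumbnail((256, 256))  # resizes in place, preserving aspect ratio
          return img.filter(ImageFilter.GaussianBlur(radius))

    The reviewer would only click through to the original when the softened copy isn’t enough to make the call.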

    • bigfatkitten 5 days ago

      This is more or less how police do CSAM classification now. They start with thumbnails, and that's usually enough to determine whether the image is a photograph or an illustration, involves penetration, sadism etc without having to be confronted with the full image.

    • EasyMark 5 days ago

      I'd be fine with that as long as it was something I could turn off and on at will

  • sunaookami 5 days ago

    No, this just leads to more censorship without any option to appeal.

    • henry2023 5 days ago

      We’re talking about Facebook here. You shouldn’t have the assumption that the platform should be “uncensored” when it clearly is not.

      Furthermore, I'd rather have the picture of my aunt's vacation taken down by an AI mistake than hundreds of people getting PTSD because they have to manually review, on an hourly basis, whether some decapitation was real or illustrated.

    • slothtrop 5 days ago

      > without any option to appeal.

      Why would that be?

      Currently content is flagged and moderators decide whether to take it down. Using AI, it's easy to conceive of a process where some uploaded content is pre-flagged and requires an appeal (otherwise it's the same as before, with a pair of human eyes automatically looking at uploaded material).

      Uploaders trying to publish rule-breaking content would not bother with an appeal that would reject them anyway.
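
      A minimal sketch of that split, with made-up names and a made-up threshold: the classifier publishes or blocks up front, and only items the uploader actively appeals ever reach a human reviewer.

          import random
          from dataclasses import dataclass

          @dataclass
          class Upload:
              id: str

          BLOCK_THRESHOLD = 0.98  # assumed; a real system tunes this per policy
          appeal_queue = []       # the only place a human moderator looks

          def classifier_score(upload: Upload) -> float:
              # Stand-in for a real model returning P(content violates policy).
              return random.random()

          def triage(upload: Upload) -> str:
              if classifier_score(upload) >= BLOCK_THRESHOLD:
                  return "blocked_pending_appeal"
              return "published"  # no human eyes involved

          def appeal(upload: Upload) -> None:
              # Only appealed uploads land in front of a person.
              appeal_queue.append(upload)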

      • Eisenstein 5 days ago

        Because edge cases exist, and it isn't worth it for a company to hire enough staff to deal with them. One user with a problem, even a problem highly impactful to their life, just doesn't matter when the user is effectively the product and not the customer. Once the AI works well enough, the staff is gone, and the cases where someone's business or reputation gets destroyed because there is no way to appeal a wrong decision by a machine get ignored. And of course 'the computer won't let me' or 'I didn't make that decision' is a great way for no one to ever have to feel responsible for any harm caused by such a system.

        • sunaookami 5 days ago

          Add to this that social media companies in the EU tend to just delete stuff because of draconian laws under which content must be deleted within 24 hours or they face a fine, so companies would rather not risk it. Moderators also only have a few seconds to decide whether something should be deleted.

        • slothtrop 5 days ago

          > because there are no ways to appeal

          I already addressed this and you're talking over it. Why are you assuming that AI == no appeal and zero staff? That makes zero sense; one has nothing to do with the other. The human element comes in for the appeal process.

          • Eisenstein 5 days ago

            > I already addressed this and you're talking over it.

            You didn't address it, you handwaved it.

            > Why are you making the assumption that AI == no appeal and zero staff?

            I explicitly stated the reason -- it is cheaper and it will work for the majority of instances while the edge cases won't result in losing a large enough user base that it would matter to them.

            I am not making assumptions. Google notoriously operates in this fashion -- for instance unless you are a very popular creator, youtube functions like that.

            > That makes zero sense, one has nothing to do with the other.

            "Cheaper, mostly works, and the losses from people leaving are smaller than the money saved by removing support staff" makes perfect sense, and the two things are as closely related as identical twins.

            > The human element comes in for appeal process.

            What does a company have to gain by supplying the staff needed to listen to the appeals when the AI does a decent enough job 98% of the time? Corporations don't exist to do the right thing or to make people happy, they are extracting value and giving it to their shareholders. The shareholders don't care about anything else, and the way I described returns more money to them than yours.

            • slothtrop 5 days ago

              > I am not making assumptions. Google notoriously operates in this fashion -- for instance unless you are a very popular creator, youtube functions like that.

              Their copyright takedown system has been around for many years and wasn't contingent on AI. It's a "take down now, ask questions later" policy to please the RIAA and other lobby groups. Illegal/abusive material doesn't profit big business; their interest is in not having it around.

              You deliberately conflated moderation and the appeal process from the outset. You can have 100% AI handling of suspect uploads (where the volume is much larger) alongside a smaller staff, assisted by AI, handling appeals (where the volume is smaller).

              Frankly if your hypothetical upload is still rejected after that, it 99% likely violates their terms of use, in which case there's nothing to say.

              > it is cheaper

              A lot of things are "cheaper" in one dimension irrespective of AI, doesn't mean they'll be employed if customers dislike it.

              > the money saved by removing support staff makes perfect sense and the two things are related to each other like identical twins are related to each other.

              It does not make sense to have zero staff as part of managing an appeal process (which exists precisely to deal with edge cases and the fallibility of AI), and it does not make sense to have no appeal process.

              You're jumping to conclusions. That is the entire point of my response.

              > What does a company have to gain by supplying the staff needed to listen to the appeals when the AI does a decent enough job 98% of the time?

              AI isn't there yet. Notwithstanding, if it did a good job 98% of the time, then who cares? No one.

              • Eisenstein 4 days ago

                > Their copyright takedown system has been around for many years and wasn't contingent on AI.

                So what? It could rely on tea leaves and leprechauns; it illustrates that whatever automation works will be relied on at the expense of any human staff or process.

                > it 99% likely violates their terms of use, in which case there's nothing to say.

                Isn't that 1% exactly the edge cases I am saying are important and won't get addressed?

                > doesn't mean they'll be employed if customers dislike it.

                The customers on ad supported internet platforms are the advertisers and they are fine with it.

                > You're jumping to conclusions. That is the entire point of my response.

                Conclusions based on solid reason and evidenced by past events.

                > AI isn't there yet, notwithstanding, if they did a good job 98% of the time then who cares? No one.

                Until you realize that 2% of 2.89 billion monthly users is 57,800,000.

    • SoftTalker 5 days ago

      Nobody has a right to be published.

      • sunaookami 5 days ago

        Then what is freedom of speech if every platform deletes your content? Does it even exist? Facebook and co. are so ubiquitous that we shouldn't just apply normal laws to them. They are bigger than governments.

        • granzymes 5 days ago

          Freedom of speech means that the government can't punish you for your speech. It has absolutely nothing to do with your speech being widely shared, listened to, or even acknowledged. No one has the right to an audience.

        • SoftTalker 5 days ago

          The government is not obligated to publish your speech. They just can't punish you for it (unless you cross a few fairly well-defined lines).

        • chollida1 5 days ago

          > Then what is freedom of speech if every platform deletes your content?

          Freedom of speech is between you and the government and not you and a private company.

          As the saying goes, if I don't like your speech I can tell you to leave my home; that's not censorship, that's how freedom works.

          If I don't like your speech, I can tell you to leave my property, physical or virtual.

        • henry2023 5 days ago

          If that were the case, then Facebook shouldn't be liable for moderating any content. Not even CSAM.

          Should each government, and in some cases each province and municipality, have teams to regulate content from their region?

        • mrguyorama 3 days ago

          This has always been the case. If the monks didn't want to copy your work, it didn't get copied by the monks. If the owners of a printing press didn't want to print your work, you didn't get to use the printing press. If Random House didn't want to publish your manifesto, you do not get to compel them to publish your manifesto.

          The first amendment is multiple freedoms. Your freedom of speech means the government shouldn't stop you from using your own property to say something. You are free to print out leaflets and distribute them from your porch. If nobody wants to read your pamphlets, that's too damn bad; welcome to the free market of ideas, buddy.

          The first amendment also protects Meta's right of free association. Forcing private companies to platform any content submitted to them would outright trample that right. Meta has a right not to publish your work, so that it can say "we do not agree with this work and will not use our resources to expand its reach".

          We have, in certain cases, developed systems that treat certain infrastructure as a regulated pipe compelled to carry everything, as with classic telephone infrastructure. The reason is that it doesn't make much sense to require every company to put up its own physical wires; that would be dumb and wasteful. Social networks have no natural monopoly and should not be treated as common carriers.

    • soulofmischief 5 days ago

      Not if we retain control and each deploy our own moderation individually, relying on trust networks to pre-filter. That probably won't be allowed to happen, but in a rational, non-authoritarian world, this is something that machine learning can help with.

    • krior 5 days ago

      Curious, do you have a better solution?

      • throw__away7391 5 days ago

        The solution to most social media problems in general is:

        `select * from posts where author_id in @follow_ids order by date desc`

        At least 90% of the ills of social media are caused by using algorithms to prioritize content and determine what you're shown. Before these were introduced, you just wouldn't see these types of things unless you chose to follow someone who chose to post it, and you didn't have people deliberately creating so much garbage trying to game "engagement".

        • mulmen 5 days ago

          I'd love a chronological feed but if you gave me a choice I'd get rid of lists in SQL first.

          > select * from posts where author_id in @follow_ids order by date desc

          SELECT post FROM posts JOIN follows ON posts.author_id = follows.author_id WHERE follows.user_id = $session.user_id ORDER BY posts.date DESC;

    • para_parolu 4 days ago

      How is it different from some random guy in Kenya? It's not like you will ask him to double-check results.

    • jsemrau 5 days ago

      That's a workflow problem.

  • 29athrowaway 5 days ago

    And then the problem is moved to the team curating data sets.

  • gorbachev 4 days ago

    Until the AI moderator flags your home videos as child porn, and you lose your kids.

    • para_parolu 4 days ago

      Not sure why anyone should have access to my home videos, AI or not.

  • antegamisou 5 days ago

    You know what is going to end up happening, though: something akin to Tesla's "autonomous" Optimus robots.

  • itake 5 days ago

    Maybe. Apple got a lot of backlash for using AI to detect CSAM.

    • Cyph0n 5 days ago

      Wasn’t the backlash due to the fact that they were running detection on device against your private library?

      • threeseed 5 days ago

        Yes. As opposed to running it on their servers like they do now.

        And it was only for iCloud synced photos.

        • Zak 5 days ago

          There's a huge gap between "we will scan our servers for illegal content" and "your device will scan your photos for illegal content" no matter the context. The latter makes the user's device disloyal to its owner.

          • FabHK 5 days ago

            The choice was between "we will upload your pictures unencrypted and do with them as we like, including scanning them for CSAM" vs. "we will upload your pictures encrypted and keep them encrypted, but will make sure beforehand, on your device only, that there's no known CSAM among them".

            • ImPostingOnHN 5 days ago

              > we will upload your pictures unencrypted and do with them as we like

              Curious, I did not realize Apple sent themselves a copy of all my data, even if I have no cloud account and don't share or upload anything. Is that true?

              • FabHK 2 days ago

                No. The entire discussion only applies to images being uploaded (or about to be uploaded) to iCloud. By default in iOS all pictures are saved locally only (so the whole CSAM scanning discussion would not have applied anyway), but that tends to fill up a phone pretty quickly.

                With the (optional) iCloud, you can (optionally) activate iCloud Photos to have a photo library backed up in the cloud and shared among all your devices (and, in particular, with only thumbnails and metadata stored locally and the full resolution pictures only downloaded on demand).

                These are always encrypted, with either the keys held by Apple ("Standard Data Protection"), so that they're recoverable when the user loses their phone or password, or E2E ("Advanced Data Protection") if the user so chooses, thus irrecoverable.

                It seems to me that in the latter case images are not scanned at all (neither on device nor in the cloud), and it's unclear to me whether they're scanned in the former case.

                https://support.apple.com/en-us/102651

              • itake 5 days ago

                Apple doesn't do this. But other service providers do (Dropbox, Google, etc).

                Other service providers can scan for CSAM from the cloud, but Apple cannot. So Apple might be one of the largest CSAM hosts in the world, due to this 'feature'.

                • ImPostingOnHN 4 days ago

                  > Other service providers can scan for CSAM from the cloud

                  I thought the topic was on-device scanning? The great-grandparent claim seemed to be that Apple had to choose between automatically uploading photos encrypted and not scanning them, vs. automatically uploading photos unencrypted and scanning them. The option for "just don't upload stuff at all, and don't scan it either" was conspicuously absent from the list of choices.

                  Why, do other phone manufacturers do this auto-upload-and-scan without asking?

                  • Zak 3 days ago

                    I think FabHK is saying that Apple planned to offer iCloud users the choice of unencrypted storage with server-side scanning, or encrypted storage with client-side scanning. It was only meant to be for things uploaded to iCloud, but deploying such technologies for any reason creates a risk of expansion.

                    Apple itself has other options, of course. It could offer encrypted or unencrypted storage without any kind of scanning, but has made the choice that it wants to actively check for CSAM in media stored on its servers.

          • aaomidi 5 days ago

            And introduces avenues for state actors to force the scanning of other material.

            This was also during a time where Apple hadn’t pushed out e2ee for iCloud, so it didn’t even make sense.

            • shadowgovt 5 days ago

              This ship has pretty much sailed.

              If you are storing your data in a large commercial vendor, assume a state actor is scanning it.

              • kotaKat 5 days ago

                I'm shocked at the number of people I've seen on my local news getting arrested lately for it, and it all comes from the same starting tip:

                "$service_provider sent a tip to NCMEC" or "uploaded a known-to-NCMEC hash", ranging from GMail, Google Drive, iCloud, and a few others.

                https://www.missingkids.org/cybertiplinedata

                "In 2023, ESPs submitted 54.8 million images to the CyberTipline of which 22.4 million (41%) were unique. Of the 49.5 million videos reported by ESPs, 11.2 million (23%) were unique."

                • shadowgovt 5 days ago

                  And, indeed, this is why we should not expect the process to stop. Nobody is rallying behind the rights of child abusers and those who traffic in child abuse material. Arguably, nor should they. The slippery slope argument only applies if the slope is slippery.

                  This is analogous to the police's use of genealogy and DNA data to narrow searches for murderers, whom they then collected evidence on by other means. There is risk there, but (at least in the US) you aren't going to find a lot of supporters of the anonymity of serial killers and child abusers.

                  There are counter-arguments to be made. Germany is skittish about mass data collection and analysis because of their perception that it enabled the Nazi war machine to micro-target their victims. The US has no such cultural narrative.

                  • tzs 5 days ago

                    > And, indeed, this is why we should not expect the process to stop. Nobody is rallying behind the rights of child abusers and those who traffic in child abuse material. Arguably, nor should they.

                    I wouldn't be so sure.

                    When Apple was going to introduce on-device scanning they actually proposed to do it in two places.

                    • When you uploaded images to your iCloud account they proposed scanning them on your device first. This is the one that got by far the most attention.

                    • The second was to scan incoming messages on phones that had parental controls set up. The way that would have worked is:

                    1. if it detects sexual images it would block the message, alert the child that the message contains material that the parents think might be harmful, and ask the child if they still want to see it. If the child says no that is the end of the matter.

                    2. if the child say they do want to see it and the child is at least 13 years old, the message is unblocked and that is the end of the matter.

                    3. if the child says they do want to see it and the child is under 13 they are again reminded that their parents are concerned about the message, again asked if they want to view it, and told that if they view it their parents will be told. If the child says no that is the end of the matter.

                    4. If the child says yes the message is unblocked and the parents are notified.

                    This second one didn't get a lot of attention, probably because there isn't really much to object to. But I did see one objection from a fairly well known internet rights group. They objected to #4 on the grounds that the person sending the sex pictures to your under-13 year old child sent the message to the child, so it violates the sender's privacy for the parents to be notified.

                    • shadowgovt 4 days ago

                      If it's the EFF, I think they went out on a limb on this one that not a lot of American parents would agree with. "People have the right to communicate privately without backdoors or censorship, including when those people are minors" (emphasis mine) is a controversial position. Arguably, not having that level of privacy is the curtailment on children's rights.

                  • mrguyorama 3 days ago

                    >The US has no such cultural narrative.

                      The cultural narrative actually is extremely popular among a roughly 10% subset of the population that is essentially fundamentalist Christian and terrified of the government branding them with "the mark of the beast".

                      The problem is that their existence poisons the discussion, because these people are absurd loons who also blame the gays for hurricanes and think the Democrats eat babies.

          • itake 5 days ago

            Apple is already categorizing content on your device. Maybe they don't report what categories you have. But I know if I search for "cat" it will show me pictures of cats on my phone.

            • mock-possum 4 days ago

              Yeah it’s on by default and I’m not even sure how to turn off the visual lookup feature :/

              Yet another reason why my next phone will be an android.

    • sneak 5 days ago

      No, they had backlash against using AI on devices they don’t own to report said devices to police for having illegal files on them. There was no technical measure to ensure that the devices being searched were only being searched for CSAM, as the system can be used to search for any type of images chosen by Apple or the state. (Also, with the advent of GenAI, CSAM has been redefined to include generated imagery that does not contain any of {children, sex, abuse}.)

      That’s a very very different issue.

      I support big tech using AI models running on their own servers to detect CSAM on their own servers.

      I do not support big tech searching devices they do not own in violation of the wishes of the owners of those devices, simply because the police would prefer it that way.

      It is especially telling that iCloud Photos is not end to end encrypted (and uploads plaintext file content hashes even when optional e2ee is enabled) so Apple can and does scan 99.99%+ of the photos on everyone’s iPhones serverside already.

      • skissane 5 days ago

        > Also, with the advent of GenAI, CSAM has been redefined to include generated imagery that does not contain any of {children, sex, abuse}

        It hasn’t been redefined. The legal definition of it in the UK, Canada, Australia, New Zealand has included computer generated imagery since at least the 1990s. The US Congress did the same thing in 1996, but the US Supreme Court ruled in the 2002 case of Ashcroft v Free Speech Coalition that it violated the First Amendment. [0] This predates GenAI because even in the 1990s people saw where CGI was going and could foresee this kind of thing would one day be possible.

        Added to that: a lot of people misunderstand what that 2002 case held. SCOTUS case law establishes two distinct exceptions to the First Amendment – child pornography and obscenity. The first is easier to prosecute and more commonly prosecuted; the 2002 case held that "virtual child pornography" (made without the use of any actual children) does not fall into the scope of the child pornography exception – but it still falls into the scope of the obscenity exception. There is in fact a distinct federal crime for obscenity involving children as opposed to adults, 18 USC 1466A ("Obscene visual representations of the sexual abuse of children") [1] enacted in 2003 in response to this decision. Child obscenity is less commonly prosecuted, but in 2021 a Texas man was sentenced to 40 years in prison over it [2] – that wasn't for GenAI, that was for drawings and text, but if drawings fall into the legal category, obviously GenAI images will too. So actually it turns out that even in the US, GenAI materials can legally count as CSAM, if we define CSAM to include both child pornography and child obscenity – and this has been true since at least 2003, long before the GenAI era.

        [0] https://en.wikipedia.org/wiki/Ashcroft_v._Free_Speech_Coalit...

        [1] https://www.law.cornell.edu/uscode/text/18/1466A

        [2] https://www.justice.gov/opa/pr/texas-man-sentenced-40-years-...

        • blackeyeblitzar 5 days ago

          Thanks for the information. However I am unconvinced that SCOTUS got this right. I don’t think there should be a free speech exception for obscenity. If no other crime (like against a real child) is committed in creating the content, what makes it different from any other speech?

          • skissane 5 days ago

            > However I am unconvinced that SCOTUS got this right. I don’t think there should be a free speech exception for obscenity

            If you look at the question from an originalist viewpoint: did the legislators who drafted the First Amendment, and voted to propose and ratify it, understand it as an exceptionless absolute or as subject to reasonable exceptions? I think if you look at the writings of those legislators, the debates and speeches made in the process of its proposal and ratification, etc, it is clear that they saw it as subject to reasonable exceptions – and I think it is also clear that they saw obscenity as one of those reasonable exceptions, even though they no doubt would have disagreed about its precise scope. So, from an originalist viewpoint, having some kind of obscenity exception seems very constitutionally justifiable, although we can still debate how to draw it.

            In fact, I think from an originalist viewpoint the obscenity exception is on firmer ground than the child pornography exception, since the former is arguably as old as the First Amendment itself is, the latter only goes back to the 1982 case of New York v. Ferber. In fact, the child pornography exception, as a distinct exception, only exists because SCOTUS jurisprudence had narrowed the obscenity exception to the point that it was getting in the way of prosecuting child pornography as obscene – and rather than taking that as evidence that maybe they'd narrowed it a bit too far, SCOTUS decided to erect a separate exception instead. But, conceivably, SCOTUS in 1982 could have decided to draw the obscenity exception a bit more broadly, and a distinct child pornography exception would never have existed.

            If one prefers living constitutionalism, the question is: has American society "evolved" to the point that the First Amendment's historical obscenity exception ought to be jettisoned entirely, as opposed to merely read narrowly? Does the contemporary United States have a moral consensus that individuals should have the constitutional right to produce graphic depictions of child sexual abuse, for no purpose other than their own sexual arousal, provided that no identifiable children are harmed in its production? I take it that is your personal moral view, but I doubt the majority of American citizens presently agree, which suggests that completely removing the obscenity exception, even in the case of virtual CSAM material, cannot currently be justified on living constitutionalist grounds either.

      • itake 5 days ago

        My understanding was the false-positive (FP) risk. The hashes were computed on device, but the device would self-report to LEO if it detected a match.

        People crafted benign images whose hashes collided with those of real flagged images. So apps like WhatsApp that auto-save images to photo albums could cause people a big headache if a contact shared a legal image engineered to collide.
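
        To make the FP mechanics concrete, here is a toy "average hash" with distance-based matching. (NeuralHash is a learned embedding, not this, but the failure mode is the same: matching is by closeness, not equality, so collisions can be engineered.)

            from PIL import Image

            def average_hash(path: str, size: int = 8) -> int:
                # Toy perceptual hash: grayscale, downscale, threshold at the mean.
                img = Image.open(path).convert("L").resize((size, size))
                pixels = list(img.getdata())
                mean = sum(pixels) / len(pixels)
                bits = 0
                for p in pixels:
                    bits = (bits << 1) | (p > mean)
                return bits

            def hamming(a: int, b: int) -> int:
                return bin(a ^ b).count("1")

            MATCH_THRESHOLD = 5  # assumed; real systems tune this

            def is_match(h1: int, h2: int) -> bool:
                # Distance-based matching is what lets an attacker craft a benign
                # image whose hash lands within the threshold of a flagged one.
                return hamming(h1, h2) <= MATCH_THRESHOLD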

      • dialup_sounds 5 days ago

        Weird take. The point of on-device scanning is to enable E2EE while still mitigating CSAM.

        • sneak 5 days ago

          No, the point of on-device scanning is to enable authoritarian government overreach via a backdoor while still being able to add “end to end encryption” to a list of product features for marketing purposes.

          If Apple isn’t free to publish e2ee software for mass privacy without the government demanding they backdoor it for cops on threat of retaliation, then we don’t have first amendment rights in the USA.

          • dialup_sounds 4 days ago

            I don't think the first amendment obligates companies to let you share kiddie porn via their services.

            • sneak 3 days ago

              You misunderstand me. The issue is that Apple is theoretically being retaliated against, by the state, if they were to publish non-backdoored e2ee software.

              Apple does indeed in theory have a right to release whatever iOS features they like. In practice, they do not.

              Everyone kind of tacitly acknowledged this, when it was generally agreed upon that Apple was doing the on-device scanning thing "so they can deploy e2ee". The quiet part is that if they didn't do the on-device scanning and released e2ee software without this backdoor (which would then thwart wiretaps), the FBI et al would make problems for them.

              • dialup_sounds 2 days ago

                Why would Apple want to add E2EE to iCloud Photos without CSAM detection?

                • sneak 12 hours ago

                  The same reason they made iMessage e2ee, which happened many years before CSAM detection was even a thing.

                  User privacy. Almost nobody trades in CSAM, but everyone deserves privacy.

                  Honestly, this isn't about CSAM at all. It's about government surveillance. If strong e2e encryption, with no backdoor, becomes the default on hundreds of millions of citizens' devices, the feds will be upset with Apple.

                  This is why iCloud Backup (which is a backdoor in iMessage e2ee) is not e2ee by default and why Apple (and the state by extension) can read all of the iMessages.

                  • dialup_sounds 11 hours ago

                    I didn't ask why they would want E2EE. I asked why they would want E2EE without CSAM detection when they literally developed a method to have both. It's entirely reasonable to want privacy for your users AND not want CSAM on your servers.

                    > Honestly, this isn't about CSAM at all.

                    It literally is the only thing the technology is any good for.

    • PittleyDunkin 5 days ago

      I don't think the problem there is the AI aspect

      • itake 5 days ago

        My understanding was the FP risk. Everything was on device. People designed images that were FPs of real images.

        • PittleyDunkin 5 days ago

          FP? Let us know what this means when you have a chance. Federal Prosecution? Fake Porn? Fictional Pictures?

          • thinkmassive 5 days ago

            My guess is False Positive. Weird abbreviation to use though.

    • pluc 5 days ago

      Probably because you need to feed it child porn so it can detect it...

      • hirvi74 5 days ago

        Already happened/happening. I have an ex-coworker who left my current employer for my state's version of the FBI. Long story short, the government has a massive database to cross-check against. Oftentimes they would use automated processes to filter through suspicious data collected during arrests.

        If the automated process flags something as a potential hit, then they, the humans, review those images to verify. Every image/video confirmed as a hit is also inserted into the larger dataset. I can't remember if the Feds have their own DB (why wouldn't they?), but the National Center for Missing and Exploited Children runs a database that I believe government agencies use too. Not to mention, companies like Dropbox, Google, etc. all check hashes against the database(s) as well.

    • llm_trw 5 days ago

      Apple got a lot of backlash for using AI to scan every photo you ever took and sending it back to the mothership for more training.

jkestner 5 days ago

Borrowing the thought from Ed Zitron, but when you think about it, most of us are exposing ourselves to low-grade trauma when we step onto the internet now.

  • rnewme 5 days ago

    That's the risk of being in a society in general; it's just that we interact with people outside way less now. If one doesn't like it, they can always be a hermit.

    • jkestner 5 days ago

      Not just that, but that algorithms are driving us to the extremes. I used to think it was just that humans were not meant to have this many social connections, but it's more about how these connections are mediated, and by whom.

      Worth reading Zitron's essay if you haven't already. It sounds obvious, but the simple cataloging of all the indignities we take for granted builds up to a bigger condemnation than just Big Tech. https://www.wheresyoured.at/never-forgive-them/

      • rnewme 4 days ago

        Definitely. It's a perfect mix of factors to enable the dark sides of our personas. I believe everyone has a certain level of near-sociopathic, perverse curiosity and a certain need to push the limits if there are no consequences for such behavior. Algorithms can only affect so much. But gore sites, efukt, and the countless WhatsApp/Facebook/Signal/whatever groups that teens post vile things in are mostly due to childish morbid curiosity, not everyone being a literal psycho.

        I'll take a look at the essay, thanks.

    • kelseyfrog 5 days ago

      Is there any way to look at this that doesn't resort to black or white thinking? That's a rather extreme view in itself that could use some nuance and moderation.

      • rnewme 4 days ago

        I'm not very good with words, so I can only hope the reader understands that things are not black and white but a spectrum that depends on countless factors: cultural, societal and otherwise.

  • sellmesoap 5 days ago

    What's more, popular TV shows regularly have scenes that could cause trauma; the media has been ramping up the intensity of content for years. I think it's simply seeking more word of mouth: "did you see GoT last night? Oh my gosh, so-and-so did such-and-such to so-and-so!"

    • aquariusDue 5 days ago

      It really became apparent to me when I watched the FX remake of Shogun, the 1980 version seems downright silly and carefree by comparison.

omoikane 4 days ago

Possibly related, here is an article from 2023-06-29:

https://apnews.com/article/kenya-facebook-content-moderation... - Facebook content moderators in Kenya call the work 'torture.' Their lawsuit may ripple worldwide

I found this one while looking for salary information on these Kenyan moderators. This article mentioned that they are being paid $429 per month.

shadowgovt 5 days ago

Good! I hope they get every penny owed. It's an awful job, and outsourcing it to jurisdictions without protections was naked harm maximization.

blueflow 5 days ago

I'm curious about the content these people moderated. What is it that fucks people up when they see it?

  • crystal_revenge 5 days ago

    From the first paragraph of the article:

    > post-traumatic stress disorder caused by exposure to graphic social media content including murders, suicides, child sexual abuse and terrorism.

    If you want a taste of the legal portion of this, just go to 4chan.org/gif/catalog and look for a "rekt", "war", "gore", or "women hate" thread. Watch every video there for 8-10 hours a day.

    Now remember, this is the legal portion of the content moderated, as 4chan does a good job these days of removing the illegal content mentioned in the list above. So all these examples will be a milder sample of what moderators deal with.

    And do remember to browse for 8-10 hours a day.

    edit: it should go without saying that the content there is deep in the NSFW territory, and if you haven't already stumbled upon that content, I do not recommend browsing "out of curiosity".

    • dyauspitr 5 days ago

      As someone who grew up with 4chan, I got pretty desensitized to all of the above very quickly. The only thing I couldn't watch was animal abuse videos. That was all years ago though; now I'm fully sensitized to all of it again.

      • numpad0 5 days ago

        Accounts like yours and this report of PTSD don't line up, yet both are credible. What's driving moderators crazy but not Old Internet vets?

        Could it be:

          - the fact that moderators are hired and paid  
          - that kids are young and a lot more tolerant  
          - that moderators aren't intended audiences  
          - backgrounds, sensitivity in media at all  
          - the amount, of disturbing images  
          - the amount, in total, not just bad ones  
          - anything else?
        
        Personally, I suspect that differences in exposure to _any kind of media_ might be a factor; I've come across stories online implying that visiting and staying in places like Tokyo can almost drive people crazy from the amount of stimuli alone.

        Doesn't it sound a bit too shallow and biased to conclude that it was specifically CSAM, or whatever other specific type of data, that did it?

        • dyauspitr 4 days ago

          Because intent and perspective play a huge role in how we feel about things. Some people are excited for war, and killing their enemies makes them feel better about themselves, whereas others come back from war broken and traumatized. A lot of it depends on how you, or others, frame that experience for you.

      • sandspar 5 days ago

        The point is that you don't know which one will stick. Even people who are desensitized will remember certain things, a person's facial expression or a certain sound or something like that, and you can't predict which one will stick with you.

      • azinman2 5 days ago

        Did your parents know what you were seeing? Any advice to others on keeping kids from seeing this kind of stuff, let alone getting desensitized to it?

        What drew you to 4chan?

        • dyauspitr 5 days ago

          Of course not. What drew me in was the edginess. What kept me there was the very dark but funny humor. This was in 2006-2010, it was all brand new, it was exciting.

          I have a kid now, and my plan is to not give her a smartphone or social media till she's 16 and to heavily monitor internet access until she's at least 12. Obviously I can't control what she will see with friends, but she goes to a rigorous school and I'm hoping that will keep her busy. Other than that, I'm hoping the government comes down hard on social media access for kids and teenagers and that the restrictions are legally codified by the time she's old enough.

      • 6yyyyyy 5 days ago

        That fucking guy torturing monkeys :(

  • bdangubic 5 days ago

    things that you cannot unsee, the absolute worst of humanity

  • kernal 5 days ago

    There was a report by 60 minutes (I think) on this fairly recently. I’m not surprised the publicity attracted lawyers soon after.

pllbnk 5 days ago

There have been multiple instances where I received invites or messages from obvious bots: users with no history, a generic name, a sexualised profile photo. I would always report them to Facebook, only to receive a reply an hour or a day later that no action had been taken. This suggests there is no human in the pipeline, and that probably only the stuff that doesn't pass their abysmal ML filter goes to actual people.

I also have a relative whose profile is stuck: they can't change any contact details, neither email nor password, because the FB account center doesn't open for them. Again, there is no human support.

Big Tech companies should be mandated by law to have a number of live, reachable support people that is a fixed fraction of their user count (for example, one per 100,000 users would mean 30,000 support staff for a 3-billion-user platform). Then they would have no incentive to inflate their user numbers artificially. As for the moderators, there should also be a strict upper limit on the amount of content (content tokens, if you will) they view during a work day. Then the companies would also be more willing to limit the amount of content on their systems.

Yeah, it's bad business for them but it's a win for the people.

wkat4242 5 days ago

I have several friends who do this work for various platforms.

The problem is, someone has to do it. These platforms are mandated by law to moderate, or else they're responsible for the content their users post. And the companies cannot shield their employees from it, because the work simply needs doing. I don't think we can really blame the platforms (though the remuneration could be higher for this tough work).

The work suits some people better than others, the same way some people could never be a forensic doctor performing autopsies. Some have better detachment skills.

All the people I know who do this work have 24/7 psychologists on site (most of them can't work remotely due to the private content they work with). I do notice, though, that most of them have an "Achilles heel": they shrug most things off without a second thought, but there are always one or two specific things or topics that haunt them.

Hopefully AI will eventually be good enough to deal with this. It sucks for their jobs, of course, but it's not the kind of job anyone really does with pleasure.

  • ternnoburn 5 days ago

    "Someone has to do it" is a strong claim. We could instead not have the services that require it.

    • EdgeSlash 5 days ago

      Absolutely. The platforms could reasonably easily stop allowing anonymous accounts. They don't because more users means more money.

      • ternnoburn 5 days ago

        Not what I was saying. I'm questioning the need for the thing entirely.

      • wkat4242 5 days ago

        Uh, no, I'm not giving up my privacy because a few people want to misbehave. Screw that. My friends know who I am, but the social media companies shouldn't have to.

        Also, it would make social media even more fake than it already is, with everyone trying to look as polished as possible. Just like LinkedIn is now. It's sickening, all these people toeing the company line even though they do nothing but complain when you speak to them in person.

        And I don't think it would actually solve the problem. People would find ways to get through the validation with fake IDs.

  • Der_Einzige 5 days ago

    So brown and black people in the third world, who often find that this is their only meaningful form of social mobility, are the "someone" by default? Because that's the de facto world we have!

    • wkat4242 5 days ago

      That's not true at all. All the people I speak of are here in Spain. They're generally just young people starting a career. Many of them actually end up in the fringes of cybersecurity work (user education etc.) because they've seen so many scams. So it's the start of a good career.

      Sure, some companies also outsource to Africa, but that doesn't mean this work is only done in third-world countries. And there aren't that many jobs in it. It's more than possible to find enough people who can stomach it.

      There was another article a few years back about the poor state of mental health of Facebook moderators in Berlin. This is not exclusively a poor-people problem. It's more of a wrong-people-for-the-job problem.

      And of course we should look harder at why this is their only form of social mobility, if that's really the case.

Eumenes 5 days ago

What do you call ambulance chasers who go after tech companies? Because this is that.

CuriousRose 4 days ago

I wonder if using AI to render images and video in a less realistic style before they reach moderators, while preserving the content, would reduce trauma by creating an artificial barrier to seeing human torture. We used to watch cartoons as kids with people being blown to pieces.

kittikitti 4 days ago

Reddit mods could learn a thing or two from these people.

atleastoptimal 5 days ago

Obvious job that would benefit everyone for AI to do instead of humans.

xvector 5 days ago

This is the one job we can probably automate now.

toomanyrichies 5 days ago

One terrible aspect of online content moderation is that, no matter how good AI gets and no matter how much of this work we can dump in its lap, to a certain extent there will always need to be a "human in the loop".

The sociopaths of the world will forever be coming up with new and god-awful types of content to post online, which current AI moderators haven't encountered before and which therefore won't know how to classify. It will therefore be up to humans to label that content in order to train the models to handle that new content, meaning humans will have to view it (and suffer the consequences, such as PTSD). The alternative, where AI labels these new images and then uses those AI-generated labels to update the model, famously leads to "model collapse" [1].
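
A minimal sketch of that loop, with hypothetical names: items the model can't classify confidently are routed to human labelers, and their labels (not the model's own guesses) go back into training.

    import random

    label_queue = []  # items a human must label; labels feed back into training

    def model_confidence(item) -> float:
        # Stand-in for a real classifier's confidence on this item.
        return random.random()

    def route(item, threshold: float = 0.9) -> str:
        # Novel content the model is unsure about goes to a person; training
        # on human labels rather than AI-generated ones is what avoids the
        # model-collapse failure mode described above.
        if model_confidence(item) < threshold:
            label_queue.append(item)
            return "needs_human_label"
        return "auto_handled"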

Short of banning social media at a societal level, or abstaining from it at an individual level, I don't know that there's any good solution to this problem. These poor souls are taking a bullet for the rest of us. God help them.

1. https://en.wikipedia.org/wiki/Model_collapse

percentcer 5 days ago

it's kinda crazy that they have normies doing this job

  • istjohn 5 days ago

    Normies? As opposed to who?

    • medvezhenok 5 days ago

      [flagged]

      • loriverkutya 5 days ago

        I'm not sure what is behind your assumption. If it's the "autistic people do not have empathy" myth, please read up on the topic.

        • wkat4242 5 days ago

          Autistic people do have empathy; it just works differently. Most of them are actually really caring, just not very good at showing it, nor at picking up others' feelings. But in my experience they do care.

          Most of the autistic people I know would have more difficulty with this type of work, not less, because they don't tend to process it as well. That includes me, as I have some autistic tendencies. No way I could do this.

        • ternnoburn 5 days ago

          Autistic people often have stronger empathy than neurotypical people, sometimes much, much stronger, especially towards animals.

      • xvector 5 days ago

        I'd wager they'd still get PTSD, but wouldn't be able to communicate it as well as a typical person.

        What you really want is AI doing this job. Or psychopaths/unempathetic people if that's not an option.

blackeyeblitzar 5 days ago

[flagged]

  • prng2021 5 days ago

    You think people who took these jobs had a list of job offers and were jumping for joy to pick this one? Or that they stuck with it after the first five minutes of moderating necrophilia because they believed other jobs would have similar downsides? You're out of touch with the real world and the hardships people face trying to make a living for themselves and their families.

    • janderson215 5 days ago

      I’m curious of other perspectives and conclusions on this.

      Why do you think Facebook is the responsible party and not the purveyors of the content that caused them PTSD? From my perspective, Facebook hired people to prevent this content from reaching a wider audience. Thanks for any insight you can provide.

      • prng2021 5 days ago

        I never said Facebook is the responsible party. I'm saying these workers deserve our sympathy, and that this isn't a case of people who had a simple choice and willingly chose a job that caused them PTSD.

        I don't think Facebook is blameless, though. They practically brag about their $40B of AI spend per year and absolutely brag about how advanced their AI is. You can't direct some of that R&D toward flagging content that's instantly recognizable as disgusting, like pedophilia, necrophilia, and bestiality? There's already a ton of pre-labeled data they could use from all these workers. No, they don't get a pass on that. I think it's shameful that they focus all their AI compute and engineering on improving targeted ads without putting a major focus on solving this specific problem that's directly hurting so many people.

        • janderson215 5 days ago

          Very good point. Thanks for taking the time to respond and for your thoughtfulness!

      • HarryHirsch 5 days ago

        Maybe the solution is that Facebook shouldn't exist. It solves both the problem of distribution and the problem of moderation.

        • blackeyeblitzar 5 days ago

          If others want to moderate why should these complainers get in the way? They are free to not take the job, which obviously involves looking at repulsive content so others don’t have to. Most people don't have a problem with social media existing or moderators having the job of a moderator.

          • janderson215 4 days ago

            It's a little funny that we have heard concerns about AI eliminating jobs, but now we're hearing concern that AI hasn't taken this particular job.

            Makes me wonder what the number of people is who also have this job who don’t want this lawsuit to happen for exactly that reason - that their job will be eliminated.

            120 people out of how many? Tens of thousands globally?

        • janderson215 5 days ago

          While that would solve the problem within Facebook, I think you're kidding yourself if you think it would stop the demand for, or supply of, horrible content.

  • AriedK 5 days ago

    At first glance you may have a point. The thing is, they're often recruited with very promising job titles and descriptions, and trained on mild cases. Once they fully realize what they've gotten themselves into, the damage has been done. If they're unlucky, quitting also means losing their house. This may help you empathize a bit with their side of the argument.

  • gklitz 5 days ago

    If you pay someone to deliver post and they get their leg blown off because you ordered them through a minefield, you can't just avoid responsibility by saying "that's what they signed up for". Obviously the responsibility for ensuring that the job can be carried out safely lies with the employer, and workers are well within reason to demand compensation if the employer hasn't ensured that.

    • eesmith 5 days ago

      I think a better example is mining, where miners received no safety equipment, and the mines were not built with safety foremost.

      The idea was, if you didn't like it, leave. If you wanted safety equipment, buy it yourself. Or leave. Can't work due to black lung disease partially from poor ventilation the company was responsible for? You're fired; should have left years ago.

      There are still people who believe the contract is all that counts, nothing else matters, and if you don't like it, leave.

  • throw_m239339 5 days ago

    > It’s the job they signed up for. I don’t understand the complaint. If they don’t want to do the part of the job that is obviously core to it, they should move on. The mass diagnosis just seems like a tactic to generate “evidence”. And the mention of pay compared to other countries makes this look like a bad faith lawsuit to get more compensation.

    It's also their right to sue their employer for damages if they believe the job harmed them in an extreme way. Signing up for a job doesn't put the employer above the law.

    But some here can't fathom that workers also have rights.

  • thrance 5 days ago

    Exploited people of the world should just pull themselves up by their bootstraps and work harder to get what they want, like you did?

    • blackeyeblitzar 5 days ago

      They aren't exploited. They're paid money in return for viewing and filtering content so others don't have to. They could decline the offer, or not apply at all, and look at other jobs. The availability of this job doesn't change the rest of their employment options. And it's pretty clear what this job is: if it were just looking at friendly content, it wouldn't need to exist.

      • crystal_revenge 5 days ago

        Exploitation nearly always involves paying. Plenty of people caught up in sex trafficking still get paid; they just don't have a viable way out. Plenty of people working in sweatshops still get paid, but again not enough, and without viable alternatives to get out.

        • blackeyeblitzar 5 days ago

          You’re still not acknowledging the key points - that it is obvious up front that the job fundamentally involves looking at content others don’t want to, and that it is a new job that can be accepted or avoided without taking away from other employment opportunities. Therefore it doesn’t match these other situations you’re drawing a comparison to.

          • tovej 5 days ago

            Most of these people are forced to take these jobs, because nothing else is available, they don't have the power to avoid this job. You cannot make a principled decision if your basic needs, or those of your family are not met. In fact, many well-off, privileged people who are simply stressed cannot make principled decisions if their livelihood is at stake.

            The world is not a tabula rasa where every decision is made in isolation, you can't just treat this like a high school debate team exercise.

            Not acknowledging the social factors at play borders on bad faith in this case. The social conditions of the moderators are _the_ key factor in this discussion. The poorer you are, the more likely you are to be forced to take a moderator job, and the more likely you are to get PTSD. Our social and economic systems are essentially punishing people for being poor.

efitz 5 days ago

I have a lot of questions.

The nature of the job really sucks. This is not unusual; there are lots of sucky jobs. So my concern is really whether the employees were informed what they would be exposed to.

Also I’m wondering why they didn’t just quit. Of course the answer is money, but if they knew what they were getting into (or what they were already into), and chose to continue, why should they be awarded more money?

Finally, if they can't count on employees in poor countries to self-select out when the job becomes life-impacting, maybe they should make it a temporary gig, e.g. only allow people to do it for short periods of time.

My out-of-the-box idea is: maybe companies that need this function could interview with an eye towards selecting psychopaths. This is not a joke; why not select people who are less likely to be emotionally affected? I’m not sure anyone has ever done this before and I also don’t know if such people would be likely to be inspired by the images, which would make this idea a terrible one. My point is find ways to limit the harm that the job causes to people, perhaps by changing how people interact with the job since the nature of the job doesn’t seem likely to change.

  • marcinzm 5 days ago

    So you're expecting these people to have the deep knowledge of human psychology to know ahead of time that this is likely to cause them long-term PTSD, and what impact that will have on their lives, versus simply being something they'll get over a month after quitting?

    • efitz 5 days ago

      I don’t think it takes any special knowledge of human psychology to understand that horrific images can cause emotional trauma. I think it’s a basic due diligence question that when considering establishing such a position, one should consult literature and professionals to discover what impact there might be and what might be done to minimize it.

bdangubic 5 days ago

I wish they'd get a trillion dollars, but I am sure they signed their lives away via waivers and whatnot when they got the job :(

  • zuminator 5 days ago

    Maybe so, but in places with good civil and human rights, you can't sign them away via contract, they're inalienable. If Kenya doesn't offer these protections, and the allegations are correct, then Facebook deserves to be punished regardless for profiting off inhumane working conditions.

neilv 5 days ago

If I were a tech billionaire, and there was so much uploading of stuff so bad that it was giving my employees/contractors PTSD, I think I'd find a way to stop the perpetrators.

(I'm not saying that I'd assemble a high-speed yacht full of commandos, who travel around the world, righting wrongs when no one else can. Though that would be more compelling content than most streaming video episodes right now. So you could offset the operational costs a bit.)

  • DiggyJohnson 5 days ago

    How else would you stop the perpetrators?

    • abdullahkhalids 5 days ago

      Large-scale and super-sick perpetrators exist (as compared to small-scale ones who do mildly sick stuff) because Facebook is a global network and there is a benefit to operating on such a large platform. The sicker you are, while getting away with it, the more reward you get.

      Switch to a federated social system like Mastodon, with only a few thousand or ten thousand users per instance, and perpetrators will never be able to grow too large. It's easy for the moderators to shut stuff down very quickly.

      • shadowgovt 5 days ago

        Tricky. It also gives perpetrators a lot more places to hide. I think the jury is out on whether a few centralized networks or a fediverse makes it harder for attackers to reach potential targets (or customers).

        • abdullahkhalids 5 days ago

          The purpose of Facebook moderators (besides legal compliance) is to protect normal people from the "sick" people. In a federated network, of course, such people will create their own instances and hide there. But then no one is harmed by them, because all such instances will be banned quite quickly, the same way spam email hosts are blocked very quickly by everyone else.

          From a normal person's perspective of not seeing bad stuff, the design of a federated network is inherently better than that of a global network.

          • shadowgovt 5 days ago

            That's the theory. I'm not sure yet that it works in practice; I've seen a lot of people on Mastodon complaining about how, as a moderator, keeping up with the bad servers is a perpetual game of whack-a-mole because access is on by default. Maybe this is a Mastodon-specific issue.

            • abdullahkhalids 5 days ago

              That's because Mastodon and other federated social networks haven't taken off, and so not enough development has gone into them. If they take off, people will naturally develop analogs of spam lists and SpamAssassin for such systems, which will cut down moderation time significantly. I run an org email server, and don't do much of anything besides installing such automated tools.

              On Mastodon, admins will just have to do the additional work to make sure new accounts are not posting weird stuff.
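
              (To make that concrete: here is a minimal sketch, in Python, of the kind of shared-blocklist tooling I have in mind. The list URL and the domain/severity CSV format are hypothetical, loosely modeled on the blocklist CSVs Mastodon admins pass around; a real tool would push the results through the instance's admin API instead of printing them.)

                import csv
                import io
                import urllib.request

                # Hypothetical community-maintained blocklist: a CSV with a
                # "domain,severity" header row, one row per bad instance.
                BLOCKLIST_URL = "https://example.org/shared-blocklist.csv"

                # Domains this instance already blocks; in practice this would
                # be read from the instance's own admin database or API.
                local_blocks = {"spam.example.net"}

                def fetch_blocklist(url):
                    """Download the shared list and return {domain: severity}."""
                    with urllib.request.urlopen(url) as resp:
                        text = resp.read().decode("utf-8")
                    reader = csv.DictReader(io.StringIO(text))
                    return {row["domain"]: row["severity"] for row in reader}

                def new_suspensions(shared, local):
                    """Domains the shared list suspends that we don't block yet."""
                    return sorted(domain for domain, severity in shared.items()
                                  if severity == "suspend" and domain not in local)

                if __name__ == "__main__":
                    shared = fetch_blocklist(BLOCKLIST_URL)
                    for domain in new_suspensions(shared, local_blocks):
                        # A real tool would suspend via the admin API here;
                        # printing keeps the sketch side-effect free.
                        print("would suspend:", domain)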

        • mu53 5 days ago

          Big tech vastly underspends in this area. You can find a stream of articles from the last 10 years where big tech companies were allowing open child prostitution, paid-for violence, and other stuff on their platforms with little to no moderation.

      • throwie21873 5 days ago

        > Switch to a federated social system like Mastodon, with only a few thousand or ten thousand users per instance, and perpetrators will never be able to grow too large.

        The #2 and #3 most popular Mastodon instances allow CSAM.

  • thrance 5 days ago

    If you were a tech billionaire you'd be a sociopath like the others and wouldn't give a single f about this. You'd be going on podcasts to tell the world that markets will fix everything if given the chance.

    • richrichie 5 days ago

      They are not wrong. Do you know of any mechanism other than markets that works at scale, doesn’t cost a bomb, and doesn’t involve an abusive central authority?

      • thrance 5 days ago

        Tech billionaires usually advocate for some kind of return to the Gilded Age, with minimal workers' rights and corporate taxes. Markets were freer back then; how did that work out for the average man? Markets alone don't do anything for the average quality of life.

        • richrichie 5 days ago

          Quality of life for the average man now is way better than it was at any time in history. A fact.

          • thrance 5 days ago

            But is it solely because of markets? Would deregulation improve our lives further? I don't think so, and that is what I am talking about. Musk, Bezos, Andreessen and co. are advocating for a particular laissez-faire flavor of capitalism, which historically has been very bad for the average man.

  • llm_trw 5 days ago

    [flagged]

    • almog 5 days ago

      "More than 140 Facebook content moderators have been diagnosed with severe post-traumatic stress disorder caused by exposure to graphic social media content including murders, suicides, child sexual abuse and terrorism."

      What part here are you suggesting is similar to seeing two men kissing?

      • Der_Einzige 5 days ago

        Not defending this person in particular, but you should take a look at how anti-LGBT most countries in Africa are, including at the government level. Maybe a decent number of them do regard seeing homosexuality as PTSD-inducing.

        There are several places where they legally can and will kill you for homosexuality.

        • almog 4 days ago

          The existence of anti-LGBTQ sentiment wasn't where their argument was leading, though.

          Their line of logic was that our society's moral values are but a social construct that changes from place to place and over time; therefore, being exposed to sexual violence, child abuse, gore, etc. is PTSD-inducing only because we're examining it through our limited perspective, whereas it's quite possible that all the things those FB mods were exposed to could be perfectly normal in some cultures.

          I wanted to see where that argument would lead them, as in: what kind of people should FB have hired who would be resistant to all this horrible stuff? But other than letting me know that such cultures do in fact exist, I never got a straight answer out of them.

      • llm_trw 5 days ago

        [flagged]

        • almog 5 days ago

          But we're not talking about an '80s Christian mom, since you proceeded to make the observation that "People are outraged by whatever they are told to be outraged over and ignore everything that they aren't".

          Which is to say, being exposed to extremely violent/abusive content could only cause PTSD if one is subject to a specific social construct that defines these acts in a certain way. Let's just assume you're right: what kind of employees would that imply are immune to getting PTSD from such content, given your previous observation?

          • llm_trw 5 days ago

            The answer is in the question: Whoever has been raised in a culture that doesn't make a big deal out of whatever the given content is.

            • almog 5 days ago

              And what culture is that?

        • JTyQZSnP3cQGa8B 5 days ago

          You're lying and you know it. I remember the '80s as much as anyone else. Especially the part where Elton John and Freddie Mercury were at the peak of their popularity, unless you were living in a religious shithole, but that was (and still is) a small part of the world.

          • llm_trw 5 days ago

            In 1980, 75% of adults thought that homosexuality was always wrong, and 20% that it was never wrong or only sometimes wrong.

            https://lgbpsychology.org/html/gss4.html

            https://lgbpsychology.org/html/prej_prev.html

            Your feelings about the period mean nothing.

            • noduerme 5 days ago

              Seeing something you think is culturally wrong is not necessarily traumatizing, is it? And surely there are degrees of "wrongness", ranging from the merely uncomfortable to the truly gross to the utterly horrifying. Even within the horrifying category, one can differentiate between things like a finger being chopped off and e.g. disembowelment. It's reasonable to expect a person would be more traumatized the further up the ladder of horror they're forced to look.

              • llm_trw 5 days ago

                This would be believable if not for the fact that hanging, gutting, and quartering was considered good wholesome family entertainment to watch while getting fast food in the market not three centuries ago, literally everywhere.

                • noduerme 5 days ago

                  How did that go away, if people only do what they're brought up to do?

                  • llm_trw 5 days ago

                    We found out that keeping people homeless to die of exposure slowly was much more effective at keeping the majority in line.

                    • noduerme 2 days ago

                      No one is actively trying to keep people homeless or watch them die on the streets, much less use that as a lesson to keep people in line. Drawing and quartering is way more scary and effective if you want to make people obey. You're just mixing nihilistic bitterness with outright nonsense.

            • wkat4242 5 days ago

              That depends seriously on the country; the Netherlands was way ahead there. In many ways more ahead than it is now, because it has become so conservative lately.

          • ipaddr 5 days ago

            In what '80s fantasy were you living where gay people were open? Rumor was that John was bisexual, and Freddie getting AIDS in the late '80s was a huge deal. Queen's peak was around 1992 with Wayne's World. No men kissed on stage or in movies, and neither did women.

        • FireBeyond 5 days ago

          It might have induced 'disgust', but no, two men kissing didn't give 1980s Christian moms actual PTSD.

    • Yiin 5 days ago

      You have no idea what content is being discussed here if you even think about bringing identity politics into this topic.

    • UniverseHacker 5 days ago

      Equating outrage to PTSD is absolute nonsense. As someone who lives with a PTSD sufferer, I can say it is an extremely severe and debilitating condition that has nothing to do with “outrage” and can’t be caused by seeing people kiss.

      • llm_trw 5 days ago

        PTSD is what happens when you see someone standing next to you reduced to a chunky red salsa in a split second.

        The idea that seeing images of that can match the real thing could only come from people who haven't smelled the results.

        • UniverseHacker 5 days ago

          You’re very wrong; it can be caused by different things for different people. As the causes are emotional, it requires severe emotional trauma, which does not have to come from one specific category of event; a lot of different types of trauma and abuse can cause it.

          It’s hard to imagine a more disgusting thought process than someone trying to gatekeep others suffering like you are doing here.

          • fsckboy 5 days ago

            Actually, not everybody gets PTSD in, for example, a combat situation, and Gabor Maté says that the people who do develop PTSD are those who have already suffered traumas as children; in a sense, childhood trauma is a preexisting condition.

            • UniverseHacker 5 days ago

              A lot of PTSD is also not from combat at all; childhood emotional trauma alone can cause it. This is recognized now, but it took a while, because the condition was initially discovered in war veterans and other groups were categorically excluded; eventually it was found that war wasn’t unique in causing it.

              However, I would point out that Maté’s views are controversial and don’t fully agree with other research on trauma and PTSD. He unrealistically associates essentially all mental illness and neurodivergence with childhood trauma, even in cases where good evidence contradicts that view. He claims ADHD is caused by childhood emotional trauma, although that has been shown not to be the case, so I don’t put much stock in his scientific reasoning abilities; he has his hammer and sees everything as a nail.

          • llm_trw 5 days ago

            You were literally gatekeeping PTSD from Christian moms not one post ago.

            • UniverseHacker 5 days ago

              HN ethos is to assume good faith, but my imagination is failing me here as to how you might be sincere and not trolling. Can you please share more info to help me out?

              What makes you think people have experienced clinically diagnosed or diagnosable PTSD from seeing someone kiss? Has anyone actually claimed that?

              You used the word outrage, and again, outrage is not trauma; it describes an outer reaction, not an inner experience. They’re neither mutually exclusive nor synonymous.

              Your assertion seems to be that only being physically present for a horrific event can be emotionally traumatic; that spending years sitting in a room watching media of children being brutally murdered and raped, day in and day out, cannot possibly be traumatic, but watching someone kiss whom you politically think should be banned from doing so can be?

    • shadowgovt 5 days ago

      In this context, this is dangerously close to asserting "people are only outraged about CSAM because they're told to be." I don't think that's what you mean.

      • llm_trw 5 days ago

        It is exactly what I mean.

        If you don't nurture that outrage every day, then you'd be rather surprised what can happen to a culture in a single generation.

        • shadowgovt 5 days ago

          I'm trying to interpret this post in the best light and, regrettably, I'm failing.

          Can you clarify what you think the change to society will be if we expose more people online to CSAM and normalize it?

        • noduerme 5 days ago

          I think your logic is backwards. The main reason for a culture to ban pedophilia is because it causes trauma in children. For thousands of years, cultures have progressed towards protecting children. This came from a natural sense of outrage in a majority of people, which became part of the culture. Not vice versa. In many of your comments, you seem to assume that people are only automatons who think and do exactly what their culture teaches them, but that's not the truth. The culture is made up of individuals, and individual conscience frequently - thankfully - overrides cultural diktat. Otherwise no dictatorship would ever fall, no group of people would ever be freed, and no wicked practices would ever be stamped out. It has always been individual people acting against the culture whose outrage has caused the culture to change. Which strongly implies that people's sense of outrage is at least partly intrinsic to human nature, totally apart from cultural practices of the time.

          • llm_trw 5 days ago

            I'm now old enough to have seen people who treated homosexuals in the 1980s the same way we treat pedophiles today start waving rainbow flags and calling the people they beat up for being gay in high school Nazis.

            There may be a few people with principles who stick with them.

            The majority will happily shove whoever they are told to into a gas chamber.

            • noduerme 5 days ago

              I'm not saying there aren't a lot of people who are natural conformists, who do whatever they're told to, and hate or love whatever the prevailing culture hates or loves. They may be a majority. And yes, a prevailing culture can take even the revulsion at murder out of people to some extent (although check out the state-sanctioned degree of alcohol and drug use among SS officers and you'll see it's not quite so easy to make people commit acts of murder and torture every day).

              What I am saying is that the conformists don't drive the culture, they're just a blunt weapon of whoever is driving the culture. That weapon can be turned toward gay rights or toward burning people at the stake, but what changes a culture are the individuals with either a conscience or the individuals with wicked plans. Both of which exist outside the mainstream in any time and place.

              Maybe another way of saying this is that I think most people are capable of murder, and most people are capable of empathy (and therefore trauma) toward someone being tortured, but primarily they're concerned with being a good guy. What throws the arc of history towards a higher morality is that maybe >0% of people naturally need to perceive themselves as "good" by defending life and the dignity and humanity of other people, to the extent that needing to be a good person overrides their cultural programming. And those are not the only people who change a culture, but 51% of the time they change it for the better instead of the worse.

              That's just my view on it.

            • shadowgovt 5 days ago

              Wait, why are they calling gay people Nazis? This story is very unclear. And I can't see how it relates to CSAM and the moderators who have to see it, which is a categorically different issue to homosexuality, so different as to be completely unconflatable.

sneak 5 days ago

Perhaps if looking at pictures of disturbing things on the internet gives you PTSD, then this isn’t the kind of job for you?

Not everyone can be a forensic investigator or coroner, either.

I know lots of people who can and do look at horrible pictures on the internet and have been doing so for 20+ years with no ill effects.

  • wruza 5 days ago

    It isn’t known in advance, though. These people went to that job and got psychiatric diseases that, considering the third-world conditions, they are unlikely to get rid of.

    I’m not talking about obvious “scream and run away” reaction here. One may think that it doesn’t affect them or people on the internet, but then it suddenly does after they binge it all day for a year.

    The fact that no less than 100% got PTSD should tell us something here.

  • eesmith 5 days ago

    The 100+ years of research on PTSD, starting from shell-shock studies in WWI, shows that PTSD isn't so simple.

    Some people come out with no problems, while their trenchmate facing almost identical situations suffers for the rest of their lives.

    In this case, the claim is that "it traumatised 100% of hundreds of former moderators tested for PTSD … In any other industry, if we discovered 100% of safety workers were being diagnosed with an illness caused by their work, the people responsible would be forced to resign and face the legal consequences for mass violations of people’s rights."

    Do those people you know look at horrible pictures on the internet for 8-10 hours each day?

  • luqtas 5 days ago

    perhaps life in Kenya isn't as easy as yours?

  • Sharlin 5 days ago

    [flagged]

    • sneak 5 days ago

      I didn’t make any claims about me. Read it again, more carefully, before making personal attacks.

      • doublerabbit 5 days ago

        > Perhaps if looking at pictures of disturbing things on the internet gives you PTSD, then this isn’t the kind of job for you?

        Perhaps these are jobs people are forced to take because labour is so much cheaper there than in other countries, or because they've been trafficked and the like.

        > I know lots of people who can and do look at horrible pictures on the internet and have been doing so for 20+ years with no ill effects.

        Looking at it is different from moderating it. I've seen my fair share of snuff, from the first Iraqi having their head cut off in 2005 all the way down to the ogrish/liveleak, goatse, tubgirl, and 2girls1cup shock sites.

        But when you are faced with gruesome imagery day in, day out, on 12-hour shifts if not longer, non-stop, while being paid very little, it would take a toll on anyone.

        I've done it: lone-wolf sysop for an adult dating website for two years, and the stuff I saw was moderate but still left me feeling mentally disturbed. The normality wears off very quickly.

        Could you work a five-day week looking at extremely obscene imagery for $2 an hour?

        • blackeyeblitzar 5 days ago

          The alternative is they have no job. And it is clear what this job entails, so complaining about the main part of the job afterwards, as this small group of moderators is doing, seems disingenuous.

    • blackeyeblitzar 5 days ago

      Leave these empty personal attacks off HN, please. Respond substantively.