• KoboldCoterie@pawb.social · 19 hours ago

    Here’s a thought experiment: imagine Instagram, but every single post is a video of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems. Is anyone addicted? Is anyone harmed? Is anyone suing?

    Of course not. Because infinite scroll is not inherently harmful. Autoplay is not inherently harmful. Algorithmic recommendations are not inherently harmful. These features only matter because of the content they deliver. The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.

    This feels like an awful argument to make. It's not the presence of those features that makes Meta and co. so shit; it's the fact that they provably understood the risks and the effects their design was having, knew that it was harming people, and continued to do it anyway. I don't care if we're talking about a little forum run by a grandma and grandpa talking about their jam recipes; if they know that they're causing harm and don't change their behavior, they should be liable.

      • KoboldCoterie@pawb.social · 17 hours ago

        It’s like if someone had a forum where insurrectionists were discussing how to build bombs and where they were going to use them, and the owners had an internal meeting where they said, “Hey, we’re hosting some pretty awful people, should we maybe report them or shut this down?” and the answer was, “Nah, they’re paying users, and we want their money.”

        Pretty sure Section 230 wouldn’t protect them, either.

    • Chulk@lemmy.ml · 18 hours ago

      Yeah this feels very much like, “censor content, but don’t change Meta’s practices”

      Which raises the question: does the author know what they're cheering for?

    • XLE@piefed.social · 18 hours ago

      It’s like he’s describing a slot machine with unpainted wheels, leaving out the context that it’s in a casino with a big “paint me and enjoy a share of the profit” sign above it.

      The social media machine was designed to be a self-serve addiction generator. It intentionally used every trick it could legally get away with.

    • Avid Amoeba@lemmy.ca · 19 hours ago

      Also, they can now generate content without users, which they already do a lot of on Facebook.

    • lmmarsano@group.lt · 9 hours ago (edited)

      I don’t know. Seems like self-control issues. People can get addicted to anything: shopping, sex, internet use, work, gaming, exercise. I also disagree with prohibitions on gambling, drug use, prostitution: it’s their money, their body, etc.

      Penalizing systems of communication & information delivery seems like overreach. The harm seems phony & avoidable with basic self-control.

      • KoboldCoterie@pawb.social · 11 hours ago (edited)

        Addictive personality is a proposed set of traits that makes sufferers more vulnerable to developing addictive behaviors, including things like gambling or social media. Does it help to frame it in a different light if you think of it as those companies exploiting vulnerable people's disorders to extract money from them?

        Telling those people to just have self control is like telling someone with depression to just stop being sad.

  • Sundray@lemmus.org · 18 hours ago

    Surprise, surprise. If you go through Techdirt's archives, you can see Mike Masnick has spent thousands of words losing his shit any and every time Facebook has faced ANY criticism. I don't know if he has a financial interest in them (like he does with Bluesky), but the moment someone suggests reining them in, here comes Masnick to defend one of the richest, most lawyered-up companies around.

    • XLE@piefed.social · 18 hours ago

      Mike Masnick is on the Bluesky board of directors. Could this position be affecting his judgment on this specifically? Because usually I expect Techdirt, and Mike himself, to be much more reasonable.

  • zerofk@lemmy.zip · 19 hours ago (edited)

    This distinction — between “design” and “content” — sounds reasonable for about three seconds. Then you realize it falls apart completely.

    Bull fucking shit. This is not about platforms being held responsible for user content. This is about adding points and badges and achievements and all kinds of things designed to reward engagement with dopamine.

    The author’s example of all content being drying paint would absolutely be addictive if the platform added an achievement for watching 10 different colours. Or: Congratulations, you’ve watched paint dry for 100 hours! As a reward, you get a new fancy emote! THAT is what these platforms do, and that is what is addictive. And that is what they’ve been convicted for.

    It is not a loophole to get around Section 230, as the author claims.
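
The achievement mechanic zerofk describes can be made concrete with a short sketch. The milestones, hours, and reward names below are invented for illustration; no real platform's API or thresholds are implied:

```python
# Hypothetical sketch of a watch-time achievement system: the platform layer
# (not the content) decides when to fire a reward. All thresholds and reward
# names here are made up for illustration.
WATCH_MILESTONES = {
    10: "new colour badge",    # "watched 10 different colours"
    100: "fancy paint emote",  # "watched paint dry for 100 hours"
}

def rewards_unlocked(hours_watched: int) -> list[str]:
    """Return every reward whose threshold the user has reached."""
    return [reward for threshold, reward in sorted(WATCH_MILESTONES.items())
            if hours_watched >= threshold]

print(rewards_unlocked(100))  # both milestones reached
```

Note that the reward logic never inspects what the videos contain, which is the commenter's point: the hook lives in the design layer, not in the user-generated content.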

    • nickiwest@lemmy.world · 12 hours ago

      Let’s not forget the years of literal psychological experiments that Meta conducted on its users to find out exactly what factors led to higher engagement.

      This isn’t a simple message board. This is a highly-engineered, personalized content delivery system with the goal of serving as many ads as possible.

  • dhork@lemmy.world · 19 hours ago

    Normally, I am all for Techdirt’s takes. But I think this one is off the mark a bit, because I legitimately think that infinite scroll and auto play are insidious, and actually harmful enough to be treated as a dangerous design decision.

    The whole point of Section 230 is that communications companies can't be held responsible for harmful things that people transmit on their networks, because it's the people transmitting those harmful things who are actually at fault. And that was reasonable in the initial stages of the Internet, when people posted on bulletin boards (or even early social media) and harmful content had a much smaller reach. People had to "opt in," essentially, to be exposed to this content, and if they stumbled on something they found objectionable, they could easily change their focus.

    But the purpose of the infinite scroll and autoplay is to get people hooked on content. The algorithms exist to maximize engagement, regardless of the value of that engagement. I think the comparison to cigarettes is particularly apt. They are looking to hook people into actively harmful behaviors, for profit. And the algorithms don't really differentiate between good engagement and harmful engagement. Anything that attracts the user's attention is fair game.
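
The "algorithms don't differentiate" point can be sketched with a toy ranking function. The post fields, weights, and example data below are assumptions for illustration, not any platform's actual ranking model:

```python
# Toy engagement-only ranking: the score measures attention captured,
# with no term for whether that engagement is healthy or harmful.
# Field names and weights are invented for illustration.
posts = [
    {"id": "jam_recipes", "clicks": 12, "watch_seconds": 40},
    {"id": "outrage_bait", "clicks": 90, "watch_seconds": 300},
]

def engagement_score(post: dict) -> float:
    # Nothing here asks "is this good for the user?" -- only "did it hold them?"
    return post["clicks"] + post["watch_seconds"] / 60

ranked = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in ranked])  # the higher-engagement post ranks first
```

Whatever maximizes the score rises to the top, which is exactly the "fair game" behavior described above.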

    The author’s points regarding how these rulings can be abused are correct, but that doesn’t negate how fundamentally harmful these addictive practices are. It will be up to lawmakers to make sure that the laws are drafted in such a way that they can be applied equitably… (So maybe we’re screwed after all…)

  • Corkyskog@sh.itjust.works · 3 hours ago (edited)

    The author reads like he doesn’t understand context or the legal idea of a rational actor. What users are going to purposefully upload boring content?

    Here’s a thought experiment: imagine Instagram, but every single post is a video of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems. Is anyone addicted? Is anyone harmed? Is anyone suing?

    Of course not. Because infinite scroll is not inherently harmful. Autoplay is not inherently harmful. Algorithmic recommendations are not inherently harmful. These features only matter because of the content they deliver. The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.

  • BoofStroke@sh.itjust.works · 19 hours ago

    “For the children” tech laws should all be abolished. Why should I be burdened because you can’t be bothered to raise your own damned kids properly?

    • dogslayeggs@lemmy.world · 18 hours ago

      You're right, because kids have been shown to listen to their parents all the time and have never had problems handling adult situations when their parents aren't around 100% of the time. Even amazing parents raise kids who do stupid shit. And once these amazing parents aren't around their kid 100% of the time, those kids are still kids and will make bad decisions. This is especially true when it is something that literally every person around them is doing (adults, kids, friends, celebrities).

      • bootstrap@lemmy.dbzer0.com · 18 hours ago

        We all did dumb shit as kids, but tech wasn't anywhere near what it is now.

        These platforms need to be punished and held to account for the pervasive technology they have designed for profit; these things (FB, Insta, TikTok, etc.) shouldn't be able to exist in the first place in their current state. There were no guard rails put in place. Just like with the flood of AI, technology moves so much quicker than legislation can keep up, and companies do really shitty things in that gap.

        I believe it starts at a parenting level, but it's much more difficult to manage these days compared to 20 years ago. Age verification bullshit is not the answer, but parents need to be given some form of help against these fuckers and their incredibly easy-to-access addiction machines.

        • paraphrand@lemmy.world · 14 hours ago

          Kids should be banned from the platforms.

          But that requires the tools to do so. And then we are back at checking on ages and identities.

  • DFX4509B@lemmy.wtf · 18 hours ago

    This is probably an extreme take, but kids shouldn't be anywhere near a tablet, especially while they're still really young.

    • can_you_change_your_username@fedia.io · 12 hours ago

      It's kind of a tough balance. Yes, unrestricted tech use is an issue for young children, but on the other hand, using tech while young is the best way to make it a natural part of your experience of the world, and tech isn't going away. If you go to the other extreme and say no to any sort of tech, period, before a certain age, are you setting the kid back against more tech-literate peers?

      There's also the consideration that's been discussed around alcohol forever: by making it an "adult thing" and effectively a rite of passage to drink, do you cause more problems and abuse in young adults than if it were always a part of their experience and the focus was on responsible use instead of total abstinence?