• ZILtoid1991@lemmy.world · 8 points · 8 hours ago

    AI can do the heavy lifting, but it must not be treated as an infallible machine that can do no wrong unless it outright malfunctions; otherwise we get yet another YouTube, Twitch, etc.

  • arotrios@lemmy.world · 20 points · 23 hours ago

    Well, Reddit’s approach to AI and auto-mod has already killed most of the interesting discussion on that site. It’s one of the reasons I moved to the Fediverse.

    At the same time, I was around in the Fediverse during the CSAM attacks, and I’ve run online discussion sites and forums, so I’m well aware of the challenges of moderation, especially given the wave of AI chat-bots and spam constantly attempting to infiltrate open discussion sites.

    And I’ve worked with AI a great deal (go check out Jan if you’re interested - open source, runs on your local machine), and there’s no chance in hell it’s anywhere near ready to take on the role of moderator.

    See, Reddit’s biggest strength is also its biggest weakness: the army of unpaid mods who have committed untold hours to improving the site’s content. What Reddit discovered during the API debacle was that, because the mods weren’t paid, it had no way to control them aside from “firing” them. The net result was a massive loss of editorial talent, and the site’s content quality plunged as a result.

    And although a mod’s role differs in that they can’t (or shouldn’t) edit user content, they are still gatekeepers, much the way junior editors are in a print publishing organization.

    But here’s the thing - there’s a reason you pay editors: they ensure the organization’s content is of high caliber, which is why advertisers want to pay you to run their ads.

    Reddit thinks it can skip this step. Instead of doing the obvious thing - paying the mods to be professionals - they think they can solve the problem with AI much more cheaply. But AI won’t do anything to encourage people to post.

    What encourages people to post is that other people will see and comment - that real humans will engage with their content. All it takes is the automod telling you a few times that your comment was removed for some inexplicable reason, and you stop wanting to post. After all, why waste your time creating unpaid content just for a machine to reject it?

    If Reddit goes the way of AI moderation, they’ll need to start paying their content creators. If they want to use unpaid content from an open discussion forum, they need to start paying their moderators.

    But here’s the thing: Reddit CAN’T pay. They’ve been surfing on VC investment for two decades and have NEVER turned a profit, because despite their dominance of the space, they kept trying to monetize it without paying the people who contribute to it… and honestly, they’ve done a piss-poor job at every point in their development since “New Reddit” came online.

    This is why they sold your data to Google for AI. And it’s why their content has gone to crap, and why you’re all reading this on the Fediverse.

    • Ledericas@lemm.ee · 1 point · 8 hours ago

      The mods are totally complicit, though, at least in some of the subs, and the AI had a hand in the massive ban wave that’s been going on currently. It went looking for accounts that may or may not have violated any terms and banned them regardless. They’ve also increased the automod filtering in their subs.

  • Jakeroxs@sh.itjust.works · 13 up, 1 down · 23 hours ago

    I think using LLMs to HELP with moderation makes sense. The problem with all these companies is that they seem to think it’ll be perfect and they can lay off all the humans.
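
    That split - the model flags, a human decides - can be sketched in a few lines. Everything here is hypothetical: the scoring function is a keyword stand-in for whatever classifier or LLM call a real system would use.

```python
# Human-in-the-loop triage: the model only *flags* content; nothing is
# removed automatically, and a person reviews everything the model flags.
# toxicity_score() is a hypothetical stand-in for a real classifier/LLM call.

REVIEW_THRESHOLD = 0.7  # flag for human review at or above this score

def toxicity_score(text: str) -> float:
    """Keyword heuristic standing in for a real model; returns 0..1."""
    flagged_terms = {"spam", "scam"}
    words = text.lower().split()
    hits = sum(w in flagged_terms for w in words)
    return min(1.0, 5 * hits / max(len(words), 1))

def triage(comments: list[str]) -> dict[str, list[str]]:
    """Route each comment to 'publish' or a 'human_review' queue."""
    queues: dict[str, list[str]] = {"publish": [], "human_review": []}
    for c in comments:
        bucket = "human_review" if toxicity_score(c) >= REVIEW_THRESHOLD else "publish"
        queues[bucket].append(c)  # a human makes the final call on flagged items
    return queues

result = triage(["nice post, thanks", "buy now total scam spam spam"])
```

    The point of the threshold is that the expensive part - judgment - stays with a person; the model just shrinks the pile a human has to look at.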

      • Pyr_Pressure@lemmy.ca · 1 point · 3 hours ago

        I mean, what people refer to as AI today isn’t really synonymous with actual AI.

        The term has been cheapened.

        • Opinionhaver@feddit.uk · 1 point · 3 hours ago

          I don’t think it’s that. LLMs very much are actual AI. Most people just take the term to mean something more than it actually does. A simple chess engine is an AI as well.

    • Obelix@feddit.org · 3 points · 22 hours ago

      Yeah, LLMs could really help, and other tools without AI are helpful too. The problem with all those companies is that they don’t want to do moderation for the public good at all. Reddit could kill a lot of fake news on its platform, prevent reposts of revenge porn, or kick idiots just by implementing a few rules. They don’t want to.

  • Baggie@lemmy.zip · 8 up, 1 down · 23 hours ago

    Great idea, dipshit - who’s gonna foot the power bill, you?

    • shades@lemmy.dbzer0.com · 3 up, 41 down · 2 days ago

      <letsUsersPreventFreedomOfScreechFromHittingTheirOwnFeed>

      “Cool. I think he should piss on the 3rd rail.”

      ¿What the hell? It’s right there in the title: letting users OPT INTO IT - as in, not forced on everyone at the company’s discretion, but allowing each user to set their own tolerance levels. As long as it can be set to 0, why is this a bad thing?

      Forcing moderation ONTO everyone is what I vehemently oppose.

      ¿Why the fuck would anyone want to prevent an AI from filtering out nazi/csam from their own feeds?

      He’s thought of a clever way to offload the responsibility/burden from the platform/service of allowing speech on it. It lets people who don’t want to see triggering content avoid it, without involving some third party who gets PTSD from filtering out all the vile shit humanity has to offer.
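
      The opt-in knob described above could be as simple as a per-user cutoff, with 0 meaning the filter is off entirely. This is just a sketch - the severity scores are hypothetical and would come from some classifier in practice.

```python
# Client-side, opt-in filtering: each user picks their own sensitivity.
# sensitivity 0 disables filtering entirely; higher values hide more.
# Severity scores (0..1) are hypothetical classifier outputs, not real data.

def visible_feed(posts: list[tuple[str, float]], sensitivity: float) -> list[str]:
    """posts is a list of (text, severity); return what this user sees."""
    if sensitivity <= 0:
        return [text for text, _ in posts]  # user opted out of filtering
    cutoff = 1.0 - sensitivity              # higher sensitivity, stricter cutoff
    return [text for text, score in posts if score <= cutoff]

feed = [("cat pictures", 0.05), ("mild argument", 0.4), ("vile stuff", 0.95)]
```

      Nothing is removed from the platform in this model - each client just decides what to show, which is the distinction the replies below argue about.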

      • Fredthefishlord@lemmy.blahaj.zone · 39 up, 2 down · 2 days ago

        … that’s not moderation then, dipshit. Blocking things from your personal feed is what we call a FILTER. It’s not moderation.

      • rockSlayer@lemmy.blahaj.zone · 11 up, 2 down · 2 days ago

        Except the AI will still need to be trained on data, which requires the very labor you believe will be eliminated.

  • Xanza@lemm.ee · 73 points · 2 days ago

    Why don’t we get AI to moderate Alexis? He stopped being relevant 10 years ago.

  • regrub@lemmy.world · 46 points · 2 days ago

    Only if the company using the AI is held accountable for what it does and doesn’t moderate.

    • Ledericas@lemm.ee · 2 points · 1 day ago

      Their aggressive autoban is getting everyone, regardless of whether you actually ban-evaded or not, though not in large numbers.