• deathbird@mander.xyz · 1 day ago

    “The design feature changes the state is seeking include “enacting effective age verification, removing predators from the platform, and protecting minors from encrypted communications that shield bad actors”.”

    Oh fuck right off.

    I’m sorry but this is a bad “think of the children” decision. There are limits to what Meta or any platform at that scale can do about bad actors without structural changes.

    What might actually help: only show people content from groups and people that they follow, preferably in chronological order, rather than suggesting new groups and pages algorithmically all the time and thereby increasing the likelihood of children interacting with strangers on the Internet.
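
    A rough sketch of what that feed rule could look like, assuming a hypothetical post/follow data model (every name here is made up for illustration):

    ```python
    from dataclasses import dataclass
    from datetime import datetime

    # Hypothetical data model; a real platform's would differ.
    @dataclass
    class Post:
        author_id: str
        group_id: str | None
        created_at: datetime
        body: str

    def build_feed(followed_users: set[str], joined_groups: set[str],
                   posts: list[Post]) -> list[Post]:
        """Follow-only feed: no algorithmic suggestions, newest first."""
        visible = [
            p for p in posts
            if p.author_id in followed_users
            or (p.group_id is not None and p.group_id in joined_groups)
        ]
        # Plain reverse-chronological order instead of engagement ranking.
        return sorted(visible, key=lambda p: p.created_at, reverse=True)
    ```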

    And improve parental controls for children’s accounts. As far as I know, there’s nothing currently giving a “parent” account high-level control over a “child” account, but I’m happy to be corrected if I’m wrong.

    But also: require interoperability with other platforms and a standardized profile data export, so people can leave Facebook but stay in touch with the people who still use it.
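
    For the export piece, even a simple standardized JSON document would go a long way. A minimal sketch; the schema name and fields here are invented for illustration, not an existing standard:

    ```python
    import json

    # Hypothetical portable-profile schema; a real one would be defined
    # by regulators or an industry body.
    def export_profile(display_name: str, contacts: list[str],
                       groups: list[str]) -> str:
        return json.dumps({
            "schema": "portable-profile/0.1",  # assumed version tag
            "display_name": display_name,
            "contacts": contacts,  # handles the user could re-follow elsewhere
            "groups": groups,
        }, indent=2)

    print(export_profile("Jane", ["bob@example.com"], ["Soccer Team"]))
    ```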

    • lmmarsano@group.lt · 21 hours ago

      And improve parental controls for children’s accounts. As far as I know, there’s nothing currently giving a “parent” account high-level control over a “child” account, but I’m happy to be corrected if I’m wrong.

      Parental controls already exist in every major OS, they suffice to restrict & monitor social media, and they go unused.

      A better solution might be for laws to provide parents resources & incentives to parent children’s online activity (including training to use the resources they already have) & to provide children education in online safety & literacy. Decades ago, federal courts, citing commission findings & studies, recommended these alternatives as superior in effectiveness, in meeting the government’s duty to minimize the impact on civil liberties, in the allocation of law enforcement resources, etc. In the permanent injunction against COPA, the judge wrote:

      Moreover, defendant contends that: (1) filters currently exist and, thus, cannot be considered a less restrictive alternative to COPA; and that (2) the private use of filters cannot be deemed a less restrictive alternative to COPA because it is not an alternative which the government can implement. These contentions have been squarely rejected by the Supreme Court in ruling upon the efficacy of the 1999 preliminary injunction by this court. The Supreme Court wrote:

      Congress undoubtedly may act to encourage the use of filters. We have held that Congress can give strong incentives to schools and libraries to use them. It could also take steps to promote their development by industry, and their use by parents. It is incorrect, for that reason, to say that filters are part of the current regulatory status quo. The need for parental cooperation does not automatically disqualify a proposed less restrictive alternative. In enacting COPA, Congress said its goal was to prevent the “widespread availability of the Internet” from providing “opportunities for minors to access materials through the World Wide Web in a manner that can frustrate parental supervision or control.” COPA presumes that parents lack the ability, not the will, to monitor what their children see. By enacting programs to promote use of filtering software, Congress could give parents that ability without subjecting protected speech to severe penalties.

      I also agree and conclude that in conjunction with the private use of filters, the government may promote and support their use by, for example, providing further education and training programs to parents and caregivers, giving incentives or mandates to ISP’s to provide filters to their subscribers, directing the developers of computer operating systems to provide filters and parental controls as a part of their products (Microsoft’s new operating system, Vista, now provides such features, see Finding of Fact 91), subsidizing the purchase of filters for those who cannot afford them, and by performing further studies and recommendations regarding filters.

      Adult supervision, child education on online safety & literacy, parental controls & filters are more effective at less expense to fundamental rights. Governments know this & conveniently forget it.

      • deathbird@mander.xyz · 7 hours ago

        OS-level parental controls do not give a parent control over a child’s use of a social media platform, to the best of my knowledge. For example, how do you prevent a child from friending someone you don’t know on Facebook while still letting them join a Facebook group for their soccer team? That kind of fine-grained control needs to happen at the level of the platform. Universally blocking DMs to your child’s account from accounts they aren’t friends with needs to happen at the level of the platform. Etc.
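
        The DM rule, at least, is only a few lines if the platform chooses to implement it. A sketch with a made-up account model:

        ```python
        from dataclasses import dataclass, field

        # Made-up account model for illustration.
        @dataclass
        class Account:
            account_id: str
            is_child: bool = False
            friends: set[str] = field(default_factory=set)

        def can_dm(sender: Account, recipient: Account) -> bool:
            """Platform-side gate: child accounts only accept DMs from friends."""
            if recipient.is_child:
                return sender.account_id in recipient.friends
            return True
        ```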

        Universally preventing children from joining social media is also an option, as is giving parents the tools to block their children individually from accessing known social media sites on hardware under the parent’s control, but neither of these is sufficient or without negative consequences. Blocking children from social media by law requires age verification to have any effect. Blocking access to certain websites at the hardware level encourages the child to use hardware outside their parents’ control, or else excludes them from a part of social life.

        Platforms need parental control tools as well, not just operating systems, and those tools need to be sufficient to allow a parent real control over what their child can access. I don’t think that will exist without legislation, because it is contrary to the platforms’ financial interests.

        • lmmarsano@group.lt · 1 hour ago

          OS-level parental controls do not give a parent control over a child’s use of a social media platform

          A quick web search indicates they can filter/block content, restrict apps, report activity. Additional software can monitor communication (including social media) and alert guardians.

          However, the legal opinion wasn’t that parental control software is the best or only better solution[1], but that alternatives (such as non-punitive laws promoting the use of client-side parental controls) exist that are more effective & less adverse than punitive laws, which are limited in their enforceability by jurisdiction & which unnecessarily burden & deter (thus harm) the free exercise of fundamental liberties.[2] Client-side parental controls affect only their users, not everyone else. Unlike regulations on site operators, they work on content originating outside a law’s jurisdiction. Even at the time of that federal court decision, parental controls could screen dynamic content (eg, live chats) over any protocol.

          By far, the most appropriate answer is responsible adult involvement & supervision and the education of children to address motivation, coping, & responsible behavior.

          The internet is global. A key problem with any coercive law is that its jurisdiction isn’t: just as 4chan.org can tell the UK’s Ofcom to go fuck itself, site operators beyond a law’s jurisdiction can tell its enforcers the same. Another issue is that the compliance burden falls harder on entrants than on the dominant companies in the industry, which have more resources to afford compliance, thus deterring competition. Do we really want to make it harder to displace our current social media companies with alternatives?

          Communication alone rarely poses immediate danger: there’s usually a number of steps between the communication & actual harm where anyone can intervene. We can block or ignore unwanted communication & choose the information we disclose. Responsible people can guide their children on safety & control their access to the devices they give them.

          A while ago, when my uncle struck his kid for making an unauthorized payment through the kid’s tablet, I scolded him for creating the situation where the kid could do that instead of setting up a child account with parental controls. When I asked him how child abuse is more responsible than reading some shit designed for him to understand and pressing a few buttons to use the system exactly as designed to prevent this shit from happening, he quickly got the point and did that in about an hour. This shit ain’t hard.

          Better solutions already exist, they’re effective, & governments already have solid recommendations for promoting them that would work. Governments have largely chosen not to.


          1. The cited recommendations I mentioned elsewhere went beyond parental control software into areas such as the promotion of standards & the development of better standards in the industry. ↩︎

          2. Rather than accept any law, government has a duty to minimize compromises of fundamental rights in meeting its “compelling interests”. When government fails to prove that a law is the least adverse to fundamental liberties among alternatives that are at least as effective, that law must be rejected. ↩︎

    • Dr. Moose@lemmy.world · 22 hours ago

      What actually might help: hold the people who design these tools criminally liable. Everyone knows what they’re doing, but you can’t really say no to your employer when the answer is “don’t worry, you’re not liable”, so everyone keeps building the Torment Nexus.

      • deliriousdreams@fedia.io · 17 hours ago

        Are you suggesting that we should be able to criminally prosecute people who build end-to-end encryption software and tools? Or algorithms that find people you may know? Because those seem to be key to the Meta lawsuit as far as they are involved. That, and the fact that Meta deliberately misled the public about the safety of the website for kids. Because social media as it exists today isn’t really safe for children, and at best the people who should be held accountable for that are the executives who made the decision to lie.

        But your average programmer isn’t designing tools for the purpose of making kids less safe. They aren’t designing tools for the purpose of being addictive. And they aren’t designing tools for predators. They happen to have designed tools used by predators because of flaws in the design, and because their executives found those flaws advantageous to their bottom line, so they played them up. Leaned in, if you will.

        It was literally part of the 2021 leak that they had discovered their algorithm had certain effects, and the C-suite went about making sure they could use that for monetary gain, to keep people on the site and scrolling. Not just young users, but users of all ages.

        The main thing is that it’s really easy to social-engineer on a social media website where people are encouraged to give out all kinds of information that can be used against them in social engineering attacks. That, combined with the addiction fostered there and the encrypted chat methods owned by Meta and used by much of the world en masse, is what created this situation.

        • Dr. Moose@lemmy.world · 6 hours ago

          There’s a difference between making an encryption tool and hiring top psychologists to design abusive systems.

          • deathbird@mander.xyz · 7 hours ago

            The “these tools” that the government is targeting include end-to-end encryption, and I am of course of the position that end-to-end encryption is a good thing.

            When we talk about “abusive systems” we need to be very clear about what kinds of technology or system behaviors we are discussing, or else the government’s default “solution” will usually be “well, it can’t be only the platform that spies on you.”

          • deliriousdreams@fedia.io · 11 hours ago

            Have you read the whistleblower’s book? Or even just the excerpts from it that have been floating around for ages?

            I’m curious, because it’s clear to me that the C-suite at Meta and companies like it absolutely do employ some really shitty people, but at the same time, that doesn’t mean you can paint the janitor with the same brush as the Lean In woman who made her personal assistant buy lingerie and model it in her home for her. Or tried to force another woman to cuddle with her while she was pregnant.

            So what I’m saying is, I don’t agree with the sentiment that everyone who works there is a power-mad executive intent on algorithmic domination of the internet; for at least some of the programmers in question, a job is a job.

            I will say it’s different if they know what’s going on and have a real ability to decide to fight against such a thing.

            But I question where your line of complicity starts and ends here.

            I guess I’m also pointing out that part of what makes Meta properties particularly attractive to pedophiles is the same thing that makes them attractive to other online criminals: the encryption.

    • RecallMadness@lemmy.nz · 22 hours ago

      Unfortunately, you can’t codify how platforms work specifically into law.

      But you could possibly make companies explicitly liable for promoting “detrimental” content, then define “promoting” as something like “surfacing content to a user beyond the reach of the user’s immediate network, i.e. algorithmic suggestions or advertising”.
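
      That definition is mechanical enough to check in code. A sketch, with made-up viewer/content models:

      ```python
      from dataclasses import dataclass

      # Made-up models for illustration.
      @dataclass
      class Viewer:
          follows: set[str]
          groups: set[str]

      @dataclass
      class Content:
          author_id: str
          group_id: str | None

      def is_promoted(viewer: Viewer, content: Content) -> bool:
          """'Promoting' per the proposed definition: surfacing content
          beyond the reach of the viewer's immediate network, i.e.
          algorithmic suggestions or advertising."""
          in_network = (content.author_id in viewer.follows
                        or content.group_id in viewer.groups)
          return not in_network
      ```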

      • deathbird@mander.xyz · 7 hours ago

        You can control a lot about how platforms specifically work with legislation. After decades of seeing how they function, we are more than capable of accurately identifying which functionalities are deleterious to the well-being of people generally or children specifically, and which should be under the control of children’s parents; we therefore have a good idea of what we should legally require platforms to disable or otherwise put under parental control.

        Again I propose as an example: mark an account as a child’s account with a designated parent account, and require that any attempt to add that child account as a friend or connection on a social media network be approved by the parent account.
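
        A sketch of that rule (hypothetical names throughout):

        ```python
        from dataclasses import dataclass, field

        # Hypothetical account model for illustration.
        @dataclass
        class ParentAccount:
            approval_queue: list["FriendRequest"] = field(default_factory=list)

        @dataclass
        class ChildAccount:
            account_id: str
            parent: ParentAccount

        @dataclass
        class FriendRequest:
            requester_id: str
            child_id: str
            status: str = "pending_parent_approval"

        def request_friend(requester_id: str, child: ChildAccount) -> FriendRequest:
            """Hold any friend/connection request to a child account until
            the designated parent account approves or denies it."""
            req = FriendRequest(requester_id, child.account_id)
            child.parent.approval_queue.append(req)
            return req
        ```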

    • ExLisper@lemmy.curiana.net · 22 hours ago

      What might actually help: only show people content from groups and people that they follow, preferably in chronological order, rather than suggesting new groups and pages algorithmically all the time and thereby increasing the likelihood of children interacting with strangers on the Internet.

      You would simply have big groups like “I ❤️ New Mexico” where people comment on the same posts and interact. If you limited all content, including comments and likes, to users someone personally follows, without the ability to discover other users, you would basically turn Facebook into WhatsApp. It would definitely solve the issue, but it would also make the platform look empty and kill it. Which would not necessarily be bad, but sadly killing Facebook is too radical for anyone to support.

      • deathbird@mander.xyz · 7 hours ago

        That’s literally how Facebook used to function, and a big part of why people joined it. People wanted to interact with people they knew.

        And this would not prevent people from making new connections.

        If you wanted to meet, I don’t know, hot singles in your area, you would actually have to talk to people who knew about the singles group on Facebook and have them share the link to the group with you. Or find it in the search bar if it’s public. You know, seek it out.

        Keep in mind that Facebook does not show you groups or people because it cares about the connections that you make. It just wants you to keep clicking. Your own desire to connect is more than sufficient to drive you to connection…with a search bar.

        • ExLisper@lemmy.curiana.net · 6 hours ago

          That was then; now is not 20 years ago. You can’t just take a platform like Facebook 20 years into the past and expect the business model to still function. It’s like saying that because people used to buy newspapers, banning the NYT from having a website is not a problem; they’ll just sell newspapers again.