
We show that large language models can be used to perform at-scale deanonymization. With full Internet access, our agent can re-identify Hacker News users and Anthropic Interviewer participants at high precision, given pseudonymous online profiles and conversations alone, matching what would take hours for a dedicated human investigator. We then design attacks for the closed-world setting. Given two databases of pseudonymous individuals, each containing unstructured text written by or about that individual, we implement a scalable attack pipeline that uses LLMs to: (1) extract identity-relevant features, (2) search for candidate matches via semantic embeddings, and (3) reason over top candidates to verify matches and reduce false positives. Compared to prior deanonymization work (e.g., on the Netflix prize) that required structured data or manual feature engineering, our approach works directly on raw user content across arbitrary platforms. We construct three datasets with known ground-truth data to evaluate our attacks. The first links Hacker News to LinkedIn profiles, using cross-platform references that appear in the profiles. Our second dataset matches users across Reddit movie discussion communities; and the third splits a single user’s Reddit history in time to create two pseudonymous profiles to be matched. In each setting, LLM-based methods substantially outperform classical baselines, achieving up to 68% recall at 90% precision compared to near 0% for the best non-LLM method. Our results show that the practical obscurity protecting pseudonymous users online no longer holds and that threat models for online privacy need to be reconsidered.
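The three-stage pipeline the abstract describes can be sketched roughly as follows. This is a toy stand-in, not the paper's implementation: the real attack uses an LLM for feature extraction and a learned semantic embedding model, whereas here a bag-of-words vector and cosine similarity fill both roles so the example stays self-contained, and all names and data are made up.

```python
import math
from collections import Counter

def extract_features(text):
    # Stage 1 stand-in: the paper uses an LLM to pull out identity-relevant
    # features; here we just lowercase and tokenize.
    return [w.strip(".,!?") for w in text.lower().split()]

def embed(tokens):
    # Stage 2 stand-in: a sparse term-frequency vector instead of a
    # semantic embedding.
    return Counter(tokens)

def cosine(a, b):
    dot = sum(n * b[t] for t, n in a.items() if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def match(db_a, db_b, threshold=0.2):
    # Stage 3 stand-in: keep only the best candidate above a similarity
    # threshold; the paper instead has an LLM reason over the top
    # candidates to verify matches and cut false positives.
    emb_b = {uid: embed(extract_features(t)) for uid, t in db_b.items()}
    matches = {}
    for uid, text in db_a.items():
        ea = embed(extract_features(text))
        best = max(emb_b, key=lambda u: cosine(ea, emb_b[u]))
        if cosine(ea, emb_b[best]) >= threshold:
            matches[uid] = best
    return matches

# Hypothetical toy data: one pseudonymous profile per database.
db_a = {"hn_user1": "I maintain a Rust compiler plugin and live in Boston"}
db_b = {"li_a": "Boston engineer working on Rust compiler tooling",
        "li_b": "Paris chef who writes about pastry"}
print(match(db_a, db_b))  # → {'hn_user1': 'li_a'}
```

The threshold plays the role of the verification stage: it trades recall for precision, which is why the paper reports recall at a fixed 90% precision.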

  • doug@lemmy.today · 21 hours ago

    I think it was a Reddit scraper years ago that taught me that I should probably lie more often on the internet about my work, friends, family details, etc.

    Just like, little lies that don’t really matter in the comment, but would misdirect an AI or investigator into things that aren’t true.

    It’s just so much woooooork to think about this shit. And to come up with different screen names everywhere? And to like, sub to a city I don’t live in and comment there about shit I know nothing about? Exhausting.

    Thankfully my brothers and three uncles are here to support me. And my alligator.

    • stickly@lemmy.world · 42 minutes ago

      The solution is simple: just launder each comment through an LLM to fudge the style and details a bit.

      Edit, tried it for fun:

      lowkey just run every comment through an llm and let it switch up the words and details a bit so it dosnt sound like you wrote it

    • Insekticus@aussie.zone · 20 hours ago

      Yeah exactly, like if you're 25, say you're 27. Then in another post, 24. You're still around that age, but the exact number is muddied.

      You can also use Americanised spelling in some sentences, or if you're American, use British English and become un-Americanised. Say you're a half-Brit, half-American dual citizen even though you're actually from South Africa or something.

      • MountingSuspicion@reddthat.com · 17 hours ago

        I feel like that may be worse. Kind of like how having certain security measures while browsing the web almost makes it easier to fingerprint you. It'll get a good idea of your age, and that'll be enough, rather than sticking to a specific lie. Just always be 3 years older, with one additional sibling or a sibling of the opposite sex. If the sex of your sibling is relevant, just describe them as a close family friend or close cousin in that instance. I can't say for sure, but if I had to guess, a static lie is more obfuscation than a variable one. Though even posting in this thread is bad opsec.