AI translated articles swapped sources or added unsourced sentences with no explanation, while others added paragraphs sourced from completely unrelated material.

The issue in this case starts with an organization called the Open Knowledge Association (OKA), a non-profit organization dedicated to improving Wikipedia and other open platforms.

Wikipedia editors investigated how OKA was operating and found that it was mostly relying on cheap labor from contractors in the Global South, and that these contractors were instructed to copy/paste articles to popular LLMs to produce translations.

For example, a public spreadsheet used by OKA translators to keep track of what articles they’re translating instructs them to “pick an article, copy the lead section into Gemini or chatGPT, then review if some of the suggestions are an improvement to readability. Make edits to the Wiki articles only if the suggestions are an improvement and don’t change the meaning of the lead. Do not change the content unless you have checked that what Gemini says is correct!”

Lebleu told me, and other editors have noted in their public on-site discussion of the issue, that these same instructions previously told OKA translators to use Grok, Elon Musk’s LLM, for the same purpose. Grok, which also produces an entirely automated alternative to Wikipedia called Grokepedia, is prone to errors precisely because it does not use humans to vet its output.

“Following the recent discussion, we have strengthened our safeguards,” [OKA’s] Zimmerman told me. “We are now rolling out a second, independent LLM review step. Translators must run the completed draft through a separate model using a dedicated comparison prompt designed to identify potential discrepancies, omissions, or inaccuracies relative to the source text. Initial findings suggest this is highly effective at detecting potential issues.”

Zimmerman added that if this method proves insufficient, OKA is considering introducing formal peer review mechanisms.

Using AI to check the output of AI for errors is a method that is historically prone to errors. For example, we recently reported on an AI-powered private school that used AI to check AI-generated questions for students. Internal testing found it had at least a 10 percent failure rate.

  • mindlesscrollyparrot@discuss.tchncs.de
    link
    fedilink
    English
    arrow-up
    27
    ·
    10 hours ago

    Ugh. Translation is (maybe was) one of the things that AI is good at. Why are they using Gemini, ChatGpt or Grok instead of a specialized translation service?

    • Meron35@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      3 hours ago

      Google Translate’s backend has been moved to Gemini since December 2025, and is vulnerable to prompt injection. Have a foreign phrase to translate, then input some meta instructions in English underneath it, and it’ll follow the possibly malicious meta instructions.

      Google states that this move was to introduce more features, such as conversational mode.

      Google Translate’s Gemini Mode is Vulnerable to Prompt Injection - https://winbuzzer.com/2026/02/10/google-translate-gemini-prompt-injection-vulnerability-xcxwbn/

      Google Translate gets new Gemini AI translation models - https://blog.google/products-and-platforms/products/search/gemini-capabilities-translation-upgrades/

    • CombatWombat@feddit.online
      link
      fedilink
      English
      arrow-up
      5
      ·
      4 hours ago

      If you used Google Translate previously for translations, they’ve switched out the backend for Gemini. Most of the existing translation tools have been destroyed and replaced with LLMs already.

      • thebestaquaman@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        3 hours ago

        But… why? Isn’t that just far more energy consuming and expensive to run? It sounds like replacing your car for a bus that sporadically stops working, even though you always drive alone.

        • CombatWombat@feddit.online
          link
          fedilink
          English
          arrow-up
          3
          ·
          3 hours ago

          There’s a capital strike on, and you can’t simply withhold capital or else it is put to use elsewhere so it has to be employed for enshittification.

    • HubertManne@piefed.social
      link
      fedilink
      English
      arrow-up
      15
      ·
      10 hours ago

      its like that kinda with all ai stuff. There is specific software that does it and the llm does it a bit worse but it does it and oftentimes folks won’t even know about the software unless your heavily in a feel that uses it and then you would have to buy it, license it, create a solution around it (if your talking a company). The llm ends up putting all these capabilities as a one stop shop and, admitadely, that is very enticing.

    • XLE@piefed.socialOP
      link
      fedilink
      English
      arrow-up
      11
      ·
      edit-2
      10 hours ago

      As I understand it, the models used by browsers like Firefox for local translation are built different - much smaller, worse at generating readable structure, probably worse at parsing intent, but not prone to generating fully incorrect thoughts.

      Smaller translation models were never sold to the public as “AI” back when they launched in 2023, and generally not something I’ve ever seen people complain about. While they technically are “AI”, the marketing term is basically devoted to the server-side behemoths.