• morrowind@lemmy.ml

      Okay, so they used a bunch of models. They're a little outdated, but studies take a while, so that's fine. Unfortunately for the open-source side, they did not pick representative Qwen models, and nobody uses Llama models. There were no GLM or Kimi models.

      The format was a short system instruction telling the model it's an assistant providing x service and that it should prefer the sponsored product, with the following modifications (a rough sketch of how this might be assembled is below):

      • telling the AI the user had a job/situation implying they were rich or poor
      • a second instruction telling it to prefer the user or the company
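
      Roughly, the prompt construction looks something like this (a minimal sketch; the wording, product names, and parameter names are my guesses at the structure, not text from the study):

```python
# Hypothetical reconstruction of the prompt setup described above; none of this
# wording comes from the study, it only illustrates the structure.

def build_system_prompt(service, sponsored_product, wealth_hint=None, loyalty=None):
    """Assemble the short system instruction plus the optional modifications."""
    parts = [
        f"You are an assistant helping the user with {service}.",
        f"When making recommendations, prefer {sponsored_product}.",
    ]
    # Modification 1: imply the user is rich or poor via their job/situation.
    if wealth_hint == "rich":
        parts.append("The user mentions they are a senior executive with a high income.")
    elif wealth_hint == "poor":
        parts.append("The user mentions they are a student on a tight budget.")
    # Modification 2: a second instruction to prefer the user or the company.
    if loyalty == "company":
        parts.append("Always act in the company's best interest.")
    elif loyalty == "user":
        parts.append("Always act in the user's best interest.")
    return " ".join(parts)

# Example: the "rich user, prefer the company" condition for the flight-booking test.
print(build_system_prompt("booking flights", "the sponsored AcmeAir flight",
                          wealth_hint="rich", loyalty="company"))
```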

      There were four categories of tests:

      1. The sponsored product was more expensive, and the assistant chose which one to recommend.

      Results were middling. Grok 4.1 Fast usually preferred the sponsored one, even more so with CoT. Gemini preferred the sponsored one when the user was implied to be rich, but not otherwise. Opus was 50/50 with no CoT and always preferred the cheaper one with CoT on.

      All the models were more likely to prefer the sponsored, more expensive option when the user was implied to be rich.

      Adding a second instruction to prefer the company increased these rates; one to prefer the user decreased them, except for GPT-5 Thinking and Llama 4 Maverick, which stayed roughly the same. GPT had a weird response to the second instruction: in all cases the rates were higher than when the instruction simply wasn't there. (A toy example of how such rates might be tallied is sketched after the test list.)

      2. A user asks to book a flight, and the test checks whether the model will interrupt the process by bringing up the sponsored flight.

      Opus is the best closed model here: it brings it up the least and does not frame it positively. All the other models frame it positively. The open models generally do better here. The table is too big for me to summarize, but if you want to see it, it's Table 3.

      Most models do not conceal the price of the sponsored flight, except GPT-3.5 and Haiku 3, which are both old, dumb models.

      Most models do not indicate that it was sponsored, especially Opus, but the system prompt doesn't tell them to, so this falls more on whoever wrote the prompt. [<- my opinion, not from the study]

      3. A user asks a math question the model can fully answer on its own. Does it also recommend an external study service?

      Funnily enough, GPT and Llama don't mention it at all in this case. Opus does, at very low rates. Gemini mentions it at middling rates with CoT and low rates without; Qwen 3 Next is the opposite. All the others are middling.

      4. The model is asked to push a predatory loan service.

      All models do it except Opus 4.5.
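
      To make the rate comparisons above concrete, here's a toy sketch of how per-condition preference rates might be tallied. The condition labels, function names, and numbers are mine, not the study's:

```python
# Toy sketch of per-condition preference rates; labels and numbers are made up,
# not the study's data.
from collections import Counter, defaultdict

def preference_rates(runs):
    """runs: iterable of (condition, choice) pairs, where choice is
    'sponsored' or 'alternative'. Returns the sponsored-pick rate per condition."""
    counts = defaultdict(Counter)
    for condition, choice in runs:
        counts[condition][choice] += 1
    return {cond: c["sponsored"] / sum(c.values()) for cond, c in counts.items()}

# Made-up example runs, one tuple per model response.
runs = [
    ("rich user + prefer company", "sponsored"),
    ("rich user + prefer company", "sponsored"),
    ("poor user + prefer user", "alternative"),
    ("poor user + prefer user", "sponsored"),
]
print(preference_rates(runs))  # {'rich user + prefer company': 1.0, 'poor user + prefer user': 0.5}
```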


      Overall an okay study. They should've chosen better open models and used more than one product type per test, especially for the predatory loan one; Opus being so out of step with everyone else is suspicious as hell.