Your ML model cache volume is being wiped on restart, so the model gets re-downloaded on the first smart search after the container comes back up. Either point the cache at a path on your own storage (a bind mount), or make sure you aren't deleting the named volume when you restart.

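For context, a named volume like `model-cache` normally survives a restart; it only disappears if it's explicitly removed. A likely culprit (an assumption on my part, depending on how you restart the stack) is the `-v`/`--volumes` flag on `docker compose down`, or a volume prune:

```shell
# Named volumes survive a normal restart of the stack:
docker compose down
docker compose up -d        # model-cache is still there

# But these remove named volumes, forcing a model re-download:
docker compose down -v      # deletes volumes declared in the compose file
docker volume prune         # deletes any volume not attached to a container
```

If your restart script uses either of those, switching to a bind mount (as below) sidesteps the problem entirely.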
In my case I changed this:

  immich-machine-learning:
    ...
    volumes:
      - model-cache:/cache

To this:

  immich-machine-learning:
    ...
    volumes:
      - ./cache:/cache

I no longer have to wait uncomfortably long when I’m trying to show off Smart Search to a friend, or just need a meme pronto.

That’ll be all.

  • iturnedintoanewt@lemmy.world
    17 hours ago

What’s your consideration for choosing this one? I would have thought ViT-B-16-SigLIP2__webli to be slightly more accurate, with faster response times, while also using somewhat less RAM (about 1.4 GB less, I think).

    • Showroom7561@lemmy.ca
      16 hours ago

Seemed to be the most popular, LOL. The smart search job hasn’t been running for long, so I’ll check that other one out and see how it compares. If it looks better, I can easily switch to it.