• moonlight@fedia.io · 1 day ago

      Yes, you can run ollama via termux.

      Gemma 3 4B is probably a good model to use, or the 1B version if 4B won’t run or is too slow.
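      Once Ollama is running in Termux and you’ve pulled a model (for example with ollama pull gemma3:4b), you can also talk to it from a script. Here is a minimal sketch, assuming Ollama’s default local API on port 11434 and gemma3:4b as the model tag:

      ```
      # Query a locally running Ollama instance via its HTTP API.
      import json
      import urllib.request

      OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

      payload = {
          "model": "gemma3:4b",  # swap for gemma3:1b if the 4B model is too slow
          "prompt": "Summarise why sleep matters in two sentences.",
          "stream": False,       # return one JSON object instead of a token stream
      }

      req = urllib.request.Request(
          OLLAMA_URL,
          data=json.dumps(payload).encode("utf-8"),
          headers={"Content-Type": "application/json"},
      )

      with urllib.request.urlopen(req) as resp:
          print(json.loads(resp.read())["response"])
      ```

      Setting "stream" to False keeps the sketch simple; a chat front end would stream the tokens instead.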

      I wouldn’t rely on it for therapy though. Maybe it could be useful as a tool, but LLMs are not people, and they’re not even really intelligent, which I think is necessary for therapy.

    • Captain_Stupid@lemmy.world · 1 day ago (edited)

      The smallest models I run on my PC take about 6-8 GB of VRAM and would be very slow if I ran them purely on the CPU, so it is unlikely that your phone has enough RAM and enough cores to run a decent LLM smoothly.
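      For a rough sense of where numbers like that come from, here is a back-of-envelope sketch. The parameter count, quantization, and overhead figures are illustrative assumptions, not measurements:

      ```
      # Rough VRAM estimate: weights ~= parameter count * bytes per weight, plus working memory.
      params = 8e9            # e.g. an 8B-parameter model such as Llama 3.1 8B
      bytes_per_weight = 0.5  # ~4-bit quantization, the common case for local inference
      overhead_gb = 1.5       # rough allowance for KV cache and runtime buffers

      weights_gb = params * bytes_per_weight / 1e9
      print(f"~{weights_gb + overhead_gb:.1f} GB needed")  # prints ~5.5 GB for these assumptions
      ```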

      If you still want to use self-hosted AI from your phone, self-host the model on your PC instead:

      • Install Ollama and Open WebUI in Docker containers (guides can be found online)
      • Make sure they use your GPU (some AMD cards require an HSA override flag to work)
      • Make sure the Docker containers are secure (blocking the port from outside your network should be fine as long as you only use the AI model at home)
      • Get yourself an open-weight model (I recommend Llama 3.1 for 8 GB of VRAM, or Phi-4 if you have more VRAM or enough RAM)
      • Type the PC’s IP address and port into the browser on your phone (a quick way to check the connection from another device is sketched below)
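      As a rough sketch of that last step, here is one way to confirm from another device that the PC is reachable. The LAN IP address and the Open WebUI host port are assumptions (11434 is Ollama’s default API port; 3000 is a common Docker port mapping for Open WebUI), so adjust them to your setup:

      ```
      # Check that the self-hosted stack is reachable from elsewhere on the home network.
      import json
      import urllib.request

      PC_IP = "192.168.1.50"   # example LAN address of the PC running Docker
      OLLAMA_PORT = 11434      # Ollama's default API port
      WEBUI_PORT = 3000        # common host port mapping for Open WebUI

      # List the models Ollama has available (GET /api/tags).
      url = f"http://{PC_IP}:{OLLAMA_PORT}/api/tags"
      with urllib.request.urlopen(url, timeout=5) as resp:
          models = [m["name"] for m in json.loads(resp.read())["models"]]
      print("Models on the PC:", models)

      # Open WebUI itself is just a web page; point the phone's browser at:
      print(f"Open WebUI should be at http://{PC_IP}:{WEBUI_PORT}")
      ```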

      You can now use self-hosted AI from your phone whenever you are on your home network.