• ☆ Yσɠƚԋσʂ ☆@lemmy.ml (OP)

    Basically, the idea is to use a symbolic logic engine within a dynamic context created by the LLM. Traditionally, the problem with symbolic AI has been creating the ontologies: you obviously can’t have a comprehensive ontology of the world, because it’s inherently context dependent and there are infinitely many ways to contextualize things. What neurosymbolics does is use LLMs for what they’re good at, which is classifying noisy data from the outside world and building a dynamic context from it. Once that’s done, it’s perfectly possible to use a logic engine to solve problems within that context.
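
    Here’s a minimal sketch of that loop in Python. The llm_extract_facts function is a hypothetical stand-in for a real model call (its reply is hardcoded here so the example runs on its own), and the forward-chaining engine is a toy, but it shows the division of labour: the LLM produces structured facts, and the logic engine derives new ones from them.

    ```python
    # Neurosymbolic sketch: an LLM turns noisy text into structured facts
    # (the dynamic context), then a tiny forward-chaining logic engine
    # reasons over those facts.
    import json

    def llm_extract_facts(text: str) -> list[tuple]:
        """Stand-in for an LLM call that classifies noisy input into
        (predicate, subject, object) triples. Real code would prompt a
        model to emit JSON and parse its reply; hardcoded here so the
        sketch runs without a model."""
        reply = '[["parent", "alice", "bob"], ["parent", "bob", "carol"]]'
        return [tuple(f) for f in json.loads(reply)]

    def forward_chain(facts: set, rules) -> set:
        """Apply rules repeatedly until no new facts are derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for rule in rules:
                for new_fact in rule(derived):
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
        return derived

    def grandparent_rule(facts):
        # parent(X, Y) and parent(Y, Z) -> grandparent(X, Z)
        parents = [(s, o) for (p, s, o) in facts if p == "parent"]
        for (x, y1) in parents:
            for (y2, z) in parents:
                if y1 == y2:
                    yield ("grandparent", x, z)

    facts = set(llm_extract_facts("Alice is Bob's mum; Bob is Carol's dad."))
    context = forward_chain(facts, [grandparent_rule])
    print(sorted(context))  # includes ("grandparent", "alice", "carol")
    ```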

    • Avid Amoeba@lemmy.ca

      So for peasants running Chairman Xi’s LLMs on local GPUs, we could get more out of them by running the largest model that fits and having it generate scripts to do the bulk data processing, rather than having the model process the data itself. Something like the sketch below.
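
      A rough sketch of that workflow, assuming an Ollama-style local endpoint on localhost:11434 (the model name, prompt, and file names are placeholders):

      ```python
      # Ask a local model for a one-off Python script, then run that
      # script over the bulk data instead of feeding the data to the model.
      import json
      import subprocess
      import urllib.request

      PROMPT = (
          "Write a standalone Python script that reads sales.csv, sums the "
          "'amount' column per 'region', and prints the totals as JSON. "
          "Reply with only the code, no commentary."
      )

      req = urllib.request.Request(
          "http://localhost:11434/api/generate",
          data=json.dumps({"model": "qwen2.5-coder", "prompt": PROMPT,
                           "stream": False}).encode(),
          headers={"Content-Type": "application/json"},
      )
      script = json.loads(urllib.request.urlopen(req).read())["response"]

      # Review before running: executing generated code blindly is risky.
      with open("process_sales.py", "w") as f:
          f.write(script)
      subprocess.run(["python", "process_sales.py"], check=True)
      ```

      The model only has to write a few dozen lines once; the script then churns through the data at native speed, so the model’s context window and token throughput stop being the bottleneck.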