• LifeInMultipleChoice@lemmy.world · 22 hours ago

    Wasn’t there an article posted yesterday about a group trying to build a biological computer out of living cells, due to how efficiently they run on so little power? (They’re far from close; they basically took skin cells, ionized them, and had no idea yet how they were going to keep them alive long term.)

    • fonix232@fedia.io · 21 hours ago

      Even that won’t be anywhere close to the efficiency of neurons.

      And actual neurons aren’t comparable to transistors at all. For starters, their behaviour is completely different: a neuron is closer to a complex logic gate built from many transistors, it’s multi-pathway, and it doesn’t behave in the binary way a transistor does.
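      To make the contrast concrete, here’s a toy sketch (my own illustration, not anything from the article): a binary gate maps 0/1 inputs to a 0/1 output, while even the crudest artificial neuron takes many weighted inputs and produces a graded, continuous activation.

```python
import math

def and_gate(a: int, b: int) -> int:
    # Transistor-style logic: strictly binary in, binary out.
    return a & b

def neuron(inputs, weights, bias):
    # Simplified artificial neuron: a weighted sum pushed through a
    # sigmoid, giving a continuous value between 0 and 1 rather than
    # a hard 0/1 switch.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

print(and_gate(1, 1))                          # binary output
print(neuron([1.0, 1.0], [0.6, 0.6], -1.0))    # graded output in (0, 1)
```

      And real neurons are messier still (spike timing, chemistry, plasticity), which is part of why simulating them digitally is so expensive.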

      Which is why AI technology needs so much power. We’re basically virtualising a badly understood version of our own brains. Think of it like, say, PlayStation 4 emulation: it kinda works, but most details are unknown and therefore don’t work well, or at best get a “close enough” approximation of the behaviour, at the cost of more resource usage. And virtualisation will always be costly.

      Or, I guess, a better example would be one of the many currently trending translation layers (e.g. SteamOS’s Proton, macOS’s Rosetta, whatever Microsoft is cooking up for Windows for the same purpose, and kinda FEX and Box86/Box64) versus full virtual machines. The latter is the better approximation of how AI relates to our brains (and by AI here I mean neural-network-based AI applications, not just LLMs).

      • applebusch@lemmy.blahaj.zone · 10 hours ago

        There’s already been some work on direct neural network hardware to bypass the whole virtualization issue. Some people are working on basically an analog, FPGA-style, silicon-based neural network component you can just put on a SOM and integrate into existing PCB electronics. Rather than being built from traditional logic gates, these parts implement the neural network functions directly in analog, making them much faster and more efficient. I forget what the technology is called, but things like that seem like the future to me.
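        The usual trick behind analog in-memory compute (e.g. memristor/crossbar arrays, which may or may not be what you’re thinking of) is that physics does the math for you: store weights as conductances, apply inputs as voltages, and Kirchhoff’s current law sums each column’s currents, so a matrix-vector product happens essentially for free. A toy simulation of that idea, purely illustrative:

```python
# Toy model of an analog crossbar array. Weights live as conductances G
# (rows = inputs, columns = outputs), inputs are applied as row voltages
# V, and each column's output current is the sum of G[r][c] * V[r] over
# all rows -- Ohm's law per cell, Kirchhoff's law per column.
G = [[0.2, 0.8],
     [0.5, 0.1],
     [0.9, 0.4]]          # 3 input rows x 2 output columns
V = [1.0, 0.5, 0.2]       # input voltages on the rows

# Column currents = the matrix-vector product G^T V, computed "in place"
# by the array itself in real hardware.
I = [sum(G[r][c] * V[r] for r in range(len(V))) for c in range(len(G[0]))]
print(I)
```

        In silicon this multiply-accumulate costs roughly zero extra steps, which is where the speed and efficiency claims come from; the hard parts are precision, noise, and writing the weights.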

        • fonix232@fedia.io · 8 hours ago

          I’m very much aware of FPGA-style attempts, but I do feel the need to point out that FPGAs (and FPGA-style computing) are even more constrained by hardware resources than emulation is.

          For example, the current mainstream emulation FPGA, the DE10-Nano, has up to 110k LEs/LUTs, and that gets you just barely passable PS1 emulation (primarily it’s great for GBA emulation and for mid-to-late-80s and early-90s console hardware). In fact it’s not even as performant as GBA emulation on ARM: it uses more power, costs more, and the only benefit is true-to-original-hardware execution (which software emulation doesn’t always achieve).

          Simply put, while FPGAs provide versatility, they’re also much less performant than similarly priced SoCs running software emulation of the specific architecture.