“The new device is built from arrays of resistive random-access memory (RRAM) cells… The team was able to combine the speed of analog computation with the accuracy normally associated with digital processing. Crucially, the chip was manufactured using a commercial production process, meaning it could potentially be mass-produced.”

Article is based on this paper: https://www.nature.com/articles/s41928-025-01477-0

  • Treczoks@lemmy.world · 2 days ago

    Same here. I'm waiting to see real-life calculations done by such circuits. They won't be able to do, for example, a simple float addition without losing or mangling a bunch of digits.

    But maybe the analog precision is sufficient for AI, which is an imprecise matter from the start.
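A quick sketch of the rounding point above, using ordinary IEEE 754 doubles (no analog hardware involved):

```python
# Ordinary double-precision addition already loses digits:
# 0.1 and 0.2 have no exact binary representation.
a = 0.1 + 0.2
print(a)          # 0.30000000000000004
print(a == 0.3)   # False

# Adding values of very different magnitude mangles the small one entirely:
big = 1e16
print((big + 1.0) - big)   # 0.0 -- the 1.0 falls below the 53-bit significand's resolution
```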

    • floquant@lemmy.dbzer0.com · edited 7 hours ago

      You don’t need to simulate float addition: you can sum two voltages just by connecting two wires, and that’s real-number addition.

      • Treczoks@lemmy.world · 6 hours ago

        I know. My point was that this is horribly imprecise, even if their circuits are exceptionally good.

        There is a reason why all other chips run digital…

        • floquant@lemmy.dbzer0.com · 6 hours ago

          How is it imprecise? It’s the same thing as taking two containers of water and pouring them into a third one: it will contain exactly the sum of the previous two. Or using gears to simulate orbits. Rounding errors are a digital thing.

          Analog has its own set of issues (e.g. noise, losses, repeatability), but precision is not one of them. Arguably, the main reason digital took over is that it’s programmable and good for general computing: Turing completeness means you can do anything if you throw enough memory and time at it, while analog circuits are purpose-built.

      • Limonene@lemmy.world · 1 day ago

        The maximum theoretical precision of an analog computer is limited by the charge of an electron, about 1.6 × 10^-19 coulombs. A normal analog computer runs at a few milliamps, for a second at most. That gives a maximum theoretical precision of about 10^16 distinguishable levels, or roughly 53 bits, the same as the significand of a double-precision (64-bit) float. I believe 80-bit extended floats are also standard on desktop hardware.
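That back-of-envelope estimate checks out; a minimal sketch, assuming 1 mA for 1 s (within the stated "few milliamps, for a second"):

```python
import math

E_CHARGE = 1.602e-19   # electron charge in coulombs
current = 1e-3         # assume 1 mA, per "a few milliamps"
duration = 1.0         # one second

electrons = current * duration / E_CHARGE   # distinguishable charge levels
bits = math.log2(electrons)

print(f"{electrons:.2e} electrons")   # ~6.24e+15
print(f"{bits:.1f} bits")             # ~52.5, close to a double's 53-bit significand
```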

        In practice, just getting a good 24-bit ADC is expensive, and 12-bit or 16-bit ADCs are way more common. Analog computers aren’t solving anything that can’t be done faster by digitally simulating an analog computer.

          • turmacar@lemmy.world · 1 day ago

            Every operation your computer does. From displaying images on a screen to securely connecting to your bank.

            It’s an interesting advancement, and it will be neat if something comes of it down the line. The chances of it yielding a meaningful product in the next decade are close to zero.

          • Limonene@lemmy.world · 1 day ago

            They used to use analog computers to solve differential equations, back when every transistor was expensive (relays and tubes even more so) and clock rates were measured in kilohertz. There’s no practical purpose for them now.

            For number theory and RSA cryptography, you need even more precision: multiple machine integers are combined to get 4096-bit precision.
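The "multiple integers combined" point can be sketched in Python, whose ints are arbitrary-precision (the limb arithmetic happens inside the interpreter's bignum layer; 65537 is just the common RSA public exponent, and the modulus here is a random odd number, not a real key):

```python
import secrets

# A 4096-bit modulus spans 64 machine words of 64 bits each --
# the "multiple integers combined" mentioned above.
n = secrets.randbits(4096) | (1 << 4095) | 1   # force the top bit and oddness
print(n.bit_length())      # 4096
print((4096 + 63) // 64)   # 64 words on a 64-bit CPU

# RSA-style modular exponentiation over that full precision:
c = pow(3, 65537, n)
assert 0 <= c < n
```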

            If you’re asking about the 24-bit ADC, I think that’s usually high-end audio recording.

      • Treczoks@lemmy.world · 1 day ago

        No, it wouldn’t, because you cannot make it reproducible at that scale.

        Normal analog hardware, e.g. audio, tops out at about 16 bits of precision. If you go individually tuned, high-end, and expensive (studio equipment), you get maybe 24 bits. That is eons away from the 52-bit mantissa of a double-precision float.
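For scale, those bit depths can be converted to dynamic range with the usual 20·log10(2^B) rule (a rough figure of merit, not a claim about any specific hardware):

```python
import math

# Dynamic range of B bits in dB: 20 * log10(2^B), i.e. about 6.02 dB per bit.
for bits in (16, 24, 52):
    print(f"{bits:2d} bits -> {20 * math.log10(2 ** bits):6.1f} dB")
# 16 bits ->   96.3 dB
# 24 bits ->  144.5 dB
# 52 bits ->  313.1 dB
```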

        • floquant@lemmy.dbzer0.com · edited 7 hours ago

          Analog audio hardware has no resolution or bit depth. An analog signal (voltage on a wire/trace) is something physical, so its exact value is only limited by the precision of the instrument you’re using to measure it. In a microphone-amp-speaker chain there are no bits, only waves. It’s when you sample it into a digital system that it gains those properties. You have this the wrong way around. Digital audio (sampling of any analog/“real” signal) will always be an approximation of the real thing, by nature, no matter how many bits you throw at it.
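The approximation described above is measurable; a minimal sketch that quantizes a sine to 16 bits and reports the signal-to-quantization-noise ratio (the 997 Hz tone and 48 000-point grid are arbitrary choices for illustration):

```python
import math

BITS = 16
N = 48000
step = 2.0 / (2 ** BITS)   # quantizer step size over the [-1, 1] range

signal_power = noise_power = 0.0
for i in range(N):
    x = math.sin(2 * math.pi * 997 * i / N)   # 997-cycle test tone
    q = round(x / step) * step                # 16-bit mid-tread quantizer
    signal_power += x * x
    noise_power += (x - q) ** 2

snr_db = 10 * math.log10(signal_power / noise_power)
print(f"{snr_db:.1f} dB")   # close to the 6.02*B + 1.76 (~98 dB) rule of thumb
```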

          • Treczoks@lemmy.world · 6 hours ago

            The problem is that both the generation and the sampling are imprecise, so there are losses at every conversion between the digital and analog domains. On top of that come the analog losses in the on-chip circuits themselves.

            All in all, this might be sufficient for some LLMs, but they are worthless junk producers anyway, so imprecision does not matter that much.

            • floquant@lemmy.dbzer0.com · edited 6 hours ago

              Not in a completely analog system, because there’s no conversion between the analog and digital domains. Sure, a big advantage of digital is that it’s much, much less sensitive to signal degradation.

              What you’re referring to as “analog audio hardware” seems to be digital audio hardware, which will always have analog components because that’s what sound is. But again, amplifiers, microphones, analog mixers, speakers, etc. have no bit depth or sampling rate. They have gains, resistances, SNRs, and power ratings that digital doesn’t have, which of course pose their own challenges.