• frezik@midwest.social · 1 year ago

      In a database course I took, the teacher told a story about a company that would take three days to insert a single order. Thing was, they were the sort of company that took in one or two orders every year. When it’s your whole revenue on the line, you want to make sure everything is correct. The relations in that database were checked to hell and back, and they didn’t care if it took a week.

      Though that would have been in the 90s, so it’d go a lot faster now.

        • frezik@midwest.social · 1 year ago

          No idea, but I imagine it was something big like that, yes. I think it was in northern Wisconsin, so laker ships are a good guess.

        • Treczoks@feddit.uk · 1 year ago

          We have a company like that here somewhere. When they have one job a year, they have to reduce hours, if they have two, they are doing OK, and if they have three, they have to work overtime like mad. Don’t ask me what they are selling, though. It is big, runs on tracks, and fixes roads.

    • DigitalPaperTrail@kbin.social · 1 year ago

      what I’m hearing is we have RAM-as-a-service to look forward to in the future, after internet speed and reliability get good enough

      • Cethin@lemmy.zip · 1 year ago

        It’ll never be fast enough. An SSD is orders of magnitude slower than RAM, which is orders of magnitude slower than cache. Internet speed is orders of magnitude slower than the slowest of hard drives, which is still way too slow to be used for anything that needs memory relatively soon.

        • barsoap@lemm.ee · edited · 1 year ago

          A SATA SSD has ballpark 500 MB/s; a 10G Ethernet link, 1250 MB/s. Which means it can indeed be faster to swap to the RAM of another box on the LAN than to your local SSD.

          A Crucial P5 has a bit over 3 GB/s, but then there’s 25G Ethernet. Let’s not speak of 400G direct attach.

          • DaPorkchop_@lemmy.ml · 1 year ago
            • modern NVMe SSDs have much more bandwidth than that, on the order of > 3GiB/s.
            • even an antique SATA SSD from 2009 will probably have much lower access latency than sending commands to a remote device over an ethernet link and waiting for a response
            • barsoap@lemm.ee · 1 year ago

              Show me an SSD with 50 GB/s; it’d need a PCIe 6.0 x8 or PCIe 5.0 x16 connection. By the time you RAID your swap you should really be eyeing that SFP+ port. Or muse about PCIe cards with RAM on them.

              Speaking of: You can swap to VRAM.

              • DaPorkchop_@lemmy.ml · 1 year ago

                My point was more that the SSD will likely have lower latency than an Ethernet link in any case, as you’ve got the extra delay of data having to traverse both the local and remote network stack, as well as any switches that may be in the way. Additionally, in order to deal with that bandwidth you’ll need to kit out not only the local machine, but also the remote one with expensive 400GbE hardware+transceivers, plus switches, and in order to actually store something the remote machine will also have to have either a ludicrous amount of RAM (resulting in a setup which is vastly more complex and expensive than the original RAIDed SSDs while offering presumably similar performance) or RAIDed SSD storage (which would put us right back at square one, but with extra latency). Maybe there’s something I’m missing here, but I fail to see how this could possibly be set up in a way which outperforms locally attached swap space.

                • barsoap@lemm.ee · 1 year ago

                  Maybe there’s something I’m missing here

                  SFP direct attach, you don’t need a switch or transceivers, only two QSFP-DD ports and a cable. Also, this is a thought exercise, not a budget meeting. Start out with “We have this dual socket EPYC system here with full 12TB memory and need to double that”. You have *rolls dice* 104 free PCIe5 lanes, go.
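
                  For the thought exercise, LAN swap is actually doable today with a plain network block device. A rough sketch, with hostnames, ports, and sizes made up:

```shell
# Remote box: export a RAM-backed file as a block device
# remote$ truncate -s 32G /dev/shm/swapimg
# remote$ nbd-server 10809 /dev/shm/swapimg

# Local box: attach it and swap to it
# local$  nbd-client remote-host 10809 /dev/nbd0
# local$  mkswap /dev/nbd0
# local$  swapon -p 10 /dev/nbd0    # priority value is arbitrary here
```

                  Fair warning: swapping over the network is historically prone to deadlock under memory pressure, since writing pages out itself needs memory.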

          • Cethin@lemmy.zip · 1 year ago

            Bandwidth isn’t really most of the issue. It’s latency: the amount of time from the CPU requesting a segment of memory to receiving it, which bandwidth doesn’t affect.

            • barsoap@lemm.ee · edited · 1 year ago

              Depends on your workload and access pattern.

              …I’m saying can be faster. Not is faster.

              • Cethin@lemmy.zip · 1 year ago

                Yeah, but the point of RAM is fast random (the R in RAM) access times. There are ways to make slower memory work better for this by predicting what will be needed (grab a chunk of memory, because accesses will probably have closer locality than pure random), but it can’t be fixed entirely. Cloud memory is fine for non-random storage, or storage that isn’t time-critical.

    • slacktoid@lemmy.ml · 1 year ago

      It will crash as soon as it needs to touch the swap due to the relatively insane latency difference.

    • Bloody Harry@feddit.de · 1 year ago

      wait, didn’t some tech youtubers like LTT try using cloud storage as swap/RAM? afaik they failed because of latency

      • kevincox@lemmy.ml · 1 year ago

        Obviously you should set up device mapper to encrypt the gdrive device then put the swap on the encrypted mapper device.

        • CanadaPlus@lemmy.sdf.org · edited · 1 year ago

          If your kernel isn’t using 90% of your CPU resources, are you really even using it to its full potential? /s

  • Valen@lemmy.world · 1 year ago

    You really need to index your tables. This has all the hallmarks of a Cartesian cross product.
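
    A toy illustration of the difference (assumes the sqlite3 CLI; table and index names are made up): without an index the planner falls back to a full scan, with one it can seek.

```shell
# EXPLAIN QUERY PLAN before and after adding an index
rm -f /tmp/demo.db
sqlite3 /tmp/demo.db <<'SQL'
CREATE TABLE orders(id INTEGER PRIMARY KEY, customer_id INTEGER);
EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42;
CREATE INDEX idx_orders_customer ON orders(customer_id);
EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42;
SQL
```

    The first plan reports a SCAN of the whole table; the second, a SEARCH using the index. The same idea is what turns an accidental Cartesian product back into a sane join.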

  • UFO@programming.dev · 1 year ago

    I dunno why I didn’t realize you can add more swap to a system while running. Nice trick for a dire emergency.
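
    For anyone who wants to try it, a minimal sketch (path and size are illustrative; activation needs root, so it’s left commented out):

```shell
# Create and format an extra swap file on a live system
SWAPFILE=/tmp/extra-swap         # illustrative path; use persistent storage in practice
fallocate -l 64M "$SWAPFILE"     # reserve space (use dd if the filesystem lacks fallocate)
chmod 600 "$SWAPFILE"            # swap must not be world-readable
mkswap "$SWAPFILE"               # write the swap signature
# swapon "$SWAPFILE"             # activate (root only); verify with: swapon --show
# swapoff "$SWAPFILE"            # detach again once the emergency is over
```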

  • dan@upvote.au · edited · 1 year ago

    Hopefully that swap is on an SSD, otherwise that query may not ever finish lol
    Once you’re deep into swap, things can get so slow that there’s no recovering from it.

  • Faresh@lemmy.ml · 1 year ago

    Does the OOM killer actually work for anyone? On every Linux system I’ve used, if I run out of memory, the system simply freezes.

    • computergeek125@lemmy.world · 1 year ago

      Absolutely can and will take action. Doesn’t always kill the right process (sometimes it kills big database engines for the crime of existing), but usually gives me enough headroom to SSH back in and fix it myself.
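
      If the wrong victims keep getting killed, the priority is tunable per process via `/proc/<pid>/oom_score_adj` (the mysqld lookup here is illustrative; needs root):

```shell
# Range is -1000 (never OOM-kill) to 1000 (preferred target); -900 strongly protects it
echo -900 | sudo tee "/proc/$(pidof mysqld)/oom_score_adj"
```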

      • JokeDeity@lemm.ee · 1 year ago

        I have limited experience with Linux, but why is it that when my system locks up, SSH still tends to work and let me fix things remotely? Like, if the system isn’t locked up, let me fix it right here and now and give me back control, if it is locked up, how is SSH working to help me?

        • computergeek125@lemmy.world · 1 year ago

          So that’s the nifty thing about Unix: stuff like this works. When you say “locked up”, I’m assuming you mean logging in to a graphical environment, like Gnome, KDE, XFCE, etc. To an extent, this can even apply to some heavy server processes: just replace most of the references to “graphical” with “application access”.

          Even lightweight graphical environments can take a decent amount of muscle to run, or else they lag. Plus even at a low level, they have to constantly redraw the cursor as you move it around the screen.

          SSH and plain terminals (Ctrl-Alt-F#; which number is which varies by distro) take almost no resources to run: SSH/getty (which are already running), a quick process call to the password system, then a shell like bash or zsh. A singular GUI application may take more standing RAM at idle than this entire stack. Also, if you’re out of disk space, the graphical stack may not even be able to stay alive.

          So when you’re limited on resources, be it either by low spec system or a resource exhaustion issue, it takes almost no overhead to have an extra shell running. So it can squeeze into a tiny corner of what’s leftover on your resource-starved computer.

          Additionally, from a user experience perspective, if you press a key and it takes a beat to show up, it doesn’t feel as bad as if it had taken the same beat for your cursor redraw to occur (which also burns extra CPU cycles you may not be able to spare)

    • Fedora@lemmy.haigner.me · 1 year ago

      Yes, it takes surprisingly long for the OOM killer to take action, but the system unfreezes. Just wait a few minutes and see whether that does the trick.

    • Turun@feddit.de · 1 year ago

      Yes. If you have swap, the system will crawl to a halt before the process is killed, though; SSDs are like a thousand times slower than RAM. Run swapoff and allocate a ton of memory to see it in action.

      • sheogorath@lemmy.world · 1 year ago

        NVMe PCIe 4 SSDs are quite fast now tho; you can get between DDR1 and DDR2 speeds from a modern SSD. This is why Apple uses its SSDs as swap quite aggressively. I’m using a MacBook Pro with 16 GB of RAM and my swap usage regularly goes past 20 GB, and I haven’t experienced any slowdown during work.

        • Turun@feddit.de · 1 year ago

          Depends if the allocated memory is actively used or not. Some apps do not require a large amount of random access memory, and are totally fine with a small part of random access memory and a large part of not so random access and not so often used memory.

          Alternatively I can imagine that MacOS simply has a damn good algorithm to determine what can be moved to swap and what cannot be moved to swap. They may also be using the SSD in SLC mode so that could contribute to the speedup as well.

    • TauZero@mander.xyz · 1 year ago

      It never kicks in for me when it should, but I figured out I can force trigger it manually with the magic SysRq key (Alt+SysRq+F, needs to be enabled first), which instantly recovers my system when it starts freezing from memory pressure.
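
      Enabling it is one write to procfs (needs root; the bitmask 64 enables just the process-signalling functions, including manual OOM kill, while 1 enables everything):

```shell
# Allow Alt+SysRq+F at runtime
echo 64 | sudo tee /proc/sys/kernel/sysrq
# Persist across reboots:
echo 'kernel.sysrq = 64' | sudo tee /etc/sysctl.d/90-sysrq.conf
```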

      • drathvedro@lemm.ee · 1 year ago

        Alt+SysRq+F, needs to be enabled first

        Do note that this opens up a security hole. Since this can kill any app at random and isn’t interceptable, if you leave your PC in a public place, someone could come up and press this combo a few times. Chances are, it’ll kill whatever lock-screen app you’re using.

    • Devion@feddit.nl · 1 year ago

      Yeah, the default Ubuntu LTS webserver killed mysqld on a stupid query (but it worked on dev - every developer, someday) not too long ago…

    • jabjoe@feddit.uk · 1 year ago

      Oh yes. I’ve had massive compiles (well, linking) which failed because of the OOM killer, and I did exactly the same: massive swap so it will just keep going. So what if it’s using disk as RAM and is unusable for a few hours in the middle of the night, at least it finishes!

    • AggressivelyPassive@feddit.de · 1 year ago

      Wrote my master’s thesis this way - didn’t have enough RAM or knowledge, but plenty of time on the lab machine, so I let it do its thing overnight.

      Sorry, lab machine SSD.

  • tiredofsametab@kbin.run · 2 months ago

    I don’t want to see the EXPLAIN for that query. This person really needs to learn more about sql, I’d wager.