• Jesus_666@lemmy.world
    3 hours ago

    That does make sense when you need absolute precision, like in abstract math. Otherwise you can just use whichever unit and number of significant digits you need and be precise to that amount. That’s what you do with imperial/US customary units as well; a 5/32″ screw isn’t going to be manufactured to the precision of a Planck length. Manufacturers specify their sizes to three significant digits of an inch (5/32″ ≈ 0.156″).

    Let’s say you have a machining project and your tools are precise to 0.1 mm. So you plan things out at a precision of 0.1 mm. It doesn’t matter that a distance is 17/38 cm exactly. It doesn’t matter that it’s 4.473684210526315789… mm. You can’t set the tool to anything better than 4.5 mm anyway.
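    The rounding above can be checked with exact fractions; a minimal sketch (the 17/38 cm figure and 0.1 mm tool precision are just the numbers from the example):

    ```python
    from fractions import Fraction

    # The exact distance from the example, converted from cm to mm.
    exact_mm = Fraction(17, 38) * 10          # 85/19 mm ≈ 4.4737 mm

    # The tool can only be set in 0.1 mm steps, so snap to the nearest step.
    step = Fraction(1, 10)
    rounded_mm = round(exact_mm / step) * step

    print(float(exact_mm))    # ~4.4737, the "exact" value is unusable anyway
    print(float(rounded_mm))  # 4.5
    ```

    All the trailing digits of 4.473684210526315789… vanish the moment you snap to the tool’s resolution.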

    Also note that the metric system doesn’t prevent you from using fractions. You’re perfectly free to work with fractions where useful. That’s just not how people talk about lengths because those fractions have no meaning outside your specific use case.

    • chiliedogg@lemmy.world
      32 minutes ago

      But that 5/32 screw has its precision built into the measurement. Sig figs and error ranges aren’t required for fractional measurements, because both are built into the denominator.

      If your 5/32 measurement is super precise you can record it as 160/1024, because the denominator has “±1/2048” built into the measurement.
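      The two claims above are easy to verify with exact arithmetic: 160/1024 is numerically the same value as 5/32, and the half-step of a denominator-1024 scale is 1/2048. A quick sketch (note that Python’s `Fraction` normalizes, so the written denominator itself has to be tracked separately if you want it to carry the precision):

      ```python
      from fractions import Fraction

      coarse = Fraction(5, 32)
      fine = Fraction(160, 1024)     # normalizes back to 5/32

      # Same value; only the written denominator differed.
      print(coarse == fine)          # True

      # Implied half-step error of a reading on a 1/1024 scale.
      half_step = Fraction(1, 2) * Fraction(1, 1024)
      print(half_step)               # 1/2048
      ```

      So the convention only works if you keep the unreduced denominator in the written record, since the math itself throws it away.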