What I’ll defend, however, is fractional measurements when precision matters.
With decimal measurements, precision can’t be nearly as granular. If your measurement is precise to 1/8 of a unit, how do you represent that in decimal? 0.625 implies your measurement is precise to the nearest thousandth, but rounding it to a whole unit (1) isn’t precise either. 5/8, however, tells you the measurement AND the precision.
With fractional measurements, you can specify precision by changing the denominator to any number, whereas decimal is essentially a fractional measurement with the denominator fixed at powers of 10. For instance, for a measurement of half a unit with a level of precision somewhere between 0.1 and 0.01, the fraction can be 6/12, 7/14, 8/16, 9/18, 10/20, 24/48, etc. Decimal can’t specify that precision without essentially writing a sentence.
What’s simpler to record? “24/48” or “0.5 ± 0.0208333…”
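To put that convention in code, here’s a quick Python sketch. The fractional_measurement helper is made up for illustration, and it assumes the convention (also raised later in the thread) that x/48 implies a tolerance of ±1/48:

```python
from fractions import Fraction

def fractional_measurement(text: str) -> tuple[Fraction, Fraction]:
    """Parse a measurement like '24/48' into (value, implied tolerance),
    using the thread's convention that the denominator sets the precision."""
    numerator, denominator = (int(part) for part in text.split('/'))
    # Fraction reduces 24/48 to 1/2, so the tolerance must come from the
    # written denominator, not the reduced one.
    return Fraction(numerator, denominator), Fraction(1, denominator)

value, tol = fractional_measurement('24/48')
print(value, '±', tol)                  # 1/2 ± 1/48
print(float(value), '±', float(tol))    # 0.5 ± 0.0208333... (approx.)
```

Note the subtlety the comment hints at: the reduced value 1/2 alone loses the precision information; the written denominator is what carries it.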
That does make sense when you need absolute precision, like when doing abstract math. Otherwise you can just use whichever unit and number of significant digits you need and be precise to that amount. That’s what you do with imperial/American customary units as well; a 5/32" screw isn’t going to be manufactured to the precision of a Planck length; manufacturers specify their sizes to three significant digits of an inch.
Let’s say you have a machining project and your tools are precise to 0.1 mm. So you plan things out at a precision of 0.1 mm. It doesn’t matter that a distance is 17/38 cm exactly. It doesn’t matter that it’s 4.473684210526315789… mm. You can’t set the tool to anything better than 4.5 mm anyway.
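For what it’s worth, that rounding step looks like this in plain Python (nothing assumed here beyond the numbers already in the example):

```python
from fractions import Fraction

TOOL_STEP_MM = Fraction(1, 10)       # tool is precise to 0.1 mm

exact_mm = Fraction(17, 38) * 10     # 17/38 cm in mm = 85/19 mm, exactly
settable = round(exact_mm / TOOL_STEP_MM) * TOOL_STEP_MM
print(float(exact_mm))               # 4.473684210526316
print(float(settable))               # 4.5 -- the best this tool can do
```

The exact value never survives contact with the tool; the step size is the real precision.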
Also note that the metric system doesn’t prevent you from using fractions. You’re perfectly free to work with fractions where useful. That’s just not how people talk about lengths because those fractions have no meaning outside your specific use case.
When precision matters, that precision is built into how the measurement is written. You would never put 0.5 ± 0.0208333…; you’d express it as 0.500 ± 0.021. The error value is just the standard deviation of the measurements, and it doesn’t make sense to quote it to more than 2 significant digits.
Another example would be measuring large distances using a ruler with centimeter precision. In that case, a measurement would be expressed as 250 ± 1 cm. Converting the measurement from cm to mm, it is 2500 ± 10 mm. This is much more cumbersome with inches or feet as changing units means updating the precision, possibly reducing it.
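A minimal Python sketch of both points, where round_sig is a hypothetical helper just for this example:

```python
from math import floor, log10

def round_sig(x: float, sig: int = 2) -> float:
    """Round x to `sig` significant digits (illustrative helper)."""
    return 0.0 if x == 0 else round(x, sig - 1 - floor(log10(abs(x))))

# The "0.500 ± 0.021" rule above: quote the error to 2 significant digits.
print(round_sig(1 / 48))                 # 0.021

# A metric unit change scales value and uncertainty by the same power of 10.
value_cm, err_cm = 250.0, 1.0            # measured as 250 ± 1 cm
value_mm, err_mm = value_cm * 10, err_cm * 10
print(f'{value_mm:g} ± {err_mm:g} mm')   # 2500 ± 10 mm
```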
If I want to build something and I want it to be 23/48" ± 1/24", how would I write that? Because the way I understand it, x/48" would imply a tolerance of ± 1/48".
If you are drawing maps, a precision of meters is enough. If you are building a house, centimeters. If you are making furniture, millimeters. If you are working with metal, micrometers (µm).
This hurts my brain. Why do we care about all the weird fractions? ±0.1 is just another way of saying 1/10. You can still do that if you want, without having to do fraction math in random denominators.
The fraction allows you to communicate length and tolerance in a single number. A decimal implies precision to the last digit, while a fraction can distinguish 2/16 (more granular) from 1/8 (less granular) even though they’re the same length. 1/8 of a cm is less precise than a mm, but if you write 1 1/8 cm as 1.125 cm, you are now implying sub-mm precision.
This matters because the precision needed in building generally doesn’t line up with 1/10 measurements. For example, if a brick wall had 1 cm height differences between bricks in a row, it would be extremely obvious and look terrible. A 1 mm height difference would be impossible to notice, but is also overkill. Ideal is about 5/8 cm, or 6.25 mm, of variation over 3 meters of wall. The fractional measure often ends up easier to work with in practice.
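To make the “implied precision” reading concrete, here’s a small Python sketch. It assumes the half-last-digit convention for decimals (one common choice, not the only one) and this thread’s ±1/denominator convention for fractions:

```python
from decimal import Decimal
from fractions import Fraction

def implied_tolerance_decimal(text: str) -> Decimal:
    # '1.125' has exponent -3, so by the common half-last-digit
    # convention it implies ± half of 0.001.
    return Decimal(1).scaleb(Decimal(text).as_tuple().exponent) / 2

def implied_tolerance_fraction(text: str) -> Fraction:
    # This thread's convention: x/8 implies ± 1/8.
    return Fraction(1, int(text.split('/')[1]))

print(implied_tolerance_decimal('1.125'))  # 0.0005 (of a cm: 5 µm!)
print(implied_tolerance_fraction('9/8'))   # 1/8 (of a cm: 1.25 mm)
```

Same length, wildly different implied precision, which is exactly the point being made above.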