No '3'. The difference between '300', '30', '3', '0.3', etc. is just the power of 10 attached to it.
I 'cut my teeth' on North Star BASIC's BCD floating-point representation of numbers ("1 + 2 = 3"), in which all numbers, even integers, were stored internally as 5-byte, 8-digit BCD floating point. It used to drive me nuts to use Lawrence Livermore Laboratories' BASIC, which used binary floating point. You'd get things like "1 + 2 = 2.99999998" (or similar, I'm working from a 40-year-old memory here).
LATER UPDATE: Sorry, I omitted the words 'floating point'. No wonder you said 'huh?'. It should have read 'Binary floating point '3' is inexact'.
You're remembering wrong, unless they did something crazily non-standard. Whole numbers are represented perfectly in binary floating point. At least until you run out of digits, at which point they start rounding to even numbers, then to multiples of 4, etc.
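To illustrate the claim above, here's a short sketch assuming IEEE 754 double precision (what Python's `float` uses): every integer up to 2^53 is exact; past that point the spacing between representable values becomes 2, so odd integers round to their even neighbors.

```python
# Whole numbers are exact in binary floating point up to 2**53.
assert float(2**53) == 2**53

# Beyond 2**53 the representable doubles are spaced 2 apart, so an odd
# integer like 2**53 + 1 rounds (here, down to 2**53 by round-to-even):
assert float(2**53 + 1) == 2**53

# Even integers in that range are still exact:
assert float(2**53 + 2) == 2**53 + 2
```

So "1 + 2" in binary floating point really is exactly 3; inexactness only appears once the integer no longer fits in the mantissa.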
There is one spot where binary floating point has trouble compared to decimal, and that's dividing by powers of 5 (or 10). If you divide by powers of 2 they both do well, and if you divide by any other number they both do badly. If you use whole numbers, they both do well.
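The three cases above can be checked directly in Python, using the standard-library `decimal` module as the decimal floating-point side of the comparison:

```python
from decimal import Decimal

# Dividing by a power of 10: inexact in binary, exact in decimal.
assert (0.1 + 0.2 == 0.3) is False                       # binary: off by ~5e-17
assert Decimal('0.1') + Decimal('0.2') == Decimal('0.3')  # decimal: exact

# Dividing by a power of 2: exact in both.
assert 0.5 + 0.25 == 0.75
assert Decimal('0.5') + Decimal('0.25') == Decimal('0.75')

# Dividing by any other number (e.g. 3): inexact in both.
assert (1/3) * 3 == 1                      # binary happens to round back here,
assert Decimal(1) / Decimal(3) * 3 != 1    # but decimal shows the truncation
```

(Note that binary sometimes gets lucky on the round trip, as `(1/3) * 3` does; the underlying quotient is still inexact in both systems.)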
Also, even if you do want decimal, you don't want BCD. You want an encoding that stores 3 digits per 10 bits.
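The space argument is simply that 3 decimal digits range over 0..999, and 999 < 1024 = 2^10, so they fit in a 10-bit "declet" where BCD would spend 12 bits. A naive sketch of the packing (real densely packed decimal, as standardized in IEEE 754-2008, uses a cleverer bit layout so individual digits can be extracted without a divide, but the capacity is the same):

```python
def pack_declets(digits: str) -> int:
    """Pack a decimal digit string into an int, 10 bits per 3 digits."""
    out = 0
    for i in range(0, len(digits), 3):
        group = int(digits[i:i + 3].ljust(3, '0'))  # pad a short final group
        out = (out << 10) | group                   # each group fits in 10 bits
    return out

def unpack_declets(packed: int, ngroups: int) -> str:
    """Inverse of pack_declets for a known number of 3-digit groups."""
    groups = []
    for _ in range(ngroups):
        groups.append(f"{packed & 0x3FF:03d}")      # low 10 bits -> 3 digits
        packed >>= 10
    return ''.join(reversed(groups))

assert pack_declets("999") < 2**10                  # 3 digits fit in 10 bits
assert unpack_declets(pack_declets("123456789"), 3) == "123456789"
```

That's 16.7% fewer bits than BCD's 4 bits per digit, which adds up quickly in a wide significand.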