In computer programming languages, consider Integers and Floats (floating point numbers, i.e., numbers with decimal points: float, double, long double, etc.).
OK, so let's suppose we define an Integer variable (in whatever language)...
int MyInt = 100;
And now, let's suppose that we want to divide it (using non-integer division!) by 3...
MyInt = MyInt / 3;
Well, now we have a bit of a problem!
You see, even though the answer should be 33.33333 (repeating) -- MyInt can only hold an Integer value!
It can only hold 33!
Some languages will permit this operation -- and the result of the operation will be 33 -- which is the wrong answer.
Some languages will prohibit this operation.
But the point is that true division -- as opposed to explicit integer division -- is an operation.
Some operations/operators -- make sense to perform on data that is of a specific type -- and some do not!
It's perfectly OK to perform true division on a floating point type (well, ignoring division by zero, which creates problems no matter what!) and put that value back into that floating point type -- but it doesn't make sense to perform true division on an integer and put that (now wrong!) value back into the integer!
At least not without an explicit typecast -- which tells the compiler "I am OK performing this non-standard operation on this type -- I am OK with the side-effects..."
So that's one example.
Another example is adding an integer value -- to a string.
Another example is concatenating, or performing some other string operation, on an integer...
The basic idea is this -- types prevent operations on data where it doesn't make sense to perform the operation on the given data type!
A type determines a subset -- of the set of all possible operations (which are basically functions!) -- the subset that "makes sense" to apply to values of that type!
So types are in fact subsets!
Subsets of possibilities, subsets of various amounts of bits and bytes, subsets of operations/functions which make sense to be permitted on those types!
Why are types -- used at all?
In Computer Programming or in Mathematics?
Surely types must have some purpose -- otherwise, WHY are they used at all?