CIS3355: Business Data Structures
How do we deal with issues of precision? We have already defined precision as the degree of accuracy to which we can represent a value. Obviously, we would like to represent all values as precisely as possible, but we know that we can't: as far as we know, the decimal expansion of pi goes on forever.

Let's review how we isolated the components:
The component in which we store the sequence of digits is called the mantissa (by definition, the decimal part of a logarithm: in the logarithm 2.95424, the mantissa is 0.95424). Because we have normalized the number (by putting the decimal point in front of the first significant digit), we are capturing the entire sequence of digits.

I don't see what this has to do with precision???

The more bits we allocate to the mantissa component of a real number, the more precisely we can represent the number. Consider √2 (the square root of two). Its mantissa begins .1414213562... and, as far as we know, the digits never end; the first 1,000 of them have been computed.
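To make the normalization step concrete, here is a minimal sketch in Python (the function name normalize is our own illustration, and it assumes a positive value): it slides the decimal point in front of the first significant digit and counts the shifts as the exponent, so the mantissa carries the digit sequence and the exponent carries the magnitude.

```python
def normalize(x: float):
    """Split a positive x into a mantissa in [0.1, 1.0) and an integer
    exponent such that x == mantissa * 10**exponent, i.e. the decimal
    point sits in front of the first significant digit."""
    exponent = 0
    while x >= 1.0:      # point is too far right: shift it left
        x /= 10.0
        exponent += 1
    while x < 0.1:       # point is too far left: shift it right
        x *= 10.0
        exponent -= 1
    return x, exponent

print(normalize(1.41421356))   # ~ (0.141421356, 1)
print(normalize(0.00295424))   # ~ (0.295424, -2)
```

(The results are only approximate because Python's floats are themselves binary floating-point values; the point of the sketch is the separation of the digit sequence from the magnitude.)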
If that is not precise enough for you, NASA can provide you with the first 5 million digits in the mantissa. How many bits will we need to represent a number like that?

Good question. Let's assume that we wanted to store (unsigned) integers: with n bits we can hold any value from 0 through 2^n − 1, and the sketch below tabulates how many complete decimal digits of precision each bit count buys us.
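Here is a short Python sketch (our own illustration, standing in for the original table) that reproduces the relationship: with n bits the largest unsigned integer is 2^n − 1, the number of complete decimal digits of precision is floor(n × log10(2)), and going the other way, d digits of precision need about 10 bits for every 3 digits.

```python
import math

# With n bits, the largest unsigned integer is 2**n - 1, and the number of
# *complete* decimal digits of precision is floor(n * log10(2)): every
# number with that many digits fits, but only some with one digit more do.
for n in (8, 10, 16, 20, 32):
    largest = 2**n - 1
    digits = math.floor(n * math.log10(2))
    print(f"{n:>2} bits -> largest value {largest:>13,} "
          f"-> {digits} full digits of precision")

# Going the other way: to guarantee d full decimal digits we need
# n = ceil(d / log10(2)) bits -- roughly 10 bits for every 3 digits.
for d in (3, 300, 1_000, 5_000_000):
    print(f"{d:>9,} digits of precision needs "
          f"{math.ceil(d / math.log10(2)):>10,} bits")
```

Running it shows, for example, that 10 bits reach 1,023 (3 full digits), 20 bits reach 1,048,575 (6 full digits), and 300 digits of precision require 997 bits.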
Basically, what we are saying is that if I wanted to store a number to 300 digits of precision, I would need roughly 1,000 bits (997, to be exact): about 10 bits for every 3 decimal digits.

Wait!!! You are saying that with 10 bits, you can represent integers as large as 1023 (I understand that). But then you say you only get a precision level of 3 digits. There are 4 digits in the value 1023!!

Quite true. However, I CAN NOT represent all 4-digit numbers. For instance, I can not represent the integer 1241 with only 10 bits. I CAN, however, represent all numbers to 3 digits of precision (which is why we state that with 10 bits, I have 3 digits of precision). It would be more accurate to state that given 10 bits, I can represent ALL three-digit numbers and SOME four-digit numbers.

OK. But how do I deal with issues of magnitude??

That will be our next tutorial.