Terms of Site Index

Find the page/content you are looking for with our index.

  • action
  • COM
  • number

    All software makes use of numbers. Everything is a number. The most basic number in a computer is a 0 or a 1. This is called a bit. Bits are represented with electricity. Although in most cases we see it as 0 = ground and 1 = voltage (i.e. 1 volt), the bit representation in software and in hardware may be interpreted either way (i.e. a 0 could mean that the voltage is 1V and not 0V).
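
    As a small illustration (a hypothetical C snippet, not code taken from any
    particular software), each bit of a byte can be read back by shifting and
    masking:

        /* Print the 8 bits of one byte, most significant bit first.
         * The value 0x2C is just an arbitrary example. */
        #include <stdio.h>

        int main(void)
        {
            unsigned char byte = 0x2C;

            for (int i = 7; i >= 0; --i)
                printf("%d", (byte >> i) & 1);   /* shift and mask to read one bit */
            printf("\n");                        /* prints 00101100 */

            return 0;
        }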

    Combining these zeroes and ones lets us handle much larger numbers. With 8 bits, you can have numbers from 0 to 255 (unsigned) or -128 to +127 (signed). Nowadays, computers can handle a much larger number of bits in one cycle. Most processors use 64 bits, but they can calculate numbers of 128, 256, and for some 1024 bits at once. Also, with parallelism, the size can be viewed as even larger (i.e. handling a 64 bit number in each of 1,536 threads, as on my old NVIDIA Quadro 600, is equivalent to one large number of 98,304 bits! That would be 2^98,304 possibilities, or about 2.8359e+29592 in decimal).
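
    The ranges above can be checked with a short C sketch using the standard
    <stdint.h>/<inttypes.h> constants rather than hard-coded values:

        /* Print the 8 bit and 64 bit integer ranges mentioned in the text. */
        #include <stdio.h>
        #include <stdint.h>
        #include <inttypes.h>

        int main(void)
        {
            printf("unsigned  8 bit: 0 to %d\n", UINT8_MAX);             /* 0 to 255 */
            printf("signed    8 bit: %d to %d\n", INT8_MIN, INT8_MAX);   /* -128 to +127 */
            printf("signed   64 bit: %" PRId64 " to %" PRId64 "\n",
                   INT64_MIN, INT64_MAX);
            return 0;
        }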

    Integers are easy to handle. When working on math problems, you generally view the set of available numbers as equivalent to N, although mathematicians know that computers can really only handle a limited set of numbers. For example, on a 64 bit computer, the usual range is -9223372036854775808 to 9223372036854775807. This is generally enough, although at times some equations have to be reworked to avoid really large or small intermediate numbers that work fine in math equations, but not so well on computers.
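
    As a hedged example of such a rework (the values below are arbitrary), the
    average of two large 64 bit integers can be computed as a + (b - a) / 2
    instead of (a + b) / 2, whose intermediate sum would overflow:

        #include <stdio.h>
        #include <stdint.h>
        #include <inttypes.h>

        int main(void)
        {
            int64_t a = 9223372036854775800;   /* close to the 64 bit maximum */
            int64_t b = 9223372036854775806;

            /* int64_t bad = (a + b) / 2;         a + b overflows */
            int64_t good = a + (b - a) / 2;    /* same result, no overflow (a <= b) */

            printf("average = %" PRId64 "\n", good);   /* 9223372036854775803 */
            return 0;
        }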

    Now, math also includes other sets of numbers such as D (decimal numbers), R (real numbers), and C (complex numbers). Computers do not offer any way to represent numbers in R or C, but they can offer D to some extent. These numbers are called floating point numbers because we do math using an exponent. The exponent makes the decimal point "float" to any location that the exponent allows. Using a 64 bit floating point, you can have positive and negative numbers with magnitudes varying between about 10^-308 and 10^+308. This includes a positive zero (+0) and a negative zero (-0), which is important in a few cases (although +0 = -0 is true, you can get the sign of a number and distinguish both zeroes). Note that at first integers were also going to have a positive and a negative zero, but it was instead decided to have one more negative number (remember, with 8 bits we have signed numbers from -128 to +127; this is because the positive numbers include a 0, which we don't have in the negative numbers).
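
    These floating point properties can be observed with a small C sketch using
    the standard <float.h> limits and the signbit() macro from <math.h>:

        #include <stdio.h>
        #include <float.h>
        #include <math.h>

        int main(void)
        {
            printf("smallest normal double: %e\n", DBL_MIN);   /* about 2.2e-308 */
            printf("largest double:         %e\n", DBL_MAX);   /* about 1.8e+308 */

            double pz = +0.0, nz = -0.0;
            printf("+0 == -0: %d\n", pz == nz);                /* 1: they compare equal */
            printf("signbit(+0) = %d, signbit(-0) = %d\n",
                   signbit(pz), signbit(nz));                  /* 0 and non-zero */
            return 0;
        }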

  • pat
  • strong