In this section we'll take a look at some of the basic binary algebra concepts needed to understand how digital equipment works. If these notions are new to you, we advise spending a little extra time on this section; those of you already familiar with the topic can go straight to the following one.
After this necessary premise, let's move on to the question at hand. In day-to-day life we are used to working with decimal numbers, and when working with computers we also use this notation. Internally, however, computers convert decimal numbers into a completely different notation, one suited to the functioning of the lowest layer of digital machines: the circuitry. This notation is called binary notation because it comprises just two possible symbols: 0 and 1. The reason this notation is used lies in the way microprocessors of all kinds work. They are integrated circuits housing millions of elements, each of which can take on one of two electric states and hold it until the next change occurs. So, by associating the symbolic value 0 with one electric state and the value 1 with the other, we can use these circuits to store information.
In the decimal system, each time the rightmost digit of a number reaches 9, increasing it further returns it to zero and increases the digit to its left by one. The same principle holds in binary notation, with the difference that a digit returns to zero, carrying one to the left, as soon as it is incremented from its "1" state. Binary digits are called bits. An example will best clarify the parallel between the two numeric notations:
Table 18.1. Comparison between binary and decimal notation
Decimal   Binary
   0       0000
   1       0001
   2       0010
   3       0011
   4       0100
   5       0101
   6       0110
   7       0111
   8       1000
   9       1001
  10       1010
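The correspondence shown in the table can be checked with a short Python sketch; the four-digit zero padding is just an illustrative choice matching the table's layout:

```python
# Print the decimal numbers 0-10 alongside their 4-bit binary
# representation, mirroring the comparison table above.
for n in range(11):
    print(f"{n:2d}  ->  {n:04b}")
```

Running it reproduces the table row by row, e.g. 10 comes out as 1010.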
If we look at the table we can see that all the numbers from 0 to 10 can be represented using 4 bits, so if we wished to create a piece of equipment capable of memorizing a number from 0 to 10 we would use 4 of the abovementioned circuits (each circuit is able to memorize one binary digit). Naturally, real-life circuits are far larger and more complex: as well as memorizing data, they also transfer it from one circuit to another so that it can be manipulated. A binary number consisting of n digits allows 2^{n} decimal numbers to be represented, that is, all the decimal numbers from 0 to 2^{n}-1. If we wish to represent decimal numbers above this value we must add an extra bit to our initial binary number. Let's look at the previous table and see how these facts translate into concrete examples. First of all, to represent the decimal numbers 0, 1, 2, 3 we only require 2 bits, and indeed the formula gives 2^{2}=4. Likewise, to represent the numbers 0, 1, 2, 3, 4, 5, 6, 7 we need 3 bits (2^{3}=8). The following table illustrates the number of bits necessary to represent each range of decimal numbers.
Table 18.2. Binary numbers and representing bit
Number of bits   Representable decimal numbers
      1                0 to 1
      2                0 to 3
      3                0 to 7
      4                0 to 15
      5                0 to 31
      6                0 to 63
      7                0 to 127
      8                0 to 255
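The rule behind the table, that n bits cover exactly the 2^{n} values from 0 to 2^{n}-1, can be sketched in a few lines of Python; the helper name bits_needed is our own choice for illustration:

```python
# Smallest number of bits needed to represent every integer from 0 to m:
# the smallest n with 2**n > m, which for m >= 1 is m.bit_length().
def bits_needed(m):
    return max(1, m.bit_length())

# n bits represent the 2**n values 0 .. 2**n - 1.
for n in (2, 3, 4):
    print(n, "bits ->", 2**n, "values: 0 ..", 2**n - 1)

print(bits_needed(7))   # 3 bits suffice for 0..7
```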
So, for example, how many bits do we need to represent the number 24? Looking at the table, we can see that 5 bits are needed. Since the number of bits in present-day devices is very high, larger units are used in order to avoid numbers with too many digits. Generally speaking, one does not speak of bits as such, but rather of bytes, where one byte equals 8 bits (another unit of measure, the nybble, equal to 4 bits, also exists, but it is less commonly used). In practice, quantities that are multiples of bytes are used.
Table 18.3. Binary quantities
Unit        Symbol   Value
byte        B        8 bits
kilobyte    KB       1024 bytes
megabyte    MB       1024 KB
gigabyte    GB       1024 MB
terabyte    TB       1024 GB
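As a quick check of the example above (the number 24 requiring 5 bits) and of the byte multiples, here is a small Python sketch; the constant names are our own, and binary prefixes based on 1024 are assumed:

```python
# 24 in binary is 11000, which indeed takes 5 bits.
binary_24 = format(24, "b")
print(binary_24, "->", len(binary_24), "bits")

# 1 byte = 8 bits; the larger units are successive powers of 1024.
BITS_PER_BYTE = 8
KB = 1024                  # bytes in a kilobyte
print(16 * KB * BITS_PER_BYTE, "bits in 16 KB")
```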