A computer is built from millions of pieces of hardware, each of which is in one of two states. These states are represented by 0 and 1.
You don't have to know what the hardware is doing or even what the hardware technology is, or whether a 0 is represented by high or low voltage. You (or the computer) will just be told that the particular piece of hardware is in the 0 state or the 1 state.
Some hardware maintains its state without power, e.g. hard disks and flash memory. Most hardware loses its state when switched off, e.g. RAM.
How much memory is in a typical hard disk, flash disk, floppy disk or RAM? [1]
Since there are only two states (two = bi), the state is represented by a binary digit (called a bit). A bit then either is 0 or 1.
The number system we're used to is called decimal. The base of the decimal system is 10. Number systems used in computing include binary (base 2) and hexadecimal (base 16), both covered below.
bit, wikipedia (http://en.wikipedia.org/wiki/Bit).
Computers crunch numbers. We (people) use the decimal number system. Computers only have bits and use these bits to make binary numbers. There are no decimal numbers inside a computer. Binary numbers are transformed by software to a decimal representation on the screen for us.
Before launching into binary numbers, let's refresh our memories on the positional nature of the representation of decimal numbers.
102 = 1*100 + 0*10 + 2*1
    = 1*10^2 + 0*10^1 + 2*10^0
The value represented by each digit in the number 102 depends on its position. The "1" represents "100". If the "1" was in the rightmost position it would represent "1". Each time a digit moves one place to the left, the value it represents increases by a factor of the base of the number system, here 10. The leftmost digits represent the biggest part of the number.
binary numeral system, wikipedia (http://en.wikipedia.org/wiki/Binary_numeral_system).
In a binary number the base is two, so the value represented by each position increases by a factor of 2 as you move left in the number. In binary, only two digits are needed to represent all the available values. Here are the two digits and their decimal equivalents.
binary   decimal
0      = 0
1      = 1
Here's a binary number. What does it represent?
1101 = 1*2^3 + 1*2^2 + 0*2^1 + 1*2^0
     = 1*8 + 1*4 + 0*2 + 1*1
     = 13 decimal
As with decimal, the leftmost digit carries the most value (in binary 1101, the leftmost digit represents decimal 8).
Leading zeroes behave the same as for decimal numbers:

00 = 0
01 = 1
Here are some more binary numbers:

binary       decimal
10         = 2
11         = 3   (2+1)
100        = 4
101        = 5   (4+1)
0101       = 5
00000101   = 5
1001       = 9   (8+1)
1011       = 11  (9+2, 8+3)
11011      = 27  (16+11, 24+3)
11111      = 31
What is binary 1011 in decimal? [2]
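If you want to check conversions like these by machine, here's a minimal Python sketch of the positional method above (the function name binary_to_decimal is mine, just for illustration):

def binary_to_decimal(bits):
    value = 0
    for digit in bits:                  # work left to right
        value = value * 2 + int(digit)  # each move left multiplies by the base, 2
    return value

print(binary_to_decimal("1101"))        # 13, same as the worked example above
print(int("1101", 2))                   # 13, Python's built-in conversion agrees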
Let's go the other way, decimal to binary
What is decimal 7 in binary? Do it by stepwise division.
Start with the power of 2 just below your number. Take a guess: 2^3=8.
This is bigger than 7. Try the next one down: 2^2=4. This is less than 7. Start there.
7/4=1, remainder=3
Do it again; the power of 2 just below 3 is 2.
3/2=1, remainder=1
With the remainder being 0 or 1, we're finished.
7 = 1*2^2 + 1*2^1 + 1*2^0
7 decimal = 111 binary
In the above exercise, would anything have gone wrong if you'd started dividing by 8 rather than dividing by 4? No, you would have got the answer 7 decimal = 0111 binary. The leading zero has no effect on the number, so you would still have got the right answer; you just would have done an extra step.
What is decimal 15 in binary? [3]
Here's python code to convert decimal to binary (http://www.daniweb.com/code/snippet285.html).
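In the same spirit, here's a minimal Python sketch of the stepwise division method used above (the function name decimal_to_binary is mine):

def decimal_to_binary(n):
    if n == 0:
        return "0"
    power = 1
    while power * 2 <= n:            # find the largest power of 2 not above n
        power *= 2
    bits = ""
    while power >= 1:
        bits += str(n // power)      # this digit is 0 or 1
        n = n % power                # keep working on the remainder
        power //= 2
    return bits

print(decimal_to_binary(7))          # 111, as in the worked example
print(decimal_to_binary(9))          # 1001
print(bin(9))                        # 0b1001, Python's built-in agrees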
Here's bash code to convert binary to decimal. If you aren't already in a bash shell, start one up in your terminal with the command /bin/bash. On Windows, click on the Cygwin icon to get a bash prompt. (Comments in bash start with #.) The code here is from the Bash scripting Tutorial (http://www.linuxconfig.org/Bash_scripting_Tutorial), in the section on arithmetic operations. (Knowing that bash can convert binary to decimal, I found this code with Google using the search terms "convert binary decimal bash".)
declare -i result    #declare result to be an integer
result=2#1111        #give result the decimal value of 1111 in base 2
echo $result         #output the decimal value to the screen
15

# same code in one line
declare -i result; result=2#1111; echo $result
15
In the previous section, I showed bash code I found on the internet. When writing a new program, you'll often need some functionality that's already been coded up (after 50 years of computing, there's not much that hasn't already been coded up) and that's available in books or on the internet. Books on any computer language will have lots of worked examples. It's often faster to find debugged and working code than it is to write it yourself from scratch. You're expected to do this and everyone does it. For a class exercise, you may have to nut it out yourself, but when it comes to writing working code, you borrow when you can. When you use someone else's code, you should document where you got it.
byte, wikipedia (http://en.wikipedia.org/wiki/Byte).
It turns out to be convenient (hardware wise) to manipulate bits 8 at a time. 8 bits = 1 byte (B). Most PCs (in 2008) are 32-bit machines, meaning that the CPU manipulates 4 bytes (32 bits) at a time. Since these machines run at somewhere between 100 MHz and 2 GHz, they are doing between 400 and 8000 million byte operations/sec.
Some 1-byte numbers expressed in decimal:

byte         decimal
00001000 =   8
00001111 =   15
00010000 =   16
00100000 =   32
01000000 =   64
10000001 =   129
11111111 =   255
One byte can represent integers from 0 to 255.
Bytes are always 8 bits. However, data is shifted around according to the bus width of a computer. A 32-bit computer has 32-bit registers and 32 lines for addressing and fetching data; it can transfer data and instructions 4 bytes at a time. In the HPC (high performance computing, or supercomputer) world, where 64-bit computing has been the standard for 30 years, the term word describes the width/size of a piece of data or an instruction. In the HPC world there are words of all sorts of lengths, including 128 bits. A 32-bit computer has a 32-bit (4 byte) word size.
Let's do some binary addition. The rules are similar to decimal. Here are the addition rules:

0+0 = 0
1+0 = 1
1+1 = 0 with carry, i.e. 1+1 = 10

Here are the addition rules in table form:

  addition          carry
+ | 0 1           + | 0 1
  -------           -------
0 | 0 1           0 | 0 0
1 | 1 0           1 | 0 1
Note: 1+1=0 with a carry; 1+1 != 10 (!= means not equal). The extra leftmost digit, the "1" (as in "1+1=10"), becomes the carry digit. It's handled separately by the computer; if you want it, you have to go find it. The equivalent table in decimal for a one-digit computer would show that 8+7=5 (and not 15).
Worked example, working from right to left, one bit at a time:

 111     (what decimal numbers are being added here?)
 010 +
 ----
  01     1 carry
 001     1 carry
1001
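Here's a minimal Python sketch of this right-to-left addition with a carry bit (the function name add_binary is mine; inputs are binary strings):

def add_binary(a, b):
    width = max(len(a), len(b))
    a = a.zfill(width)                      # pad with leading zeroes
    b = b.zfill(width)
    carry = 0
    result = ""
    for i in range(width - 1, -1, -1):      # rightmost bit first
        total = int(a[i]) + int(b[i]) + carry
        result = str(total % 2) + result    # the sum bit for this column
        carry = total // 2                  # 1 if this column overflowed
    if carry:                               # the extra leftmost bit
        result = "1" + result
    return result

print(add_binary("111", "010"))             # 1001, as in the worked example
print(add_binary("101", "11"))              # 1000  (5 + 3 = 8)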
What is 1001+0011 in binary? [4] What decimal numbers are represented?
Note: End Lesson 1.
Algorithm, wikipedia (http://en.wikipedia.org/wiki/Algorithm).
Algorithm: A predetermined sequence of instructions (events) that will lead to a result.
e.g. the algorithm of right-to-left addition will lead to the sum of two numbers.
e.g. putting your books into your school bag in the morning will lead to them all being at school with you the next morning.
What country does the word "algorithm" come from? What does "al" mean? What other words start with "al" used in the same way? [5]
Some algorithms are better (faster, more efficient, scale better) than others. While computers are fast enough that even a bad algorithm can process a small number of calculations, it takes a good algorithm to process large numbers of calculations. Computers are expensive and it's only worth using a computer if you're doing large numbers of calculations. We're always interested in how an algorithm scales with problem size. So we're always interested in the goodness of the algorithm being used.
The measure of goodness of an algorithm is the rate at which the time for its execution increases as the problem size increases (i.e. how the algorithm scales). If you double the size of the problem, does the execution time not change (independent of problem size), double (linear in problem size) or quadruple (scales as the square of the problem size)?
This change in execution time with problem size is called the Order of the algorithm.
Speed (Order) of the addition operation:
Addition one bit at a time takes 8 steps for the addition of two 1-byte numbers. The Order of addition, done one bit at a time, is O(n), i.e. the time taken is proportional to n, the number of bits.
There's a faster way: add the bits in parallel and handle the carries in a 2nd step.

 111
 010
 ----
 101     add
 10      carry
 ----
1001
Addition done in parallel takes 2 steps, no matter how long the numbers being added are (you need to increase the amount of hardware, but you only have to pay for the hardware once and the extra cost can be amortised over many addition operations).
The Order of parallel addition is O(1), i.e. it is proportional to 1, i.e. addition takes the same time independent of the number of bits being added. The parallel addition algorithm is much faster than the stepwise mechanism.
Note: amortise: if one process is faster/better than another, but the faster process is more expensive and only has to be paid for once, and you can use the process as many times as you want, then you are said to be amortising the extra cost over the life of the machine. E.g. cost of stepwise adder = $10, cost of parallel adder = $20. If you're going to be doing 10^15 additions before you retire the machine, the amortised extra cost of the parallel adder is 10^-14 $/addition. Most people will accept this extra cost, because the increased speed will save them money for each run of the program. The most common place that amortisation is used in computing is in the cost of writing a program. Writing a program is expensive; you have to pay the programmer's salary, benefits, heating/AC and electricity, buy computers and lease premises. A program may cost $1k-$100M to write. However, if the program is run millions of times, the cost/run for the end user may be insignificant compared to the cost of paying their staff to run the program. In this case the cost of the programmer's time is said to be amortised over the number of times the program will be run. Because writing programs is so expensive, you only write programs that are going to be run many times.
Here's the time (in steps) for the two types of addition for different problem sizes:

bits   Stepwise   Parallel
1      2          2
2      4          2
3      6          2
4      8          2
.
.
16     32         2
We don't really care what the constant of proportionality is, i.e. we don't care if each step takes 1 usec, 1 msec or 1 sec, only how the time to completion scales with problem size. We know that if we scale the problem by some large number (e.g. 10^6), the constant of proportionality will be swamped by the problem size. Let's say that parallel addition took 8 steps instead of 2. Here are the new times for addition.
bits   Stepwise   Parallel
1      2          8
2      4          8
3      6          8
4      8          8
.
.
16     32         8
We only need to get to numbers of length 8 bits to be ahead with parallel addition.
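If you want to play with these numbers, here's a throwaway Python sketch that just reprints the two tables above, assuming (as the tables do) that stepwise addition costs 2 steps per bit and parallel addition costs a fixed number of steps:

def print_step_table(parallel_steps):
    print("bits  stepwise  parallel")
    for bits in (1, 2, 3, 4, 16):
        print(f"{bits:4}  {2 * bits:8}  {parallel_steps:8}")

print_step_table(2)   # the first table: parallel is ahead almost immediately
print_step_table(8)   # the second table: parallel pulls ahead past 4 bits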
Say we have a 4-bit computer. What is

1010
0110 +
----
0000    with 1 carry
? When you do this, a digit rolls off the left hand end.
This is called overflow. Since you only have a 4-bit computer, you'll get an erroneous answer. In some circumstances you'll get an error message on overflow and in other situations you won't. Since overflow is part of the design of a computer, it is expected and is not necessarily regarded as an error. (E.g. a drinking glass will only hold so much fluid. If you put in more fluid, it will overflow. People accept overflow as part of the design of a drinking glass.)
With addition, you have to anticipate getting a number that is 1 bit longer than the two numbers being added, i.e. you'll get a 1 bit overflow.
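Here's a Python sketch of the 4-bit overflow example above; the helper to_4_bits is mine, and the & mask just throws away everything past the rightmost 4 bits, the way the hypothetical 4-bit machine would:

def to_4_bits(n):
    return n & 0b1111                        # keep only the rightmost 4 bits

a = 0b1010                                   # decimal 10
b = 0b0110                                   # decimal 6
print(format(a + b, "b"))                    # 10000 - the true 5-bit sum
print(format(to_4_bits(a + b), "04b"))       # 0000  - what the 4-bit machine keeps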
The rules are similar to decimal. Here's the one times table (there's no carry in binary multiplication):

* | 0 1
  -------
0 | 0 0
1 | 0 1
Multiplication by 1, 10, 100, ... left shifts the digits by 0, 1, 2, ... places (i.e. adds a '0' on the right-hand end). This is the same whether you're working in decimal or binary. Left shifting is a fast operation in a computer. When the computer detects that you're multiplying by a power of 2, it left shifts the number rather than using the multiply hardware.
Note: what numbers are represented in each case?
Left shifting produces overflow. Assume a 4-bit computer:

1100*10  = 1000   (not 11000)
1100*100 = 0000   (not 110000)
If a 0 overflows, you get the correct result.
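Here's the same left-shift behaviour sketched in Python; the 4-bit mask is my stand-in for the 4-bit machine's limited width:

n = 0b1100                              # decimal 12
print(format(n << 1, "b"))              # 11000:  shifting left once multiplies by 2
print(format(n << 2, "b"))              # 110000: shifting left twice multiplies by 4

mask = 0b1111                           # a 4-bit machine only keeps 4 bits
print(format((n << 1) & mask, "04b"))   # 1000 - the leading 1 overflowed
print(format((n << 2) & mask, "04b"))   # 0000 - everything overflowed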
Computers do multiplication the same way we do, one digit at a time, but add the results in parallel.
What is

 1010
   11 x
 ----
 1010
1010
-----
11110    addition
 0000    carry
-----
11110
Again:

   1010
   1101 x
   ------
   1010
  0000
 1010
1010
--------
1110010     addition (done in parallel)
  1         carry
--------
10000010    the carries take a couple of rounds (decimal 10 x 13 = 130)

The carries are done in parallel; multiplication is fast and is O(1).
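Here's a Python sketch of this shift-and-add method (the function name multiply_binary is mine; inputs are binary strings): for each 1 bit in the multiplier, add a copy of the multiplicand shifted left by that bit's position.

def multiply_binary(a, b):
    a_val = int(a, 2)                            # the multiplicand as an integer
    result = 0
    for shift, bit in enumerate(reversed(b)):    # rightmost bit of b is shift 0
        if bit == "1":
            result += a_val << shift             # add the shifted copy
    return format(result, "b")

print(multiply_binary("1010", "11"))             # 11110    (10 x 3 = 30)
print(multiply_binary("1010", "1101"))           # 10000010 (10 x 13 = 130)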
What is 1010 x 0110 in binary? [6]
What is 1100 x 1001 in binary? [7]
It is not necessary to have a separate set of hardware for subtraction. You could in principle add the negative of the number, but we don't have negative numbers (the hardware only knows about 0-255 and has no concept of positive or negative numbers). (Software can interpret certain values of a byte as being negative, but we haven't got that far yet.) However, we don't need negative numbers for subtraction. Instead we add the complement (for binary, it's the twos complement; the twos complement instruction is fast - one clock cycle).
Let's look at subtraction by adding the complement using a decimal example.
Let's say we want to do

9 - 3 = 6
and we have a 1 decimal digit (dit) computer. Here's the addition table for a 1 dit computer.
+ | 0 1 2 3 4 5 6 7 8 9
  -----------------------
0 | 0 1 2 3 4 5 6 7 8 9
1 | 1 2 3 4 5 6 7 8 9 0
2 | 2 3 4 5 6 7 8 9 0 1
3 | 3 4 5 6 7 8 9 0 1 2
4 | 4 5 6 7 8 9 0 1 2 3
5 | 5 6 7 8 9 0 1 2 3 4
6 | 6 7 8 9 0 1 2 3 4 5
7 | 7 8 9 0 1 2 3 4 5 6
8 | 8 9 0 1 2 3 4 5 6 7
9 | 9 0 1 2 3 4 5 6 7 8
Note: remember, because we have a one digit machine, 9+9=8 (9+9 != 18; the 1 overflows).
If we're going to do the subtraction 9-3=6 by adding some other number, what number do we add to 9 to get 6? Looking at the addition table, we find we have to add 7.

9-3=6    what we want
9+?=6    what we're looking for
9+7=6    the answer from looking up the addition table above
The ten's complement of 3 then is 7.
What if we want to subtract 3 from any other number, say 8? If we want to do 8-3=5, by adding a number to 8, on looking at the addition table, we have to add 7. So whenever we want to subtract 3, instead we add 7.
8-3=5    what we want
8+?=5    what we're looking for
8+7=5    the answer from looking up the addition table above
We find the ten's complement of 3 is 7 no matter what number we subtract 3 from. The complement of a number depends only on the number itself, not on what we subtract it from.
What is the tens complement of 8?
9-8=1
9+?=1
9+2=1

The complement of 8 is 2.
What's the complement of 9?
9-9=0
9+?=0
9+1=0

The complement of 9 is 1.
What are the complements of 0 and 4? [8]
Overflow isn't an advantage or a disadvantage; it's just part of the design of a computer. Since we have overflow, we can use it to do subtraction by addition of the complement, rather than having to build a subtractor into the hardware.
Here's the decimal (10s or tens) complement table:

number   complement
0        0
1        9
2        8
3        7
4        6
5        5
6        4
7        3
8        2
9        1
The complement is the number you have to add to get a sum of 0. Note that the sum is not really 0; it's 10 but the left digit is lost through overflow. The complement of a number then is
complement = (base of the system, here 10) - (the number)
But in a 1 dit computer you don't have two digits to make a 10. Instead, if you want the ten's complement of 3, you ask the computer to count in 3 places from the biggest digit (here 9), i.e. 8, 7, 6, giving 6, and then you add 1, giving 7.
Note: subtracting from 10 or marching in from 9 are the same to you, but if you're wiring up a computer, you can't subtract from 10, but you can count in from 9.
Summary: if we're going to do 9-3, we add 10 to the -3, giving +7. The answer we'll get by doing 9+7 will be 10 more than what we want. However the 10 will be lost through overflow (subtracting 10 from the result), giving us the correct answer.
what we wanted                                    9 - 3 = 6
what the computer did (added 10 to both sides)    9 + (-3+10) = 10 + 6
the answer the computer gave us                   6   (the 10 was lost through overflow)
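Here's a Python sketch of this 1 dit machine (the function names are mine): subtraction is done by adding the ten's complement, with the % 10 playing the role of overflow.

def tens_complement(digit):
    return (10 - digit) % 10                   # complement = base - number

def subtract_1_digit(a, b):
    return (a + tens_complement(b)) % 10       # the % 10 is the overflow

print(tens_complement(3))                      # 7
print(subtract_1_digit(9, 3))                  # 6, i.e. 9 - 3
print(subtract_1_digit(8, 3))                  # 5, i.e. 8 - 3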
What is the (twos) complement of the 2-bit number 01?

Find some number that when added to 01 gives 00 (just start poking in numbers till you get the required answer).

01    # decimal 1
11    # decimal 3
--
00    Note: overflow is needed to get 00
See any connection between 1 and its complement 3 in a 2-bit system? [9]
A 4-bit number system has base 16. See any connection between the values of the number-complement pairs and a 4-bit number system in the following examples? Using brute force, what's the twos complement of the 4-bit numbers
Let's try an example in a regular 8-bit byte. Using brute force, what's the twos complement of 01000101? (For labelling, let's use the words minuend, subtrahend and difference.)

01000101    subtrahend
10111011    complement
--------
00000000
How do you make the complement in binary?
Following the decimal examples (above), to get the complement you first count each digit in from the biggest digit (here 1), which just flips the bits (1 becomes 0, 0 becomes 1), as shown here:
01000101    original subtrahend
10111010    bit flipped subtrahend
10111011    known twos complement
Note: by looking at the bit flipped number and the twos complement, you can see that you have to add 1 (as is done for the decimal example).
10111011    bit flipped subtrahend + 1

binary complement = (bit flipped subtrahend) + 1
We've found the complement (the negative of the number plus the base of the system).

01000101    subtrahend (decimal 69)
10111011    complement (decimal 187)

What's the sum of the 8-bit subtrahend and its complement? [14]
With the complement, we can do the subtraction.

10000000     minuend (decimal 128)
01000101 -   subtrahend (decimal 69)
--------
10111010     bit flipped subtrahend
10111011     bit flipped subtrahend + 1 = complement of 69
Do the subtraction by adding the two's complement
10000000    minuend (decimal 128)
10111011    complement of decimal 69
--------
00111011    difference (the leftmost bit rolls off/overflows on an 8-bit computer)

result:
  binary       decimal
10000000       128
01000101 -      69 -
--------       ---
00111011        59
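Here's a Python sketch of 8-bit subtraction by adding the twos complement (the function names are mine; the & 0xFF mask keeps 8 bits, which is the overflow):

def twos_complement_8bit(n):
    return (~n + 1) & 0xFF                                   # flip the bits, add 1, keep 8 bits

def subtract_8bit(minuend, subtrahend):
    return (minuend + twos_complement_8bit(subtrahend)) & 0xFF   # the 9th bit overflows

print(format(twos_complement_8bit(0b01000101), "08b"))       # 10111011, complement of 69
print(format(subtract_8bit(0b10000000, 0b01000101), "08b"))  # 00111011
print(128 - 69)                                              # 59, the decimal check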
using the twos complement to do subtraction, what is
Note: End Lesson 2. Some kids didn't get the material on the complement and didn't complete the exercises. I added more exercises and started Lesson 3 at the beginning of binary subtraction.
Most people can only do 4 bits of binary in their head. You either go to hexadecimal (below) or use a binary calculator. Fire up a terminal and try a few examples (you can recall the last line with the up-arrow key and edit it without having to type in the whole line again).
bc (basic calculator) is a general purpose calculator. Using a terminal, try some standard arithmetic, e.g. + - * /.

echo "3*4" | bc
12
bc does all input and output in decimal, until you tell it otherwise.
Here are a few binary examples.

#input will be in binary; output is decimal since you haven't changed the output base
echo "ibase=2; 1100-101" | bc
7
#with obase explicitly set (not needed if obase is 10)
echo "obase=10;ibase=2; 1100-101" | bc
7
#same problem, output in binary
echo "obase=2;ibase=2; 1100-101" | bc
111
#convert decimal to binary
echo "obase=2; 17" | bc
10001
#other examples:
echo "obase=10;ibase=2; 1100+101" | bc
17
echo "obase=2;ibase=2; 1100+101" | bc
10001
Exercises: Hint - the number(s) you're processing are in the last instruction on the line. Before you run the instruction, figure out the base for the input and for the output and then decide whether you need to set obase and/or ibase.
The normal order is obase/ibase. What happens if you reverse the order of obase and ibase without changing their values?
echo "obase=10;ibase=2;01110001" |bc 113 echo "ibase=2;obase=10;01110001" |bc 1110001 |
bc defaults to decimal input. In the first command, obase is read as decimal 10 and ibase as decimal 2 (i.e. binary), so the input is interpreted as binary and the output is in decimal. In the second command, ibase is set first, so all further input (including the value given to obase) is interpreted in base 2 (binary). The obase value "10" is therefore binary 10, i.e. decimal 2, so the answer comes out in binary.
To minimise surprises, set obase first, while input is still being read as decimal, and then set ibase (whose value is also still read as decimal).
For the length of numbers used in a computer, binary is cumbersome. Unless you really want to know the state of a particular bit, you use hexadecimal (a number system with base 16), which uses 1 symbol for 4 bits, and runs from 0..f (or 0..F)
binary    hex       decimal
0000      0         0
0001      1         1
0010      2         2
.
.
1000      8         8
1001      9         9
1010      a or A    10
1011      b or B    11
1100      c or C    12
1101      d or D    13
1110      e or E    14
1111      f or F    15
When input to a computer would otherwise be ambiguous, hexadecimal is written as "0xF" or "Fh" (preceded by "0x" - that's a zero, not the letter O - or followed by "h").
Here's conversion of hexadecimal to decimal using bash:

declare -i result    #declare result to be an integer
result=16#ffff       #give result the value in decimal of hex ffff
echo $result         #echo $result to the screen
65535

#or all in one line
declare -i result; result=16#ffff; echo $result
65535
Using bc:

echo "obase=16;ibase=16; F+F" | bc
1E
Note: End Lesson 3. Spent some time in the first half of the class going through the twos complement exercises which I added after Lesson 2. I asked the kids to try the following exercises for homework. They didn't do them, so I started with the decimal/binary/hex table above and then worked them through the exercises below at the start of the class.
Using any method
Note: Base 256 logically belongs here, but since you don't need it to start programming, and the introductory part of this course is long enough, I'll do it some time later. The material is at Base 256.
Note: Integer division logically belongs here, but since you don't need it to start programming, and the introductory part of this course is long enough, I'll do it some time later. The material is at Integer Division.
[1]