Does anyone know if there's any structure or reason behind binary code? It all seems pretty random. Is it in any way readable, or is it just the closest possible translation of the hardware?
http://www.youtuberepeat.com/watch?v=Ia9N_wZaoa4
Hilarious. Hope the battery part isn't inappropriate, though.

It follows the same structure as base 10 (that is, decimal, the system almost everyone counts in). In base ten, you have the "ones place", the "tens place", the "hundreds place", etc. Notice anything about them? They are all a power of 10.
10^0=1
10^1=10
10^2=100
10^3=1000
Binary code works the same way but in powers of two.
2^0=1
2^1=2
2^2=4
2^3=8
2^4=16
So: 1011 is equal to
(1*(2^3) )+(0*(2^2) )+(1*(2^1) )+(1*(2^0) ) = 8+2+1 = 11
2^3 2^2 2^1 2^0
1 0 1 1
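The place-value arithmetic above can be sketched in Python (the function name is my own invention):

```python
# Convert a binary string to decimal by summing place values,
# exactly as in the worked example above: 1011 -> 8 + 0 + 2 + 1.
def binary_to_decimal(bits: str) -> int:
    total = 0
    for place, digit in enumerate(reversed(bits)):
        total += int(digit) * (2 ** place)
    return total

print(binary_to_decimal("1011"))  # -> 11, matching the example
print(int("1011", 2))             # Python's built-in base-2 parser agrees
```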
As to the "reason" for it, it is simply far easier and more efficient for computers to run in base two. Computers are built on the concept of true and false, on and off, 1 and 0. A "bit" is the smallest unit of memory: one switch that is either on or off (1 or 0). A byte is 8 bits, and can store any number from 00000000 (zero) to 11111111 (255). The use of true/false bits also makes addition, subtraction, multiplication, and division a lot easier for a computer. Computers don't do arithmetic the way humans do: they have to express everything as a combination of logic operators such as AND, OR, NOT, and XOR, each of which can only return two values, true and false.
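To make the byte range and the bit-level operators concrete, here is a small Python sketch (Python's `&`, `|`, `^`, and `~` apply AND, OR, XOR, and NOT bit-by-bit):

```python
# A byte holds 8 bits: values 0 (00000000) through 255 (11111111).
print(int("11111111", 2))  # -> 255, the largest value one byte can store

# The logic operations described above, applied bit-by-bit:
a, b = 0b1100, 0b1010
print(format(a & b, "04b"))        # AND -> 1000
print(format(a | b, "04b"))        # OR  -> 1110
print(format(a ^ b, "04b"))        # XOR -> 0110
print(format(~a & 0b1111, "04b"))  # NOT (masked to 4 bits) -> 0011
```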
Now you probably know more than you wanted to.
Last edited by MoreGamesNow (2011-10-21 20:45:02)
MoreGamesNow wrote:
Binary code works the same way but in powers of two.
2^0=1
2^1=2
2^2=4
2^3=8
2^4=16
Fixed.
ssss wrote:
Fixed.
Oops. That's what I get for being lazy and copying and pasting.
Anyway, thanks for the correction.
Okay. Thanks. So it's like an array from right-to-left, 2^number of place in array. But aren't programs at the most basic level written in binary code, too (I'm not planning on learning it as a new programming language, just curious)? So… how does that work? And there are also characters. There's binary code for letters (I wonder if they encoded secret messages in binary code in the early days of computers).
maxskywalker wrote:
Okay. Thanks. So it's like an array from right-to-left, 2^number of place in array. But aren't programs at the most basic level written in binary code, too (I'm not planning on learning it as a new programming language, just curious)? So… how does that work? And there are also characters. There's binary code for letters (I wonder if they encoded secret messages in binary code in the early days of computers).
Some programs are written directly in binary code to make programming easier for everyone else, and then other apps are built using those binary-coded tools. (My guess)

Uh-oh, you're out of my realm of knowledge. I risk misinforming you, but:
Programming in its most basic form consists of calling pieces of memory from one part of the computer, putting them in another, and running bits through a complex series of gates (NOT, OR, etc., as mentioned above). However, programmers don't usually deal with these gates directly. Rather, a "chip" is a complex set of gates that performs a specific function. For instance, a "single-bit adder chip" adds two bits together. One bit can only store a value of zero or one. The possible results are below.
0+0=0
0+1=1
1+0=1
1+1=10
However, a 1-bit adder can only return one bit of data (two bits go in, one bit comes out), so 1 + 1 in a 1-bit adder is zero:
1 + 1 = 0
This gate is easy: just use the XOR gate. XOR returns a 1 if either of the bits is 1, but not both of them ( (OR) AND (NOT AND) ).
However, if you make it a 2-bit adder, it gets more difficult (you have to carry over a 1 to the "2s place" if the "1s place" digits are both one). For example:
01+01=10 (1+1=2)
11+01=00 (3+1=4) (4 = 100 in binary code)
11+11=10 (3+3=6) (6 = 110 in binary code)
I once built (on paper, not really "built") logic gates for 4-bit addition, and it doesn't get any easier as you go up.
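The gate-level adder described above can be simulated in Python (the function names are my own; real hardware wires the same gates together):

```python
# A 1-bit half adder built only from the gates described above:
# XOR gives the sum bit, AND gives the carry bit.
def half_adder(a: int, b: int):
    s = (a | b) & ~(a & b) & 1  # XOR written as (OR) AND (NOT AND)
    carry = a & b
    return s, carry

# A full adder accepts a carry-in; chaining them gives a
# multi-bit "ripple-carry" adder.
def full_adder(a: int, b: int, carry_in: int):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

def add_bits(x: list, y: list):
    """Add two equal-length bit lists (most significant bit first),
    discarding the final carry, like the fixed-width examples above."""
    result, carry = [], 0
    for a, b in zip(reversed(x), reversed(y)):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return list(reversed(result))

print(add_bits([0, 1], [0, 1]))  # 01 + 01 -> [1, 0]  (1 + 1 = 2)
print(add_bits([1, 1], [0, 1]))  # 11 + 01 -> [0, 0]  (3 + 1 = 4, truncated)
print(add_bits([1, 1], [1, 1]))  # 11 + 11 -> [1, 0]  (3 + 3 = 6, truncated)
```

The overflow in the last two calls is exactly the "dropped carry" behavior of a fixed-width adder.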
Your computer has either a 32-bit or a 64-bit processor. This means it has a 32-bit (or 64-bit) adder, subtractor, multiplier, divider, etc. In effect, adding two 32-bit variables takes no more effort for the computer than adding two bytes (8 bits each).
Why did I tell you that? Well... I felt a need to digress and over-inform you.
--------------------------
Basic programming (not the most basic) tells the computer where to move numbers and what to do with them. This is the level of machine code and assembly language; assembly is a small human-readable step above raw binary machine code. It doesn't really answer your question, but since you seemed interested in very, very basic programming, there is a small bit of it. The only part I (think) I understand.
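The "move numbers around and operate on them" style can be illustrated with a toy interpreter in Python. The instruction names (LOAD, ADD, STORE) and the register/memory layout here are entirely made up for illustration, not any real instruction set:

```python
# A toy machine: a few memory cells, two registers, and a tiny
# made-up instruction set, to show the flavor of machine-level code.
memory = {0: 5, 1: 7, 2: 0}
registers = {"A": 0, "B": 0}

program = [
    ("LOAD", "A", 0),    # copy memory cell 0 into register A
    ("LOAD", "B", 1),    # copy memory cell 1 into register B
    ("ADD", "A", "B"),   # A = A + B
    ("STORE", "A", 2),   # write register A back to memory cell 2
]

for op, x, y in program:
    if op == "LOAD":
        registers[x] = memory[y]
    elif op == "ADD":
        registers[x] = registers[x] + registers[y]
    elif op == "STORE":
        memory[y] = registers[x]

print(memory[2])  # -> 12
```

A real CPU does the same kind of thing, except each instruction is encoded as a fixed pattern of bits rather than a Python tuple.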
Last edited by MoreGamesNow (2011-10-22 20:28:15)
01011001011011110111010100100000011001000110010101100011011011110110010001100
10101100100001000000110001001101001011011100110000101110010011110010010000001
11010001100101011110000111010000101110001000000101001101101111001000000111100
10110111101110101001000000110011101100101011101000010000001100011011000010110
10110110010100101100001000000110001001110101011101000010000001111001011011110
11101010010000001100001011011000111001101101111001000000110010001101111011011
10001001110111010000100000011000100110010101100011011000010111010101110011011
00101001000000110100101110100011100110010000001100001001000000110110001101001
0110010100101110
Last edited by ProgrammingPro01 (2011-10-22 19:14:03)


Death_Wish wrote:
maxskywalker wrote:
Okay. Thanks. So it's like an array from right-to-left, 2^number of place in array. But aren't programs at the most basic level written in binary code, too (I'm not planning on learning it as a new programming language, just curious)? So… how does that work? And there are also characters. There's binary code for letters (I wonder if they encoded secret messages in binary code in the early days of computers).
Some programs are written directly in binary code to make programming easier for everyone else, and then other apps are built using those binary-coded tools. (My guess)
Well yeah, I know. Most things trace back to C, and everything traces back to assembly language, which translates directly into binary code.
ProgrammingPro01 wrote:
01011001011011110111010100100000011001000110010101100011011011110110010001100
10101100100001000000110001001101001011011100110000101110010011110010010000001
11010001100101011110000111010000101110001000000101001101101111001000000111100
10110111101110101001000000110011101100101011101000010000001100011011000010110
10110110010100101100001000000110001001110101011101000010000001111001011011110
11101010010000001100001011011000111001101101111001000000110010001101111011011
10001001110111010000100000011000100110010101100011011000010111010101110011011
00101001000000110100101110100011100110010000001100001001000000110110001101001
0110010100101110
Cake
This reminds me of a quote I saw somewhere:
There are 10 types of people: those who understand binary, and those who don't.
xD
kimmy123 wrote:
cpumaster930 wrote:
This reminds me of a quote I saw somewhere:
There are 10 types of people: those who understand binary, and those who don't.
xD
What does that mean?
I presume you fall into the second category.
10 in binary = 2 in decimal.
All I know is that computers use long strings of binary, and that in computer language 1 pretty much means yes/on and 0 means the exact opposite.
cpumaster930 wrote:
kimmy123 wrote:
cpumaster930 wrote:
This reminds me of a quote I saw somewhere:
"There are 10 types of people: those who understand binary, and those who don't."
xD
What does that mean?
I presume you fall into the second category.
10 in binary = 2 in decimal.
lol, I remember that quote XD
cpumaster930 wrote:
This reminds me of a quote I saw somewhere:
There are 10 types of people: those who understand binary, and those who don't.
xD
xD
ASCII is the most commonly used encoding for text. Each character is one byte, which is 8 bits, giving 256 possible combinations (standard ASCII actually defines only the first 128). Each number (00000001, 00000010, 00000011, etc.) stands for a letter, a digit, or a symbol.
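The one-byte-per-character scheme can be demonstrated in Python (function names are my own; this is what the long binary posts above are doing):

```python
# Round-trip a short message through 8-bit ASCII:
# one byte (8 binary digits) per character.
def to_binary(text: str) -> str:
    return "".join(format(ord(ch), "08b") for ch in text)

def from_binary(bits: str) -> str:
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

encoded = to_binary("Hi")
print(encoded)               # -> 0100100001101001
print(from_binary(encoded))  # -> Hi
```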
It works like this:
There are two kinds of programming languages: low-level and high-level.
Low-level languages, like assembly, machine code, and sometimes C, give you greater control over processing and memory allocation, but have harder syntax and require more lines of code. They're best when you need speed, since you control memory allocation yourself (no garbage collector!).
High-level languages, like Java, C#, Python (yay!), Lisp, Squeak, Ruby, etc., have friendlier syntax and do much of the dirty work for you. They also don't require (as much) porting work.
All of those languages are either compiled or interpreted.
Compiled languages, like BASIC or C++, are translated ahead of time into machine code (binary code, same thing), which the CPU then runs directly.
Interpreted languages, like Python or Squeak, are (sometimes compiled to bytecode first and then) interpreted by a VM.
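You can actually peek at the bytecode step in CPython with the standard-library `dis` module. This is a sketch of the idea; the exact opcode names vary between Python versions:

```python
# Inspect the bytecode CPython compiles a function to before
# its virtual machine interprets it.
import dis

def add(a, b):
    return a + b

instructions = [ins.opname for ins in dis.Bytecode(add)]
print(instructions)  # opcode names differ by Python version
```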
Thanks. Still interesting that not one person has even mentioned really doing something with machine code.
maxskywalker wrote:
Thanks. Still interesting that not one person has even mentioned really doing something with machine code.
Probably because it is rarely done now. In the infancy of computer programming you had no choice, but now C (or something similar) gives you great control without forcing you to stoop to the binary thinking of a computer. I'd be surprised if anyone who has posted knows machine code. The thinking is: why take hours to program something simple when I could take a fraction of the time using a "higher" language?
Last edited by MoreGamesNow (2011-10-25 17:07:00)
MoreGamesNow wrote:
maxskywalker wrote:
Thanks. Still interesting that not one person has even mentioned really doing something with machine code.
Probably because it is rarely done now. In the infancy of computer programming you had no choice, but now C (or something similar) gives you great control without forcing you to stoop to the binary thinking of a computer. I'd be surprised if anyone who has posted knows machine code. The thinking is: why take hours to program something simple when I could take a fraction of the time using a "higher" language?
Right. Just wondering.