How exactly do we go from Binary/Hex to Assembly Instruction sets?

So I've been trying to learn some embedded/assembly programming for a bit here lately, as well as going as far as trying to learn the lowest level (gates and such). One thing puzzles me though: how do we "get" instruction sets? I understand somewhat how gates/TTL and such work, but I don't see how we get from that to mov, add, clr, etc. It's probably a stupid question, but I think back to the first microprocessors/controllers and wonder: how exactly did they design an instruction set and make it work?

edit: I guess for clarity, pretend I'm talking about the first microprocessor: how did they go from binary to actually creating an instruction set?

asked Sep 26, 2011 at 14:48 user6791

Every instruction has a value; the compiler converts your code into these instructions based on how your code is structured.

Commented Sep 26, 2011 at 14:52

I'm not sure if I understand your question perfectly, but I think you can find your answer here, here or here.

Commented Sep 26, 2011 at 15:00

"How did they go from binary to actually creating an instruction set?" Actually, "they" didn't -- it's the other way around, at least generally. The CPU designer(s) determine the operations that the CPU will perform, then they create the instruction set from that, and then they map the instructions (mnemonics) to opcodes (binary machine code). @Scott Whitlock provided a good answer below; I just wanted to address the last part of your question because your assumption, in my experience at least, is backward.

Commented Sep 26, 2011 at 20:04

This really nice book: www1.idc.ac.il/tecs is what explained it all to me; most of the chapters are available free online. You design your own chip (in a simple hardware description language) from NAND gates, then an assembler, then a compiler, then write an OS in the language you created! Amazing stuff, and as someone without a comp sci degree it was time well spent for me!

Commented Aug 7, 2012 at 13:01

7 Answers

The heart of a CPU is the ALU. It's responsible for taking an instruction (like MOV), which is just some pre-defined series of binary digits, along with 0, 1, or 2 operands, and performing the applicable operation on them. The simplest instruction could be a NOP (no operation), which essentially does nothing. Another typical operation is ADD (adds two values).

The ALU reads and writes data from and to "registers". These are small memory locations internal to the CPU. Part of the instruction (2 to 3 bits for each input depending on how many registers you have) indicates which register to read from. There are units in the CPU external to the ALU that handle loading required data (and the instruction) from memory into registers, and writing the result from registers back into memory. The location to write the result will also be encoded into another 2 or 3 bits.

The choice of "op codes", which is the binary number that represents an operation, is not arbitrary. Well-chosen opcodes reduce the complexity of the ALU. Each bit or group of bits tends to enable and disable certain logic gate in the ALU. For instance, an ADD instruction would have to enable the output stage to choose the result of the internal adding logic. Likewise, a MUL instruction would choose the result of the multiply logic. Depending on the design, it's quite likely that both the adder and multiply circuits both actually perform their operation on the input operands, and it's just the output selection (what gets written to the output bits of the ALU) that changes.

answered Sep 26, 2011 at 15:27 Scott Whitlock

I guess what I'm asking is: how do they choose? And how do they actually go about assigning it?

Commented Sep 26, 2011 at 15:31

@Sauron: Read the 3rd paragraph again and try to understand binary arithmetic and digital circuits. I can represent an instruction set with 256 instructions by an 8-bit line in a digital circuit. That means I need 8 I/O ports on my hardware device to transfer every possible state (2 states per line ^ 8 lines = 256 possible states). The processing unit can then decide what to do with that digital input signal. A digital circuit means you have two hardware states: Hi and Lo, voltage or no voltage. That's where the binary comes from; binary representation is the one closest to the metal.

Commented Sep 26, 2011 at 15:47

@Sauron - look up Digital Multiplexers to see how a digital circuit can choose from one of several values. If you have an 8-bit bus, you just need 8 of these binary digital multiplexers in parallel.

Commented Sep 26, 2011 at 15:58

@Falcon Ok, I think that makes more sense. I keep forgetting that in the end it all comes down to binary signals, and even an instruction such as "mov" is still represented as binary.

Commented Sep 26, 2011 at 16:05

@Sauron - Do some research on the CPU; it will help you understand what op codes are and how they work. Why certain op codes were chosen is not important; even asking the "why" question doesn't make a great deal of sense. Understanding how they are chosen might help you understand more about the CPU and its structure.

Commented Sep 26, 2011 at 16:15

I will take your question literally and discuss mostly microprocessors, not computers in general.

All computers have some sort of machine code. An instruction consists of an opcode and one or more operands. For example, the ADD instruction for the Intel 4004 (the very first microprocessor) was encoded as 1000RRRR, where 1000 is the opcode for ADD and RRRR represents a register number 0-15 (0000-1111 in binary).

All other instructions that reference one of the 16 4-bit registers (such as INC, ISZ, LD, SUB, XCHG) also use the low 4 bits to encode the register number, and various encodings of the top 4 bits to specify the opcode. For example, ADD, SUB, LD and XCHG use opcodes 1000, 1001, 1010, and 1011 (all in binary), combined with the register field. So you can see how a pattern is used to simplify the logic.
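As a rough sketch in C of what the decode logic does with that encoding (the variable names are mine, not Intel's), decoding is just masking and shifting out the two fields:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t instruction = 0xB3;          /* 1011 0011 in binary              */

        uint8_t opcode = instruction >> 4;   /* top 4 bits: 1011, XCHG per above */
        uint8_t reg    = instruction & 0x0F; /* low 4 bits: register 3           */

        printf("opcode=%u register=%u\n", (unsigned)opcode, (unsigned)reg);
        return 0;
    }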

The very first computer programs were written by hand, hand-encoding the 1's and 0's to create a program in machine language. This was then programmed into a ROM (Read-Only Memory). Now programs are generally written into electrically-erasable Flash memory, in the case of microcontrollers, or run out of RAM, in the case of microprocessors. (The latter still need some sort of read-only memory for booting.)

Machine language gets tedious real fast, so assembler programs were developed that take mnemonic assembly language and translate it, usually one line of assembly code per instruction, into machine code. So instead of 10000001, one would write ADD R1.
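The core of such an assembler is little more than a lookup table from mnemonic to opcode bits. A toy sketch in C, reusing the nibble-style encoding from above (the table contents and function names are made up for illustration):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct mnemonic { const char *name; uint8_t opcode; };

    /* Opcode values follow the 4-bit pattern discussed above. */
    static const struct mnemonic table[] = {
        { "ADD", 0x8 }, { "SUB", 0x9 }, { "LD", 0xA }, { "XCHG", 0xB },
    };

    /* Assemble one "MNEMONIC Rn" line into a single byte of machine code. */
    static int assemble(const char *mnemonic, unsigned reg, uint8_t *out)
    {
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
            if (strcmp(table[i].name, mnemonic) == 0) {
                *out = (uint8_t)((table[i].opcode << 4) | (reg & 0x0F));
                return 0;
            }
        }
        return -1;  /* unknown mnemonic */
    }

    int main(void)
    {
        uint8_t byte;
        if (assemble("ADD", 1, &byte) == 0)
            printf("ADD R1 -> 0x%02X\n", (unsigned)byte);  /* prints 0x81, i.e. 10000001 */
        return 0;
    }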

But the very first assembler had to be written in machine code. Then it could be rewritten in its own assembler code, and the machine-language version used to assemble it the first time. After that, the program could assemble itself (this is called bootstrapping).

Since the first microprocessor was developed long after mainframes and minicomputers were around, and the 4004 wasn't really suited to running an assembler anyway, Intel probably wrote a cross-assembler that ran on one of its large computers and translated the assembly code for the 4004 into a binary image that could be programmed into the ROMs.

answered Sep 26, 2011 at 21:08

A computer at a very low level can be represented by a data path and control. Googling these together may give you a lot of reading as it is fundamental in digital architecture/design.

I'll do my best to summarize:

As mentioned here, at the heart we have an ALU. What needs to be known about the ALU (and other parts of the CPU) is that it is reconfigurable for different operations. More specifically, we have the ability to reconfigure the data path: where operands are fetched from, what operation is performed, and where the result is stored afterwards. Imagine being able to manipulate these three things - this is our control.

So how do we accomplish this? As mentioned already, we can take advantage of low-level digital logic and create multiplexers for different paths. Multiplexers are controlled using a set of input bits - where are these bits taken from? They are encoded in the instructions themselves. The takeaway: instructions like mov, add, etc. are just sets of bits that tell a CPU how to configure its datapath for a particular operation. What you're reading (mov, add) is the human-readable form (assembly language), and our program defines a sequence of datapath configurations and operations.
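For a concrete feel of how a few control bits steer a datapath, here is a 2-to-1 multiplexer written in C using only NOT, AND and OR (a deliberately simplified sketch; a real datapath is many such selectors in parallel, controlled by bits decoded from the instruction):

    #include <stdint.h>

    /* An 8-bit-wide 2-to-1 multiplexer: sel must be 0 or 1.  Replicating
       sel across all bit positions mirrors wiring the same select line
       to eight one-bit multiplexers side by side. */
    uint8_t mux8(uint8_t sel, uint8_t a, uint8_t b)
    {
        uint8_t mask = (uint8_t)(0 - sel);   /* 0x00 when sel = 0, 0xFF when sel = 1 */
        return (uint8_t)((~mask & a) | (mask & b));
    }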

I apologize if this is an oversimplification of more complex processes to those who are more knowledgeable in this area. FYI, the Electrical Engineering Stack Exchange would be a great place to ask this question, as it deals with very low-level logic.

answered Sep 26, 2011 at 20:18

If I understand your question, I don't understand how binary/hex or assembly are really related to it.

I assume the meat of your question is: how do we get from basic gates (AND, OR, NOT) to instructions like move, load, store, add, etc.?

I have my own little teaching instruction set I made from scratch that shows some of the details of, say, how an add and a subtract work from basic gates and things like that: http://github.com/dwelch67/lsasim.

Look for the Code book by Petzold (the full title is something like Code: The Hidden Language of Computer Hardware and Software). It may start way too elementary, but it walks you slowly from nothing to do with computers and electronics into binary, hex, and the basic gates, etc.

Are you asking how you would build one from scratch today, or how they did it back in the day? Today you would start with a definition of the instruction set: you just sit down and write it out. You think about the types of instructions you must have, the loads and stores and moves and ALU stuff, and then how many registers and how big the registers are (this in part affects the instruction size), and you think about the instruction size.

Let me stop and ask: how do you write a program? Starting with a blank screen in a text editor? You have some idea of the task you are trying to solve, the variables you might need, the functions, the programming language, etc. Each person is different, but to some extent you do a little of this (say, defining and writing functions), a little of that (making header files with reusable defines and enums and structs and stuff), and a little of the other thing (just code: fill the functions with variables and code). And you circulate around the different tasks; in the end you feel you have a balance of code size, speed, readability, quality, functionality, etc. It is no different with hardware design. Hardware designers use programming languages (VHDL, Verilog) as well and go through the same process; the programming is only slightly different from software. The primary difference is that different lines/modules of a hardware program execute simultaneously, whereas software executes linearly.

Just like coming up with a software program, you balance the desires: performance, size, features, etc. You might have some design rules forced on you, either ones you wanted or ones your boss made you follow, etc. As with software design, well after the initial design has gone into implementation you may find out that you have some big mistakes and have to go back to the initial design and change the instruction set, with big or small changes. You might get far enough to have a compiler and a simulated processor and find you really needed a couple of specific instructions that dramatically improve the quality of the compiled code, performance, etc.

So you have invented some instruction set from scratch. You have used some experience with hardware design to group similar instructions with similar bit patterns so they can be more easily decoded; this is not just for hardware-language programming, it saves on power and gates and all that good stuff. These days you would make some sort of a simulator; my understanding is ARM was a software simulator first and the hardware designs came later (I don't know if that is true, but I'm sticking with that story). This depends on the team: some teams may be just hardware folks and want to get straight into programming in the HDL; some, like me, may want to do a little of both. These days good hardware-language simulators are available, so you don't have to build any hardware: you compile, simulate, and do much of your debugging with a hardware simulation program/package. The software team can develop assemblers and compilers for the instruction set and, using simulated RAM and ROM, feed programs to the simulated processor and put it through its paces. We simulated a full Linux boot on a processor I worked on not long ago; it took many hours, but it worked (we found a cache bug in the logic that way).

So now for what I really think you were asking: how do you get from basic gates to a processor with an instruction set? Well, basic gates (AND, OR, NOT) are really analog; they don't have a concept of time. You can't instantly change the voltage on the inputs, and as you change that voltage the outputs start to change. Gates are made of transistors, and transistors are amplifiers: take the input, multiply it by some number, and allow that much current to flow on the other side. When we use them as logic gates we are actually over-saturating them; the input voltage is so high or so low that the transistor can only drive max voltage or no voltage (current). Basically the transistor is turned into a switch. Long story short, there is no concept of time. In order to have an instruction set we have to have instructions happen in order; we have to sequence through the program; we have to have a concept of "now we are on this instruction, and in the next time slot we will work on that instruction". Just like the game of turning an amplifier into a switch, you play similar games using basic logic gates with a clock. The clocks come from magic crystals in a can (not worth getting into that here) that produce voltages that turn on and off at some fixed rate. Use that voltage in logic equations and you can start to sequence things.

Very, very briefly, think of this truth table:

    a b   sum (lsbit)
    0 0    0
    0 1    1
    1 0    1
    1 1    0

    0 + 0 =  0
    0 + 1 =  1
    1 + 0 =  1
    1 + 1 = 10 (2 decimal)

Focusing on the lsbit only, the truth table above describes a one-bit adder. It also describes an XOR gate: the output is true if one input is true or the other is, but not both.

To get more than one bit you have to look at carry bits, carry in and carry out; you need a three-input adder: two operand bits and a carry in, with two outputs, the carry out and the result bit. Once you have that three-input, two-output adder, you can cascade it as wide as you want. But this is still analog; the gates change as soon as their inputs do.
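Here is that cascade sketched in C, with each full adder expressed only in terms of XOR, AND and OR (the 4-bit width and the names are my own choices for illustration):

    #include <stdint.h>
    #include <stdio.h>

    /* One-bit full adder: two operand bits plus carry in, producing
       a sum bit and a carry out -- only XOR, AND, OR. */
    static void full_adder(unsigned a, unsigned b, unsigned cin,
                           unsigned *sum, unsigned *cout)
    {
        *sum  = a ^ b ^ cin;
        *cout = (a & b) | (a & cin) | (b & cin);
    }

    /* Cascade (ripple) four one-bit adders to add two 4-bit values. */
    static unsigned add4(unsigned a, unsigned b)
    {
        unsigned result = 0, carry = 0;
        for (int i = 0; i < 4; i++) {
            unsigned sum;
            full_adder((a >> i) & 1, (b >> i) & 1, carry, &sum, &carry);
            result |= sum << i;
        }
        return result | (carry << 4);   /* keep the final carry as bit 4 */
    }

    int main(void)
    {
        printf("%u\n", add4(7, 9));     /* prints 16 */
        return 0;
    }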

How do you turn this analog adder into an ADD instruction? You have some logic that looks at the instruction, which is sitting as inputs to these gates you have arranged. Some of the bits in the instruction say this is an add instruction; some of the bits say that one operand is such-and-such register, maybe register 7; other bits may say the other operand is register 4. And depending on the architecture you might have another register defined in the instruction that says put the result in register 2. Now more logic sees this as: I need register 7's contents routed to one input of the ALU adder, register 4's contents routed to the other input, and the output of the adder routed to register 2. Since the clock is part of this logic, there is a period of time from the beginning of one clock period to the beginning of the next in which all the analog signals settle down and resolve the logic equation they are wired to do. It is not unlike when you flip a light switch and change the light's state from, say, off to on: it takes a period of time for that light to warm up and get into a steady state. Not much different here.

There is some logic that says things like: if this clock period is the execution phase of an ADD instruction, then in the next clock period I am going to save the result in the output register; at the same time I am going to fetch the next instruction, etc. For modern processors the ALU execution is often just one clock period, because the analog logic can resolve the logic equations that fast. On older processors you had to count to some number: leave the adder wired up, wait x clock cycles for the adder logic to resolve, then sample the result from the output, feed the ALU different inputs, wait x clock cycles for that to resolve, and repeat forever or until the power goes off.
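In a software model of this, one pass through a loop stands in for one clock period: fetch, let the "combinational" work resolve, then commit the result at the end of the period. A toy sketch in C, borrowing the nibble-style encoding from the earlier answer (the specific opcodes, the HLT instruction, and the accumulator are all invented for illustration):

    #include <stdint.h>
    #include <stdio.h>

    /* A toy accumulator machine: the high nibble is the opcode, the low
       nibble names a register, loosely echoing the 4004-style encoding
       discussed in another answer here. */
    enum { OP_ADD = 0x8, OP_LD = 0xA, OP_HLT = 0xF };

    int main(void)
    {
        uint8_t rom[]    = { 0xA1, 0x82, 0xF0 };  /* LD R1; ADD R2; HLT */
        uint8_t regs[16] = { 0 };
        uint8_t acc = 0, pc = 0;

        regs[1] = 5;
        regs[2] = 7;

        for (;;) {                               /* each pass = one clock period */
            uint8_t inst   = rom[pc++];          /* fetch                        */
            uint8_t opcode = inst >> 4;          /* decode                       */
            uint8_t reg    = inst & 0x0F;

            if (opcode == OP_HLT) break;

            uint8_t result = acc;                /* "combinational" work settles */
            if (opcode == OP_LD)  result = regs[reg];
            if (opcode == OP_ADD) result = acc + regs[reg];

            acc = result;                        /* commit at the "clock edge"   */
        }

        printf("acc = %u\n", (unsigned)acc);     /* prints 12 */
        return 0;
    }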

What do I mean by logic equations? Just that: if you want, you can think of it in terms of AND, OR, NOT gates. For each input bit to the ALU adder circuit there is an equation, likely a very long equation, which includes the clock, which includes the flip flops (individual/single memory bits) that contain the current instruction (plus the equations that feed each of those memory bits), and on and on. Take a single software function you have written, in the language you wrote it in; assume it is a function that takes inputs, performs a task, and then finishes and returns a result. Think of all the possible combinations of inputs and what the different execution paths through that function are. You may have written it in some high-level language, but you could probably rewrite it, even in that language, to be more and more primitive, using many nested if-then-else structures, and go lower down into assembly language perhaps. The equations I am talking about are not unlike that: the hardware programmer doesn't write these long-winded equations any more than you write long-winded if-then-else trees in assembly language when your programming language of choice saves you so much of that. Just like the compiler you use turns your small source code into long-winded assembly with lots of if-then-elses, there is a compiler that takes the hardware programming language code and turns it into logic equations. And just as one portable C program compiled for different instruction sets produces different assembly and machine code, the hardware programmer's code resolves to different lists of logic equations depending on where you plan to build your chip or, if you use a CPLD or FPGA, on the logic blocks available from that chip builder or programmable device.

Now going back to the first CPU (which today we would consider a microcontroller, but back then it was a CPU): you did all of the above on paper, and you didn't use hardware programming languages; you actually wrote out the logic equations. But you even more carefully chose the bit patterns for the instructions, to keep the number of gates and the wires connecting those gates as simple as practical. On paper, by hand, you had to both create the long list of wired-up logic gates and then draw the actual components on a blown-up version of the silicon mask. Even today, chips are made using what is, in layman's terms, similar to a photographic or silk-screen process, so you would take these blueprints, if you will, shrink them, and then apply them to the silicon layers.

Here again, back in the day everyone had a better handle on assembly, and not only might you end up doing some assembly programming, you might not have had a text editor or an assembler; you might have had to write your first programs by hand on paper, then, using a reference manual, convert them by hand to machine code, ones and zeros. On some of those computers you might have had to load the RAM by flipping switches: flip the address bits, flip the data bits until they match the numbers on your paper, flip the clock bit up then down, and you have loaded one memory location with one byte of one instruction of your program. Repeat a few hundred more times and hope you make no mistakes.

In the same way, the first, let's say, C compiler was probably written in some other language; it then became self-hosting by being rewritten in C and compiled by that first C compiler, then again by itself. Then C compilers were used to invent other programming languages, which in turn became self-hosting. We invented operating systems and text editors that built upon themselves, and it is all black magic left for a few folks behind the curtain in the corner to do.

Yes, this is very long-winded; it is a big question needing years' worth of study and experience to truly grasp. Look at my lsasim; I don't claim to be an expert in anything, but it is an instruction set, and there is both a simulator, written in C, that executes the instructions, and another implementation of the processor, written in a hardware programming language, that can be simulated using different tools. Plus a crude assembler and some other tools and such. Perhaps, by examining some of this, in particular the hardware programming language code, it might close the gap on what I assume you were asking. If I have not closed that gap, or have gone off on a long-winded tangent, please let me know and I will happily remove this answer (assuming I am able to; I don't hang out at Programmers exchange much).