How a Computer Works

I have, over the past few months, been writing a series of articles explaining the fundamentals of computer architecture, such that one should, in theory, be able to both comprehend and construct a working computer from scratch, with the general goal of de-mystifying how these machines work. Additionally, I have been writing this in the form of just plain black and white text, to counter the "pivot to video" and create a lasting document that you can easily save a local copy of, or even print out for an actual physical copy you can read without the assistance of a computer at all. To further facilitate that, and to back up the contents, what follows is the entirety of that series, presented back to back, as one single document, black text on white. This is not at all intended to be read in one sitting, and for purposes of sharing and promoting this work, I would prefer you link people to the original tumblr posts at the cumbersome url of https://www.tumblr.com/secretgamergirl/735448675551215616/how-a-computer-works-part-1-components. But for archiving, offline reading, or searching the entire series at once, have this all-in-one version, last updated January 27th, 2024, containing parts 1-4.


Navigation: Part 1 (Components) - Part 2 (Logic and Memory) - Part 3 (Miniaturization and Standardization) - Part 4 (Binary Math) - Donate to keep this project (and its author) alive.


How a Computer Works - Part 1 (Components)

I am about to teach you, on a real fundamental, connecting-up-electronic-components level, how a computer actually works. Before I get into the meat of this though (you can just skip down below the fold if you don't care), here are the reasons I'm sitting down to do this in this format:

So all that said, have a standard reminder that I am completely reliant on Patreon donations to survive, keep updating this blog, and ideally start getting some PCBs and chips and a nice oscilloscope to get that mystery project off the ground.

Electricity probably doesn't work like how you were taught (and my explanation shouldn't be trusted too far either).

I remember, growing up, hearing all sorts of things about electricity having this sort of magical ability to always find the shortest possible path to where it needs to get, flowing like water, and a bunch of other things that are kind of useful for explaining how a Faraday cage or a lightning rod works, and not conflicting with how simple electronics will have a battery and then a single line of wire going through like a switch and a light bulb or whatever back to the other end of the battery.

If you had this idea drilled into your head hard enough, you might end up thinking that if we have a wire hooked to the negative end of a battery stretching off to the east, and another wire stretching off to the east from the positive end, and we bridge between the two in several places with an LED or something soldered to both ends, only the westernmost one is going to light up, because hey, the shortest path is the one that turns off as quickly as possible to connect to the other side, right? Well turns out no, all three are going to light up, because that "shortest path" thing is a total misunderstanding.

Here's how it actually works, roughly. If you took basic high school chemistry, you learned about how the periodic table is set up, right? A given atom, normally, has whatever number of protons in the core, and the same number of electrons whipping all over around it, being attracted to those protons but repelled by each other. There's particular counts of electrons which are super chill with that arrangement, so we put those elements in the same column as each other. Then as you count up from those, the elements in between either have some electrons that don't fit all tight packed in the tight orbit and just kinda hang out all wide and lonely and "want to" buddy up with another atom that has more room, up to the half full column that can kinda go either way, then as we approach the next happy number they "want to" have a little more company to get right to that cozy tight packed number. And when you have "extra" electrons and "missing" electrons, other atoms kinda cozy up and share so they hit those good noble gas counts.

I'm sure real experts want to scream at me for both that and this, but this is basically how electricity works. You have a big pile of something at the "positive" end that's "missing electrons" (for the above reason or maybe actually ionized so they really aren't there), and a "negative" end that's got spares. Then you make wires out of stuff from those middle of the road elements that have awkward electron counts and don't mind buddying up (and also high melting points and some other handy qualities) and you hook those in there. And the electron clouds on all the atoms in the wire get kinda pulled towards the positive side because there's more room over there, but if they full on leave their nucleus needs more electron pals, so yeah neighbors get pulled over, and the whole wire connected to the positive bit ends up with a positive charge to it, and the whole wire on the negative bit is negatively charged, and so yeah, anywhere you bridge the gap between the two, the electrons are pretty stoked about balancing out these two big awkward compromises and they'll start conga lining over to balance things out, and while they're at it they'll light up lights or shake speakers or spin motors or activate electromagnets or whatever other rad things you've worked out how to make happen with a live electric current.

Insulators, Resistors, Waves, and Capacitors

Oh and we typically surround these wires made of things that are super happy about sharing electrons around with materials that are very much "I'm good, thanks," but this isn't an all or nothing system and there's stuff you can connect between the positive and negative ends of things that still pass the current along, but only so much so fast. We use those to make resistors, and those are handy because sometimes you don't want to put all the juice you have through something because it would damage it, and having a resistor anywhere along a path you're putting current through puts a cap on that flow, and also sometimes you might want a wire connected to positive or negative with a really strong resistor so it'll have SOME sort of default charge, but if we get a free(r) flowing connection attached to that wire somewhere else that opens sometimes, screw that little trickle going one way, we're leaning everyone the other way for now.

The other thing with electricity is that the flow here isn't a basic yes/no thing. How enthusiastically those electrons are getting pulled depends on the difference in charge at the positive and negative ends. Also, if you're running super long wires, then even if they conduct real good, having all that space to spread along is going to kinda slow things to a trickle. AND the whole thing is kinda going to have some inherent bounciness to it, both because we're dealing with electrons whipping and spinning all over, and because the power coming out of the wall has this intentional wobbly nature (it's a property that's actually useful for a lot of things we do with electricity): there's this ridiculous spinny thing going on at the power plant that's constantly flip flopping which prong of the socket is positive and which is negative. Point is, we get these sine waves of strength by default, and they kinda flop over if we're going really far.

Of course there's also a lot of times when you really want to not have your current flow flickering on and off all the time, but hey, fortunately one of the first neat little electronic components we ever worked out are capacitors... and look, I'm going to be straight with you. I don't really get capacitors, but the basic idea is you've got two wires that go to big wide plates, and between those you have something that doesn't conduct the electricity normally, but they're so close the electromagnetic fields are like vibing, and then if you disconnect them from the flow they were almost conducting and/or they get charged to their limit, they just can't deal with being so charged up and they'll bridge their own gap and let it out. So basically you give them electricity to hold onto for a bit then pass along, and various sizes of them are super handy if you want to have a delay between throwing a switch and having things start doing their thing, or keeping stuff going after you break a connection, or you make a little branching path where one branch connects all regular and the other goes through a capacitor, and the electricity which is coming in in little pulses effectively comes out as a relatively steady stream because every time it'd cut out, the capacitor lets its charge go.

We don't just have switches, we have potentiometers.

OK, so... all of the above is just sort of about having a current and maybe worrying about how strong it is. It explains how you can just kinda have main power rails running all over, and hook stuff across them all willy-nilly rather than being forced to put everything in one big line, but still, all you can do with that is turn the whole thing on and off by breaking the circuit. Incidentally, switches, buttons, keys, and anything else you use to control the behavior of any electronic device really are just physically touching loose wires together or pulling them apart... well wait, no, not all of them, and this is a good bit to know.

None of this is actually pass/fail, really, there's wave amplitudes and how big a difference we have between them all. So when you have like, a volume knob, that's a potentiometer, which is a simple little thing where you've got your wire, it's going through a resistor, and then we have another wire we're scraping back and forth along the resistor, using a knob, usually, and the idea is the current only has to go through X percent of the resistor to get to the wire you're moving, which proportionately reduces the resistance. So you have like a 20 volt current, you've got a resistor that'll drop that down to 5 or so, but then you move this other wire down along and you've got this whole dynamic range and you can fine tune it to 15 or 10 or whatever coming down that wire. And what's nice about this, again, is that what's actually coming down the wire is this wobbly wave of current, it's not really just "on" or "off," and as you add resistance, the wobble stays the same, it's just the peaks and valleys get closer to being just flat. Which is great if you're making, say, a knob to control volume, or brightness, or anything you want variable intensity in really.
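If it helps to see that as numbers, here's a quick Python sketch of the knob idea (a toy model with made-up names and values, not anything standard): turning the knob scales the wave's amplitude without changing its shape.

import math

def knob_output(t, v_peak=20.0, knob=1.0):
    # knob position: 0.0 = full resistance (wave squashed flat),
    # 1.0 = wiper at the top, no resistance (full wave)
    return v_peak * knob * math.sin(t)

for position in (1.0, 0.5, 0.25):
    wave = [round(knob_output(t / 4, knob=position), 1) for t in range(8)]
    print(position, wave)

Same wobble every time, just smaller peaks and shallower valleys as you turn it down.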

Hey hey, it's a relay!

Again, a lot of the earliest stuff people did with electronics was really dependent on that analog wobbly waveform angle. Particularly for reproducing sound, and particularly the signals of a telegraph. Those had to travel down wires for absurd distances, and as previously stated, when you do that the signal is going to eventually decay to nothing. But then someone came up with this really basic idea where every so often along those super long wires, you set something up that takes the old signal and uses it to start a new one. They called them relays, because you know, it's like a relay race.

If you know how an electromagnet works (something about the field generated when you coil a bunch of copper wire around an iron core and run an electric current through it), a relay is super simple. You've got an electromagnet in the first circuit you're running, presumably right by where it's going to hit the big charged endpoint, and that magnetically pulls a tab of metal that's acting as a switch on a new circuit. As long as you've got enough juice left to activate the magnet, you slam that switch and voom you've got all the voltage you can generate on the new line.

Relays don't get used too much in other stuff, being unpopular at the time for not being all analog and wobbly (slamming that switch back and forth IS going to be a very binary on or off sorta thing), and they make this loud clacking noise that's actually just super cool to hear in devices that do use them (pinball machines are one of the main surviving use cases I believe) but could be annoying in some cases. What's also neat is that they're a logical AND gate. That is, if you have current flowing into the magnet, AND you have current flowing into the new wire up to the switch, you have it flowing out through the far side of the switch, but if either of those isn't true, nothing happens. Logic gates, to get ahead of myself a bit, are kinda the whole thing with computers, but we still need the rest of them. So for these purposes, relays are only neat if they're the most power and space efficient AND gate you have access to.

Oh and come to think of it, there's no reason we need to have that magnet closing the circuit when it's doing its thing. We could have it closed by default and yank it open by the magnet. Hey, now we're inverting whatever we're getting on the first wire! Neat!
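In code terms (a hedged little Python model of the behavior, not a wiring diagram), both flavors of relay boil down to one conditional each:

def relay_and(coil, line_in):
    # normally-open relay: current gets through only if the magnet is
    # powered AND there's current waiting at the switch
    return coil and line_in

def relay_not(coil, line_in=True):
    # normally-closed relay: the magnet yanks the switch open,
    # inverting whatever is on the coil
    return (not coil) and line_in

for a in (False, True):
    for b in (False, True):
        print(a, b, relay_and(a, b))
print([relay_not(a) for a in (False, True)])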

Relay computers clack too loud! Gimme vacuum tubes!

So... let's take a look at the other main thing people used electricity for before coming up with the whole computer thing, our old friend the light bulb! Now I already touched a bit on the whole wacky alternating current thing, and I think this is actually one of the cases that eventually led to it being adopted so widely, but the earliest light bulbs tended to just use normal direct current, where again, you've got the positive end and the negative end, and we just take a little filament of whatever we have handy that glows when you run enough of a current through it, and we put that in a big glass bulb and pump out all the air we can, because if we don't, the oxygen in there is probably going to change that from glowing a bit to straight up catching on fire and burning immediately.

But, we have a new weird little problem, because of the physics behind that glowing. Making something hot, on a molecular level, is just kinda adding energy to the system so everything jitters around more violently, and if you get something hot enough that it glows, you're getting it all twitchy enough for tinier particles to just fly the hell off it. Specifically photons, that's the light bit, but also hey, remember, electrons are just kinda free moving and whipping all over looking for their naked proton pals... and hey, inside this big glass bulb, we've got that other end of the wire with the more positive charge to it. Why bother wandering up this whole coily filament when we're in a vacuum and there's nothing to get in the way if we just leap straight over that gap? So... they do that, and they're coming in fast and on elliptical approaches and all, so a bunch of electrons overshoot and smack into the glass on the far side, and now one side of every light bulb is getting all gross and burnt from that and turning all brown and we can't have that.

So again, part of the fix is we switched to alternating current so it's at least splitting those wild jumps up to either side, but before that, someone tried to solve this by just... kinda putting a backboard in there. Stick a big metal plate on the end of another wire in the bulb connected to a positive charge, and now OK, all those maverick electrons smack into here and aren't messing up the glass, but also hey, this is a neat little thing. Those electrons are making that hop because they're all hot and bothered. If we're not heating up the plate they're jumping to, and there's no real reason we'd want to, then if we had a negative signal over on that side... nothing would happen. Electrons aren't getting all antsy and jumping back.

So now we have a diode! The name comes because we have two (di-) electrodes (-ode) we care about in the bulb (we're just kind of ignoring the negative one), and it's a one way street for our circuit. That's useful for a lot of stuff, like not having electricity flow backwards through complex systems and messing things up, or converting AC to DC (when it flips, current won't flow through the diode, so we lop off the bottom of the wave, and hey, we can do that thing with capacitors to release their current during those cutoffs, and if we're clever we can get a pretty steady high).
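Here's that rectify-and-smooth trick as a rough numerical sketch in Python, under some hand-wavy assumptions (the droop factor is a crude stand-in for the real resistor-capacitor math): the diode lops off the bottom half of the wave, and the capacitor holds the output up between peaks.

import math

def rectify_and_smooth(steps=60, v_peak=5.0, droop=0.95):
    v_cap = 0.0
    out = []
    for i in range(steps):
        v_ac = v_peak * math.sin(2 * math.pi * i / 20)  # the wobbly AC coming in
        v_diode = max(v_ac, 0.0)             # the diode only conducts one way
        v_cap = max(v_diode, v_cap * droop)  # the cap tops up at peaks, leaks slowly
        out.append(round(v_cap, 2))
    return out

print(rectify_and_smooth())  # ripples between roughly 3 and 5 instead of swinging to -5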

More electrodes! More electrodes!

So a bit after someone worked out this whole vacuum tube diode thing, someone went hey, what if it was a triode? So, let's stick another electrode in there, and this one just kinda curves around in the middle, just kinda making a grate or a mesh grid, between our hot always flowing filament and that catch plate we're keeping positively charged when it's doing stuff. Well this works in a neat way. If there's a negative charge on it, it's going to be pushing back on those electrons jumping over, and if there's a positive charge on it, it's going to help pull those electrons over (it's all thin, so they're going to shoot right past it, especially if there's way more of a positive charge over on the plate). And here's the super cool part: this is an analog thing. If we have a relatively big negative charge, it's going to repel everything; if it's a relatively big positive, it's going to pull a ton across; if it's right in the middle, it's like it wasn't even in there; and you can have tiny charges for all the gradients in between.

We don't need a huge charge for any of this though, because we're just helping or hindering the big jump from the high voltage stuff, and huh, weren't we doing this whole weak-current-controlling-a-strong-current thing before with the relay? We were! And this is doing the same thing! Except now we're doing it all analog style, not slapping a switch with a magnet, and we can make those wavy currents peak higher or lower, and cool, now we can have phone lines boost over long distances too, and make volume knobs, and all that good stuff.

The relay version of this had that cool trick though where you could flip the output. Can we still flip the output? We sure can, we just need some other toys in the mix. See we keep talking about positive charges and negative charges at the ends of our circuits, but these are relative things. I mentioned way back when how you can use resistors to throttle how much of a current we've got, so you can run two wires to that grid in the triode. One connects to a negative charge and the other positive, with resistors on both those lines, and a switch that can break the connection on the positive end. If the positive is disconnected, we've got a negative charge on the grid, since it's all we've got, but if we connect it, and the resistor to the negative end really limits flow, we're positive in the section the grid's in. And over on the side with the collecting plate, we branch off with another resistor setup so the negative charge on that side is normally the only viable connection for a positive, but when we flip the grid to positive, we're jumping across the gap in the vacuum tube, and that's a big open flow so we'll just take those electrons instead of the ones that have to squeeze through a tight resistor to get there.

That explanation is probably a bit hard to follow because I'm over here trying to explain it based on how the electrons are actually getting pulled around. In the world of electronics everyone decided to just pretend the flow is going the other way because it makes stuff easier to follow. So pretend we have magical positrons that go the other way and if they have nothing better to do they go down the path where we have all the fun stuff further down the circuit lighting lights and all that even though it's a tight squeeze through a resistor, because there's a yucky double negative in the triode and that's worse, but we have the switch rigged up to make that a nice positive go signal to the resistance free promised land with a bonus booster to cut across, so we're just gonna go that way when the grid signal's connected.

Oh and you can make other sorts of logic circuits or double up on them in a single tube if you add more grids and such, which we did for a while, but that's not really relevant these days.

Cool history lesson but I know there's no relays or vacuum tubes in my computer.

Right, so the above things are how we used to make computers, but they were super bulky, and you'd have to deal with how relays are super loud and kinda slow, and vacuum tubes need a big power draw and get hot. What we use instead of either of those these days are transistors. See, after spending a good number of years working out all this circuit flow stuff with vacuum tubes, we eventually focused on how the real important thing in all of this is how, with the right materials, you can make a little juncture where current flows between a positive and negative charge if a third wire going in there is also positively charged, but not if it's negatively charged. And turns out there is a WAY more efficient way of doing that if you take a chunk of good ol' middle-of-the-electron-road silicon, and just kinda lightly paint it with the tiniest amount of positive leaning and negative leaning elements on the sides.

Really transistors don't require understanding anything new past the large number of topics already covered here, they're just more compact about it. Positive leaning bit, negative leaning bit, wildcard in the middle, like a vacuum tube. Based on the concepts of pulling electrons around from chemistry, like a circuit in general. The control wire in the middle kinda works in just a pass-fail sort of way, like a relay. They're just really nice compared to the older alternatives because they don't make noise or have moving parts to wear down, you don't have to run enough current through them for metal to start glowing and the whole room to heat up, and you can make them small. Absurdly small. Like... need an electron microscope to see them small.

And of course you can also make an inverter super tiny like that, and a diode (while you're at it you can use special materials or phosphors to make them light emitting, go LEDs!), and resistors can get pretty damn small if you just use less of a more resistant material. Capacitors I think have a limit to how tiny you can get, practically, but yeah, you now know enough of the basic fundamentals of how computers work to throw some logic gates together. We've covered how a relay, triode, or transistor functions as an AND gate. An OR gate is super easy, you just stick diodes on two wires so you don't have messy backflow, then connect them together and lead off there. If you can get your head around wiring up an inverter (AKA NOT), hey, stick one after an AND to get a NAND, or an OR to get a NOR. You can work out XOR and XNOR from there, right? Just build 4 NANDs, pass input A into gates 1 and 2, B into 2 and 3, 2's output into 1 and 3, and 1 and 3's outputs into 4 for a XOR; use NORs instead for a XNOR. That's all of them, right? So now just build a ton of those and arrange them into a computer. It's all logic and math from there.
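(Quick aside: if you want to sanity-check that 4-NAND recipe without a breadboard, here it is as a small Python sketch, using the gate numbering above. The NOR version of the same wiring gives you XNOR.)

def nand(a, b):
    return not (a and b)

def nor(a, b):
    return not (a or b)

def xor(a, b):
    g2 = nand(a, b)      # gate 2 gets A and B
    g1 = nand(a, g2)     # gate 1 gets A and gate 2's output
    g3 = nand(b, g2)     # gate 3 gets B and gate 2's output
    return nand(g1, g3)  # gate 4 gets gates 1 and 3

def xnor(a, b):
    g2 = nor(a, b)       # same wiring, NORs instead
    g1 = nor(a, g2)
    g3 = nor(b, g2)
    return nor(g1, g3)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, int(xor(a, b)), int(xnor(a, b)))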

Oh right. It's... an absurd amount of logic and math, and I can only fit so many words in a blog post. So we'll have to go all...


Navigation: Part 1 (Components) - Part 2 (Logic and Memory) - Part 3 (Miniaturization and Standardization) - Part 4 (Binary Math) - Donate to keep this project (and its author) alive.


How a Computer Works - Part 2 (Logic and Memory)

For those coming in late, I am writing a text-only explainer of how a computer works, starting from the absolute basics of running a current through various electronic components. We covered that much, and the reasons I'm doing this, back in part 1, where we also sort of left off on a glib little cliffhanger about how once you have logic gates, you're there, right? Well the thing of it is, getting to a point where you can easily make all the basic logic gates actually was really huge, historically, because the next big step to making a computer was already handled by weird math nerds hundreds of years before the physical hardware to make a computer was properly available.

As far back as 1705, math nerds were publishing papers on binary math. That was based on nerding out over the I Ching if you really want to trace things back, and by the time anyone really sat down and tried to build purely mechanical computers in the 1800s, they had this all figured out, to the point where I'm looking at a diagram from Ada Lovelace from 1843 that definitely covers more than I'm going to get to here. So, let's start catching up there. First though, as always, I have to remind you this blog is basically my job right now, and I'm dependent on some percentage of the people reading these posts to go throw me some money on patreon to continue to be alive, so I can write stuff like this.

Logic Gates

We talked earlier about the actual physical components needed to physically build a computer, at least of the sorts we've been using for the past hundred years or so. But really, all you need to make a computer is logic gates, a way to plug values in them, and a way for them to show some sort of output. We're doing that with clever tricks to make conditional electrical connections, but you can use anything, really. Clockwork, falling water, migratory crabs stepping on pressure plates, groups of people agreeing to poke each other's shoulders, or just mathing it all out on paper. All you really need is a consistent way to set up all the fundamental logic gates... and math nerds will note that there's a couple you can build all the others from if you're in a real bind.

So let's go over them again real quick. All of these take two inputs and give an output. We can call those inputs "on or off," "flowing or still," "yes or no," "true or false," or of course the popular "1 or 0." In terms of a computer, as we build them, the first option for the inputs is "if I follow the electrons getting pulled this way all the way down, I'm going to hit a big relatively positive charge" and our output is going to (hopefully) lead towards a big negative charge so we'll have a complete circuit and lights will light and all that. Again, we are going to just ignore how the actual movement of electrons is totally doing the opposite of what we're calling "input" and "output" here. You could build logic components where things really move the way that makes intuitive sense but... we didn't, and we're stuck with that.

So the first and easiest gate we have is the OR gate. An OR gate takes its two inputs, and it's looking for at least one that's on/yes/true/1, whatever you want to call it, and if it has that, it's going to pass it along as its output. So like, we've got two wires coming in; if either or both is connected to a positive charge, we're passing that connection along, maybe through an LED so it can light up and show how cool we are on the way to a negative charge, but if neither connects to a positive, we're not lighting our light, so we're passing along off/no/false/0. Simple. Gonna stick to just saying 1 and 0 past here for reasons of laziness. So, 0+0=0, 1+0=1, etc. But maybe we don't use + because that gets confusing with actual addition. So we'll just say
0 or 0 outputs 0.
1 or 0 outputs 1.
0 or 1 outputs 1.
1 or 1 outputs 1.

Then we've got AND gates. Here we pass along our 1 or whatever you want to call it if and only if both our inputs are 1. Just one or the other isn't going to cut it, it's gotta be both.
0 & 0 outputs 0.
1 & 0 outputs 0.
0 & 1 outputs 0.
1 & 1 outputs 1.

Then we've got the oddball of XOR, or exclusive or. If it wasn't a bunch of STEM people naming these we'd probably say like, "either" vs. "and/or" but this is for when we want exactly one of our inputs to be 1, not both. So,
0 xor 0 outputs 0.
1 xor 0 outputs 1.
0 xor 1 outputs 1.
1 xor 1 outputs 0.

Then we've got the evil twins of those, NOR, NAND, and XNOR. These give the exact opposite outputs their N-less cousins do.
0 nor 0 outputs 1.
1 nor 0 outputs 0.
0 nor 1 outputs 0.
1 nor 1 outputs 0.
NAND outputs a 1 any time it isn't getting two 1s.
XNOR is particularly badly named since it's outputting a 1 if and only if its inputs are the same as each other and "SAME" would both make more sense and have the same number of characters, but conventions are what they are.
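If it helps to see all six side by side, here's a tiny Python crib sheet that prints the full truth table for each gate (bitwise operators standing in for the actual wiring):

gates = {
    "or":   lambda a, b: a | b,
    "and":  lambda a, b: a & b,
    "xor":  lambda a, b: a ^ b,
    "nor":  lambda a, b: 1 - (a | b),
    "nand": lambda a, b: 1 - (a & b),
    "xnor": lambda a, b: 1 - (a ^ b),  # 1 exactly when the inputs are the SAME
}

for name, gate in gates.items():
    print(name, [(a, b, gate(a, b)) for a in (0, 1) for b in (0, 1)])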

Anyway, point is, if we can construct these six things and string them together, we have all our bases covered on every sort of behavior we could possibly want in terms of taking two inputs and spitting out an output, but the only immediately obvious use case there is if you want to hook multiple light switches up to control a single bulb in various ways and really screw with people trying to figure out what's up when they enter a room and flip a switch, right?

Well, we could also throw some diodes in and branch off from each switch into multiple gates controlling multiple lights with different switch combinations with each bulb following different logic. Especially when you remember that we can use the output of any given gate as the input to another, but what about something actually useful?

The Incredible Power of the S-R Latch

So... here's an actually useful thing we can do. We can store a value for later by looping it in with itself. Check it out, they call this sucker a set-reset, or S-R Latch, and all we need is to take two NOR gates, branch a wire off the output of each, and cross that over to be an input of the other NOR gate. So gate A is taking what we're going to call our Set wire as one input, and gate B's output as the other, and gate B's going to take what we're going to call our Reset wire, and the output from gate A. So uh, what does that do?

Well, let's do the math here. If we're just feeding 0s into S and R, then gate A is getting a 0 from S and, let's just guess for now, a 0 from gate B. It's a NOR gate, so feeding in two 0s outputs a 1. So... gate B is getting a 0 from R, and a 1 from gate A, so yeah, it was in fact putting out a 0. Hooray, that guess was right, nothing funky's going on here.

Now just for kicks, let's hold down a button for a second to connect to something and make S a 1. OK, NOR gate A now has a 1 and a 0, so it's outputting a 0... and gate B is taking that 0 and the 0 it had from R, so, it's a 1 now, and that 1 goes back to gate A, so gate A has two 1s, which doesn't change how it's putting out a 0 so, no further changes to keep track of there.

Well cool, let me just let go of this button then. Oh well NOW instead of getting a 1 and a 1, NOR gate A is getting a 0 and a 1... but whatever, it's still outputting 0. It's gonna keep outputting a 0 unless it has two 1s after all, that's what a NOR gate does, and uh... huh. It's kinda stuck now isn't it. Doesn't matter if S is a 0 or a 1, gate A is stuck outputting a 0 and gate B is stuck outputting a 1! Gate B is the output we care about here if that wasn't clear. We've rigged it up so that yeah, if there's ever a point where S gets pulled positive/set true/set to 1/turns on, that output gets stuck as the same, and the only way to get it back to a 0 is to go and put a 1 into R. The whole thing's symmetrical so we shouldn't have to step through all this. A ends up outputting 1, so B ends up outputting 0, and R no longer matters.

Well that's all pretty cool, but hey, just to be thorough, what happens if we put a 1 on both S and R at the same time? Well, if A was outputting 1 B gets two 1s, so it outputs a 0 to A, so that's a 1 and 0 for A so it gives B a 0 so B gives A a 1 so A gives B a 0 so... oh we broke it didn't we! In pure logic land this is some sort of paradox. In real life there aren't really any 0s and 1s just wires at different charges and components that don't work right when those aren't steadily in certain ranges so... it resolves SOMEHOW but uh... let's maybe avoid this whole situation.
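Here's that whole walkthrough condensed into a little Python simulation, using the gate A / gate B naming from above. The loop just keeps re-evaluating the two NORs until they settle, which they do for every input combination except that both-1s paradox case:

def nor(a, b):
    return 0 if (a or b) else 1

def sr_latch(s, r, q, nq):
    # q is gate B's output (the one we care about), nq is gate A's
    for _ in range(8):
        new_nq = nor(s, q)   # gate A: Set, plus gate B's output
        new_q = nor(r, nq)   # gate B: Reset, plus gate A's output
        if (new_q, new_nq) == (q, nq):
            break
        q, nq = new_q, new_nq
    return q, nq

q, nq = 0, 1
for s, r in [(0, 0), (1, 0), (0, 0), (0, 1), (0, 0)]:
    q, nq = sr_latch(s, r, q, nq)
    print(f"S={s} R={r} -> stored bit {q}")

Pulse S and the stored bit sticks at 1 through the following (0, 0) step; pulse R and it sticks back at 0.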

The S-R latch sucks! Let's make a D flip-flop!

Well one simple thing we can do is have another input wire in the mix, we call it Enable, and we AND it with S and R before we send them into the latch, so a 1 can only get in there if we give it the green light. That seems like a good sort of design to use with everything we do with these gates, actually. When a wire's going towards a cool setup, let's AND it with an enable signal that has to have a 1, or else it's going to stay giving a 0 and keep the setup from getting up to anything. Maybe more than one enabler even.

You know what else would be nice though? If we didn't need two separate wires: a 1 down one to store a 1, and a 1 down the other to store a 0. If we're doing this whole Enable signal thing and making sure nothing happens otherwise, what if we just like, branched off our set wire, ran that through an inverter, and put that into R? That'd just make it so we can have some sort of Data line with whatever value on it, and whenever we turn on Enable it gets stored, right? Now it's a D latch.

Plus we're never going to have a 1 on both of those inputs, right? Well see, this is one of those situations where the whole perfect logic of 0s and 1s thing doesn't quite fit with reality. It takes SOME time for that inverter to invert (and technically some time for signals to travel along these wires, but less when there aren't any components in the way). So it turns out if you have a wire branch out, send one end through an inverter, then plug both into say an AND gate, every time you change what's on that wire there's going to be this super short little burst of a 1 coming out of that AND. That's very inconvenient for us right now, but it's a cool hacky trick to have in our pocket if we ever want to time something to do something real quick at the very moment the voltage on a wire shifts from low to high.
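That hacky trick is easy to fake in Python if we model the inverter as lagging one time step behind its input; the AND only sees two 1s during that brief catch-up window on each rising edge:

signal = [0, 0, 1, 1, 1, 0, 0, 1, 1, 0]
slow_not = 1   # the inverter's output, lagging one step behind its input
pulses = []
for s in signal:
    pulses.append(s & slow_not)  # AND of the wire and its slow inverse
    slow_not = 1 - s             # the inverter catches up for the next step
print(pulses)  # [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]: one blip per rising edge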

We can pull the same sort of trick with a capacitor then a resistor going to a low. Also I should never say "at the very moment." There's values on all these components and mathematical relationships, and when you're really building these things and needing precision timing (because with a computer we want things happening in a very specific order, but VERY FAST) you need to actually do that math. And you know, it's actually very handy for our purposes of avoiding that 1-on-both-inputs issue if we do something like that, so OK, new design from the top here just for a clean visualization:

We've got a line with our clock signal (I'll get to that in a moment). We run that through the capacitor-resistor-to-negative setup to get this quick little pulse every time the clock line ticks up to positive. Now we split that out to AND gates A and B. We've also got a Data line, it's gonna have whatever. We also split that off, one line going to gate A, the other going through an inverter and into gate B. AND gate A's output goes into an input for NOR gate A, along with the output from NOR gate B. AND gate B's output goes into NOR gate B along with NOR gate A's output. You can wrap all that up in a little box now, and so long as we remember to make changes to that data line out of sync with the rising edge of our clock signal, this all works super great. Change the data, pulse the clock, data gets latched into our new thing we're calling a D flip-flop. We've got a wire coming out the other end from NOR gate B's output that's just gonna hold whatever value until the next time the clock pulses and we check our input again, and we can use AND gates to skip the check if we don't want to read a new value in, or don't want to send that new value off somewhere else.
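Sketched behaviorally in Python (a summary of what the finished box does, not a gate-by-gate model), the D flip-flop acts like this:

class DFlipFlop:
    def __init__(self):
        self.q = 0            # whatever value we latched last
        self.last_clock = 0

    def tick(self, clock, data):
        if clock and not self.last_clock:  # rising edge: the pulse trick fires
            self.q = data                  # latch whatever's on the data line
        self.last_clock = clock
        return self.q

ff = DFlipFlop()
for clock, data in [(0, 1), (1, 1), (0, 0), (1, 0), (0, 1), (1, 1)]:
    print(f"clock={clock} data={data} -> holding {ff.tick(clock, data)}")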

What's this about a clock?

OK, so, we want our computer to do stuff. Our computer can really only ever do one thing at a time, because the only thing it can really do is latch values into setups like we have above, and messing with two inputs being positive can make logic gates go screwy. So like we already did with the D flip-flop, we can totally time things along the rising and falling edges of a charge flipping constantly between high and low values, and time everything off that. And we don't even need to do any extra work for that because it JUST SO HAPPENS that smart people worked this all out before any of your fancy transistors were even a thing, and the electricity comes right out of your wall in this neat sine wave pattern that peaks 50 or 60 times a second depending what part of the world you're in, on the assumption that any electronics that need any sort of timing can work with that.

And I mean that actually is true, but screw that, we're building a computer. We rig something up to convert that to DC and then we pump that into a chunk of quartz or something. See there's this thing called the piezoelectric effect where the structure of certain kinds of crystals makes them change shape when you run electricity through them then snap back and release it. So you grow a quartz crystal exactly how you want it and lock it up in a little box and run electricity through it, and it'll start twitching away in there at a rate set by how the crystal was cut and sized, and you just hook in another wire and you get these nice steady alternating high low pulses. Or something close enough to that anyway.

And we're "reading values in" and "passing them along" how?

So we talked about enable lines before, right? Like with our D flip-flop there, where we ended up only committing changes to what was latched in when there was a clock pulse? It's easy enough to just have more AND gates for more conditions. Like let's say for the sake of argument and convenient numbers we set up, oh, 8 of those D flip-flops. They each have their own data line, they're sharing a clock, and we're throwing another AND in on the clock line to a shared Read Enable line. We have a 0 there, nothing's going to happen. We have a 1, things will happen when the clock pulses, specifically we latch in whatever's on the data line. Now let's also have an Output Enable line, and we'll AND that in with the output of every flip-flop, and in the interest of being lazy, let's have each flip-flop's output just loop back and connect to its own data input line... maybe have some diodes in there so it's a one way loop, we probably already have some in here but you know, best practices.

Anyway, let's take some really really long wires and we'll call these our bus lines. Each of our 8 data lines connects to a bus line. Elsewhere in the computer, anywhere else we're going to have data sitting around in fact, those also connect to these 8 bus lines. We might have a set of 8 flip-flops to hold some set of data for just a little bit, or for a long time, or connected to some switches or buttons, or just some LEDs or other kind of output, whatever. Everything connects to the bus, and everything has enable lines to pull in whatever values are on the bus and to push out whatever onto the bus. We definitely don't want to have more than one thing trying to push stuff out at a time, we also definitely want to do that whole pull-down resistor thing to make sure everything on the bus defaults to 0 if we aren't feeding in a 1, and we probably don't want to be reading from the bus at the exact moment the data on it is changing as a best practices thing, ideally, but oh, quick sidetrack there.

Let's say we have a clump of these flip-flops, we call that register A, we have another we call register B, and we build some little module that treats what's in both of those like an 8 bit number and adds them together, we call the output of that our sum register. And say we get lazy, we leave the sum register's output enabled, A's input enabled, and we've just got 00000001 stored in B. Now every time the clock pulses and updates our math function, the sum increases by 1, feeds right back into A, and the whole thing ends up counting up at the speed of the clock. There's simpler ways to make something count up, but, hey it's a thing you can do. And a thing you probably don't want to do, so don't leave those pins enabled all over.
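Sketching that happy little accident in Python (register names made up, everything truncated to 8 bits like the hardware would be):

reg_a, reg_b = 0b00000000, 0b00000001

for _ in range(5):
    sum_out = (reg_a + reg_b) & 0xFF  # the 8 bit adder module, wraps past 255
    reg_a = sum_out                   # sum output enabled, A input enabled, oops
    print(f"register A now holds {reg_a:08b}")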

In fact, if you really wanted to be safe, you'd maybe want to just like put a pull-down on every enable line and have them all lead to a big control zone where you're just sitting there holding a live positive wire in your hand and touching it to whatever one thing you want enabled at a time. Seems like a pain though.

Why not use an addressing system?

OK, so how about this? What if we organize everything so we've got like, our big longterm memory area, and we have a bunch of these registers with a flip-flop for every line on the bus, and then instead of a simple enable pin for each register, we have a unique little access code for each? Let's have oh... 4 dedicated data lines just for managing these, right? So one of these registers is oh, memory address 1001. So we just have lines hardwired to carry those values, and we XNOR those with our address lookup lines, then we AND all those together, and use THAT for our enable.

I already covered how XNOR makes way more sense if we just call it SAME instead, right? We're only going to pass a 1 to the set of ANDs at the end if we've got a 1 and we're getting another 1, or we've got a 0 and we're getting another 0, and if we pass a single 0 to the ANDs, they clam up and don't enable things, but if we pass the whole value, we're in.
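In Python terms, the whole address check is just a SAME (XNOR) on every bit followed by one big AND, something like this sketch:

def address_match(hardwired, lookup):
    # XNOR each bit pair (1 when they're the SAME), then AND them together;
    # a single mismatch anywhere gives a 0 and the register stays disabled
    enable = 1
    for a, b in zip(hardwired, lookup):
        enable &= 1 - (a ^ b)
    return enable

print(address_match([1, 0, 0, 1], [1, 0, 0, 1]))  # 1: that's our number
print(address_match([1, 0, 0, 1], [1, 0, 1, 1]))  # 0: not for us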

We can use a similar address coding thing to activate cool little function modules too. Like that thing I mentioned in the tangent for doing addition? We have some operation code to enable the output on whatever memory register has a number we care about and make register A in that math module read it in. Another to plug a value into B. Another to output the sum to some memory address we want to store it in. We can set it up so these get checked for if someone sets toggle switches corresponding to the code and locks it in with an enable button, or hey, we can set up one of those big blocks of addressed memory, lay out the whole sequence of actions we want in sequential addresses, and then just have some function that adds 1 to itself every clock cycle as a line counter, and enable the outputs of our program counter to dump those stored commands out to our opcode and memory address lines when their number gets called at the deli, as it were. Hey, make one of the commands write to the program counter and it can even skip around.
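Here's that whole scheme as a toy Python loop, with made-up opcode names and loads simplified down to immediate values, just to show the shape of the thing:

memory = {0: ("LOAD_A", 2), 1: ("LOAD_B", 3), 2: ("ADD_TO", 9), 3: ("JUMP", 0)}
data = {9: 0}
reg_a = reg_b = 0
pc = 0  # the program counter, adding 1 to itself every cycle

for _ in range(8):  # run a handful of clock cycles
    op, arg = memory[pc]
    pc += 1
    if op == "LOAD_A":
        reg_a = arg
    elif op == "LOAD_B":
        reg_b = arg
    elif op == "ADD_TO":
        data[arg] = (reg_a + reg_b) & 0xFF  # dump the sum at some address
    elif op == "JUMP":
        pc = arg  # a command that writes to the program counter skips around

print(data[9])  # 5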

And... there you go. That's how a computer works. There's more stuff I could, and probably should, cover, like how to rig up a program counter and an addition module and maybe some sort of real output display. Not to mention how to actually, practically, compress all of this into a reasonable space so you don't just have a few thousand transistors soldered together in a giant tangle with the nightmare of keeping contacts from touching. I'll probably get to at least some of that in some future part 3. In the meantime, I learned most of what I'm sharing here by actually for real building my own computer using a kit and a series of instructional videos from eater.net, which is just the homepage of some cool guy who, yeah, posts long explainer videos on this stuff and sells electronics kits you can follow along with. I don't have any advertising deal going here or anything, he's just genuinely a good extra source of info for getting your head around this stuff. And again, if you thought this was a cool read or you're just feeling generous, throw me a little money maybe?


Navigation: Part 1 (Components) - Part 2 (Logic and Memory) - Part 3 (Miniaturization and Standardization) - Part 4 (Binary Math) - Donate to keep this project (and its author) alive.


How a Computer Works - Part 3 (Miniaturization and Standardization)

For anyone just joining in, I'm writing a series of posts explaining, perhaps haphazardly, all there is to know about how a computer works, from the most basic fundamental circuitry components to whatever level of higher functionality I eventually get to. As explained in the first post on this subject, I am doing this just in pure text, so that if you are inclined you can straight up print these posts out or narrate them onto some audio tape or whatever and have full access to them should every computer in the world suddenly collapse into a pile of dust or something. Part 1 mainly covered the basic mechanical principles of circuitry and how to physically construct a logic gate. Part 2 covered logic gates in detail and how to use them to create a basic working architecture for a general purpose computer. Today we're going to be talking more about what you're looking at when you crack a machine open, so you can make sense of all the important fiddly bits and have maybe a starting point on how to troubleshoot things with a multimeter or something.

Before getting into it though, I do have to shake my little donation can again and remind you that I do not know how I am going to get through the winter without becoming homeless, so if this is valuable to you, I'd appreciate some help.

Boards of Bread and Printed Circuits

With the things I've explained so far, you could totally build a computer right now, but it'd be a bit messy. You can totally buy resistors, transistors, capacitors, and diodes by the bagful for basically nothing, and cheap rolls of insulated wire, but there's all these long exposed pins to cut short and soldering things in mid-air is a messy nightmare and you'd just have this big tangle of wires in a bag or something that would almost certainly short out on you. So let's look into ways to organize stuff a little.

If you start playing around with electronics on your own, one of the first things you want to hook yourself up with besides raw components and wires is a breadboard or 12. And if you're watching people explain these things with visual aids, you'll also see a lot of them, so it's good to know exactly what they are and how they work. Your standard breadboard is a brick of plastic with a bunch of little holes in it. Incidentally, the name comes from how the first ones were literally made from the wooden cutting boards for slicing bread that people recycled for the purpose. Inside these holes there's some pinching bits of conductive metal which connect to each other in a particular way (pretty sure you can just see the strips that connect each row if you pry the bottom off), so you can just jam a thin wire or prong into a hole, have it held in place, and make a connection to every other hole it's connected to under the surface.

There is a ton of standardization to all of this. The holes should always be 0.1 inches apart (2.54 mm) and split into two big grids. Every one I've ever seen has 63 rows, each with 5 holes labeled A-E, a shallow channel through the middle of the board, and then another 5, F-J, and we generally have numbers printed every 5 rows. Down underneath, for any given row, the set of 5 pins on each side of the channel are connected. So, holes 1A, 1B, 1C, 1D, and 1E are all connected to each other, and nothing else. Holes 1F, 1G, 1H, 1I, and 1J are also connected to each other. There's no connection though between 1E and 1F, or 1A and 2A.

Most breadboards will also have a couple of "power rails" along the sides. These are just going to be labeled with a long red line and +, and a long blue or black line and -, and have holes in 2x5 blocks staggered out. With these, all 25 or 50 or whatever holes near the red + line connect with each other, and all the ones near the black line connect with each other. The gaps every 5 holes don't serve any purpose beyond looking different enough from the big grid so you hopefully don't mix it up and forget that these ones all connect down the length, and not in little clumps across the width like everything else. The idea, for the sake of convention, is you plug a wire connected directly to the positive side of your battery or DC adapter or whatever into any red line hole, the negative side to any blue/black hole, and then tada, you can make a circuit just by plugging a wire in from red to a normal grid line, whatever bits you want span from that grid line to another, and eventually you connect the far end back anywhere on the black/blue line.

With a nice breadboard, there's also little snap-together pegs along the sides, and the power rails are just snapped on with those. So you can just kinda cut through the backing with a knife or some scissors, snap those off, connect multiple boards together without redundant power rails in the middle, and then just have these nice spare long lines of linked sockets. In the computer I'm building on these, I'm just using spare power rails for the bus. Oh and the big grooved channel down the middle also has a purpose. Bigger electronic components, like our good good friend the integrated circuit, are generally designed to be exactly wide enough (or more, but by a multiple of 0.1 inches) to straddle that groove as you plug their legs into the sockets on either side, so they nicely fit into a breadboard, and there's a handy gap to slide something under and pry them off later on.

Typically though, you don't see breadboards inside a computer, or anything else. They're super handy for tinkering around and designing stuff, but for final builds, you want something more permanent. Usually, that's a printed circuit board, or PCB. This is pretty much what everyone's going to picture when they think about the guts of a computer. A big hard (usually green) board with a bunch of intricate lines, or "traces," running all over, made of (usually) copper. And maybe with some metal ringed holes punched all the way through (they call those vias). These tend to look really complicated and maybe even a little magical, but honestly they're just pre-placed wires with a sense of style.

Most of the material of the board is insulated. The copper traces conduct real well, and manufacturers have done the math on just how close together they can be run without connecting to each other in places you don't want. The holes that go all the way through are for either plugging in other bits that tend to come with long legs you maybe want to keep intact, or just ways to run a trace through to the other side, where we often have traces on the back too to maximize our space. Most of what makes them look all cool and magical is how the traces run as close packed as possible to conserve space, and tend to only turn at 45 degree angles, which is just an artifact of how the machinery used to etch them out used to be iffy about anything else.

So tada, you have all your wires pre-stuck to a nice sturdy board, and maybe even have labels printed right on there for where you solder all the various components to finish the thing. Oh and when you hear people talk about like, motherboards and daughterboards? The big main board you have for everything is a motherboard. Sometimes you need more than that, so you make smaller ones, and connect them up either with some soldering or cartridge style with end-pins sliding snugly into sockets, and those we call daughterboards.

Integrated Circuits, or as they're also known, "chips"

The last thing you're likely to find if you crack open a computer, or just about any other electronic device that isn't super old or super super simple, are integrated circuits. Generally these are thin black plastic bars that look like you'd maybe try to awkwardly use them to spread cheese or peanut butter on crackers in a prepacked snack or something, with rows of tiny little legs running along either side. Kinda makes them look like little toy bugs or something. Sometimes they're square with pins along every edge, because sometimes you need a lot of pins. These are integrated circuits, or microchips, or just chips, and wow are they handy.

Sometime back in the 60s when people were really getting their heads around just how ridiculously small they could make electronic components and still have them work, we started to quite rapidly move towards a point where the big concern was no longer "can we shrink all this stuff down to a manageable size" and more "we are shrinking everything down to such an absurdly tiny size that we need to pack it all up in some kind of basically indestructible package, while still being able to interact with it."

So, yeah, we worked out a really solid standard there. I kinda wish I could find more on how it was set or what sort of plastic was used, but you take your absurdly shrunken down complex circuit for doing whatever. You run the teensiest tiniest wires you can out from it that thicken up at the ends into standard toothy prongs you can sink into a breadboard or a PCB with that standardized pin spacing, and you coat it all in this black plastic so firmly enveloping it that nothing can move around inside or get broken, hopefully.

And honestly, in my opinion, this is all TOO standardized. The only real visible difference between any two given integrated circuits is how many legs they have, and even those tend to come to some pretty standard numbers. They're always the same size, shape, and color, they all have the same convention of having a little indented notch on one side so you know which end is which, and they all seem to use just the worst ink in the world to print a block of numbers on the back with their manufacturer, date of assembly, a catalog number, and some other random stuff.

For real, if there's any real comprehensive standard for what's printed on these, I can't for the life of me find it. All I know is, SOMEWHERE, you've got a 2 or 3 letter code for every manufacturer, a number for the chip, and a 4 digit date code with the last 2 digits of the year, and which week of that year it was. These three things can be in any order, other things can also be on there, probably with zero spacing, and usually printed in ink that wipes away like immediately or at least is only readable under really direct light. It sucks.

Once you know what a chip is though and look up the datasheet for it, you should have all sorts of handy info on what's inside, and just need to know what every leg is for. For that, you find which end has a notch in it, that's the left side, sometimes there's also a little dot in the lower left corner, and hopefully the label is printed in alignment with that. From there, the bottom left leg is pin 1, and then you count counterclockwise around the whole chip. You're basically always going to have positive and negative power pins, past that anything goes. You can cram a whole computer into a single chip, you can have someone just put like 4 NAND gates on a chip for convenience, whatever.

OK, but how do they make them so small?

OK, so, mostly a circuit we're going to want to shrink down and put on a chip is just gonna be a big pile of logic gates, we can make our logic gates just using transistors, and we can make transistors just by chemically treating some silicon. So we just need SUPER flat sheets of treated silicon, along with some little strands of capacitive/resistive/insulating material here and there, and a few vertically oriented bits of conductive metal to pass signals up and down as we layer these together. Then we just need to etch them out, real real small and tight.

And we can do that etching at like, basically infinite resolution it turns out. It just so happens we have access to special coatings that break down under fairly intense UV light, so acid can then eat through the materials underneath exactly where the light hit and nowhere else. And a thing about light is when you have say, a big cut out pattern that you hold between a light and a surface, it casts a shadow on it... and the scaling of that shadow depends entirely on the distances between the light, the pattern, and the surface. So if you're super careful calibrating everything, you can etch a pattern into something at a scale where the main limiting factors become stuff like how many molecules thick things have to be to hold their shape. Seriously, they use electron microscopes to inspect builds because that's the level of tininess we have achieved.

So yeah, you etch your layers of various materials out with shadow masks and UV acid, you stack them up, you somehow align microscopic pins to hold them together and then you coat the whole mess in plastic forever. Tada. Anything you want in a little chip.

ROMs, maybe with various letters in front

So there's a bunch of standard generally useful things people put into ICs, but also with a computer you generally want some real bespoke stored values with a lookup table where you'll keep, say, a program to be run by feeding whatever's inside out to the bus line by line. For that we use a chip we call Read Only Memory, or ROM. Nothing super special there, just... hard wire in the values you need when you manufacture it. Manufacturing these chips though is kind of a lot, with the exacting calibrations and the acid and the clean rooms and all. Can't we have some sort of Programmable ROM? Well sure, just like build it so that all the values are 1, and build a special little thing that feeds more voltage through than a fuse can handle and physically destroys the fuse for everything you don't want to be a 1.
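The semantics of that fuse-blowing, in a quick Python sketch: programming a PROM can only ever turn 1s into 0s, never back.

prom = [1] * 8         # fresh from the factory: all fuses intact, all 1s

def burn(rom, position):
    rom[position] = 0  # physically destroying a fuse is strictly one-way

burn(prom, 2)
burn(prom, 5)
print(prom)  # [1, 1, 0, 1, 1, 0, 1, 1]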

OK, that's still kind of a serious commitment. What if I want to reuse this later? Oh, so you want some sort of Erasable PROM? OK, someone came up with a funky arrangement where instead of literally destroying fuses, writing traps a little electric charge in an isolated spot to mark each 0, and if you expose the guts of the chip to direct UV light through a little window, all that trapped charge bleeds away and everything resets back to 1. Just like, throw a sticker over the window when you don't want to erase it. Well great, but can we maybe not have me desolder the chip and take it out to put under a lamp? Oh la de da! You need Electronically Erasable PROMs? EEPROMs? I guess we can make THAT work, somehow. They're still gonna be slow to write to though, can't have everything. I mean, not unless we invented like, flash memory. Which somehow does all this at speeds where you can use it for long term storage without it being a pain. So that's just kinda the thing we have now. Sorry, I don't quite get the principles behind it enough to summarize. Something about floating gates and needing less voltage or whatever. Apparently you sacrifice some read speed next to older options, but hey, usable rewritable long term storage you just plug in, no jumping through extra hoops.

So OK. I think that's everything I can explain without biting the bullet and explaining ALUs and such. Well, there's keyboards (they're just buttons connecting input lines), monitors (these days, LEDs wired up in big grids), and mice (there's spokes in wheels that click X times or cameras checking the offset values of dust on your desk or whatnot).

Maybe throw me some money before we move on?


Navigation: Part 1 (Components) - Part 2 (Logic and Memory) - Part 3 (Miniaturization and Standardization) - Part 4 (Binary Math) - Donate to keep this project (and its author) alive.


How a Computer Works - Part 4 (Binary Math)

This is the 4th part in a series of posts explaining how computers work such that you can build your own from just wires and spare electronics (or hell, Minecraft redstone signals, a carefully balanced water fountain, anything you can build logic from really). The series starts in this post, the most recent entry before this was part 3, but the only REALLY required reading for this one should be part 2. Get that knowledge in your brain so this next bit can make sense to you.

Also, I'm basically teaching a pretty in-depth computer science class here for free out of the goodness of my heart, so if you have the cash to spare, maybe consider throwing a little money my way so I can keep surviving and doing stuff like this?

Our focus for today's lesson is going to be actually designing one of these modules we have hooked up to the bus, to actually do stuff with any data we pass into it. As I've mentioned a few times, all of this stuff we're passing along can be thought of in a lot of different ways. Completing a circuit, where tracing one wire out connects to a positive charge and the other to a negative, means the same thing as a gate saying true, or will turn on a light tied in there, or we can call it a 1 in our abstract computery talk, or several other things, but we're doing math today, so let's think about numbers.

Let's think in Binary

So I think I've referenced binary numbers a few times in a really hand-wavey sort of way, but it's good to stop and make sure we all get the concept thoroughly. Normally, when we think about numbers, we're using our good pals the Arabic numerals- 0 1 2 3 4 5 6 7 8 9. We just decided to make unique little squiggles to represent these first ten numbers if we include 0, and then if we add together 9+1, we're out of symbols, so we start a new column, put a 1 in it, and reset to 0 in the one we're in. So, 9+1=10. We call this "base ten math" because ten is where we have to start that new column... but really, we kinda just picked ten out of a hat for this? Presumably it's because most of us have ten fingers.

Maybe if we all had hands like typical American cartoon characters, we'd only have made eight unique symbols. 0 1 2 3 4 5 6 and 7. Add 1 to 7 and we start a new column there instead of after coming up with symbols for those fingers we don't have. In base eight math, 7+1=10. It's a smaller group we're dedicating that next numeral over to, but you can see how that works, right?

Or hey, what if the first person to start counting stuff on their fingers just thought about it differently? You can totally hold up 0 fingers. So really, on just one hand you can easily go 0 1 2 3 4 5. Well, what if we just use our other hand past there? Every time we run out of fingers on our right hand, we reset it to zero and add one on our left. It's base six math in this example, but hey, with just our hands we can display any number from 0 to a base six 55! Which in base ten would be, let's see, 5x6+5, so, yeah, any number from 0 to 35, but that's still pretty good. Converting back out of base six is kind of a pain since you've gotta stop and do that multiplication, but if we all just kinda thought in base six we wouldn't need to convert at all.

And hey, what if we really thought big here? Instead of using one hand for the next column of numbers, we could just treat every finger as a column on its own. Holding some of the required groupings of fingers up can kinda give you a hand cramp, but hey, we've got ten columns that can hold a 0 or a 1, so we can count all the way up from 0 to 1111111111! Or uh, in base ten, 1023. Still a really impressive number though! Just explaining this to you, I've upped how high you can count on your fingers by more than a hundred times. You're welcome! Sorry about the hand cramps. We're not looking into binary math for the sake of saving fingers though, we're doing it because we're designing logic circuits and doing math on the assumption that the only symbols we have to count with are 0 and 1. Anyway, just so we're on the same page, let's count up from 0 in binary for a while here:

0, 1, 10, 11, 100, 101, 110, 111, 1000, 1001, 1010, 1011, 1100, 1101, 1110, 1111, 10000.

You can follow along with the pattern right? And if you're curious what that'd be all standard base 10 style, let's count through that same number of... numbers that way.

0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16. Keep an especially close eye on 1, 2, 4, 8, and 16 to make it a little easier to count along. Those are the ones where we're adding a new column in binary, and hey look, it's all the powers of 2. If you have to convert in your head, that makes it easier.
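If you'd rather let a computer double check all this counting, here's a quick Python sketch (my own throwaway conversion function, nothing standard you're expected to know):

    def to_base(n, base):
        """Repeatedly divide by the base; the remainders are the digits,
        lowest column first, so we build the string right to left."""
        digits = "0123456789ABCDEF"
        if n == 0:
            return "0"
        out = ""
        while n > 0:
            out = digits[n % base] + out
            n //= base
        return out

    for n in range(17):
        print(n, to_base(n, 2))    # the same 0 to 10000 count as above
    print(to_base(35, 6))          # 55, our two-handed base six maximum
    print(int("1111111111", 2))    # 1023, ten fingers of binary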

Binary Addition

So let's try thinking in JUST binary now and do some basic math. Before we get into the double-digits- wait, no, if we're being pedantic, "digits" are a counting-on-your-fingers, base ten sort of word, and we're in base two, where the things we count with are binary digits, or bits- before we get into the double bits, I guess, we've just got:

0+0=0. 1+0=1. 0+1=1. 1+1=10

Hey, wait. Does that pattern look familiar to you? Like, we had to go to a second bit for 1+1, but just ignore that for a moment and look at the lowest bit. Humor me. We saw this same pattern in part 2!

0 xor 0 outputs 0. 1 xor 0 outputs 1. 0 xor 1 outputs 1. 1 xor 1 outputs 0.

Oh damn. So if we want to add two bits of data, we just XOR them. All we have to worry about is the spill-over into the next column. Well... hell, let's see what this looks like if we're looking at two columns here.

00+00=00. 01+00=01. 00+01=01. 01+01=10.

If we just look at the "1s column" digit, yeah, XOR works. And is there a pattern for the "10s column?" Well, it's a 0 for everything except when we go 1+1... we had a logic circuit for that too though, right? Yeah, good ol' AND. Only outputs 1 if value A and value B it's looking at are both 1.

So OK. We rig up a circuit that has a XOR gate and an AND gate. We feed the two bits we want to add into both of these gates, and we can display our answer as a two bit number, with what the AND spits out on the left and what the XOR spits out on the right. BAM. We are masters of addition... so long as the highest numbers we want to add together are 1+1. We uh... we should probably try to improve upon that. Also, we've got this whole structure to the whole computer where we've got these registers feeding in and out of a bus with a fixed number of data bits on it, so it kinda would be nice if the number of bits going back out to our bus was the same as the number coming in to our addition circuit... and like, yeah, that's kind of an impossible goal, since it's always possible when adding two numbers the same length that you need an extra column to display the answer, but you know, if the first bit of both of the numbers we're adding is a 0 it'll fit, so let's get to that point at least.
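If it helps to see that outside of wiring terms, here's the same two-gate setup as a Python sketch (a toy model of the idea; the real thing is just the two gates, no code running anywhere):

    def half_adder(a, b):
        """Add two single bits: XOR gives the 1s column of the answer,
        AND gives the carry into the next column."""
        sum_bit = a ^ b   # ^ is XOR in Python
        carry = a & b     # & is AND
        return carry, sum_bit

    for a in (0, 1):
        for b in (0, 1):
            carry, s = half_adder(a, b)
            print(f"{a}+{b} = {carry}{s}")  # 0+0 = 00 ... 1+1 = 10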

So OK. Let's expand things out. We're adding any 2 bit numbers now, and let's pretend we've got like a calculator with a 3 bit display.

000+000=000. 001+000=001. 000+001=001. 001+001=010.

010+000=010. 011+000=011. 010+001=011. 011+001=100.

000+010=010. 001+010=011. 000+011=011. 001+011=100.

010+010=100. 011+010=101. 010+011=101. 011+011=110.

I'm being kinda redundant with showing 0+1 and 1+0 and such. Let's narrow these down to just the ones where we need a new bit of logic to make things happen though. The 1s bit is groovy. We feed the 1s bits of ANY two numbers into a XOR gate, we get the correct 1s bit for our answer. And if the next bits over are 0s, we can pop what's coming out of our AND gate into that next column and that's fine too. We're also good if we just look at the 10s column, everywhere we don't need to worry about the 1s column affecting it. The places where we need to do more with our logic are just where we're doing the whole "carry the 1" thing. I already set up the grid of all these so that's just the stuff in the far right column, so keep your eye on those.

And let me just kinda blank out these other bits so we're really focused in on the part where there's a problem...

_0_+_0_=_1_. _1_+_0_=_0_. _0_+_1_=_0_. _1_+_1_=_1_.

Well huh. If we're just looking at a bit in the middle of our big long number, and we're carrying a 1 to that position, we sure seem to be getting the exact opposite of what we get when we aren't carrying anything in here. So OK, let's redesign our logic circuit here. We've got our bit A wire and our bit B wire coming in like we did before, going into that XOR for this output bit, but we need to add a wire for whether we're carrying a 1 in from the next circuit over, and if so, flip that result. Do we have a way to do that easily? Well OK, logic chart time. If we have a 0 and no carry, we want 0. I'm lazy, so: 0 bla 0 = 0, 1 bla 0 = 1, 0 bla 1 = 1, 1 bla 1 = 0. Oh, that's another XOR gate. We XOR A and B like before, and then just XOR that result with our carry bit, and we are definitely displaying the right thing in this part of our answer. Now we just need to double check if our corner case of handling a carry messes with the next carry anywhere and... oh damn, yeah.

011+001=100, and 001+011=100. These are the cases where the 1s column carrying a 1 to the 10s column means we have to do something different with that carry bit. So, we're still making our carry-the-1 result a 1 if A and B are 1... but we also need to make sure it's a 1 if we are both carrying something in, AND our original XOR gate is spitting a 1 out. Well we can throw that AND in there, and we can throw in an OR to check either of these two conditions, and there's our new and improved carry-the-1? result.

So let's put it all together now!

For a given bit, we have value A, value B, and Carry. We have a XOR gate that takes A and B in. We feed the result of that and Carry into another XOR gate. That spits out the sum for this bit. Then we AND the result of that first XOR with our Carry, and feed that result into one side of an OR gate. We feed A and B into a second AND gate, and the result of that is the other input for our OR. That OR now spits out a fresh Carry bit. We can plug that into the next adder circuit down the line, for the next column to the left in our result. BAM, there we go. Just clone this whole weird set of 5 logic gates for as many bits as you want to deal with, daisy chain those carry values into each other, and congratulations. You have somehow rigged together something where electricity goes in, electricity goes out, and the path it has to take along the way has this weird side effect where you can work out what two binary numbers add up to. Please note again that we didn't at any point make some sort of magical computer person and teach it how to do math; we just found patterns in how electricity flows and where the pure math concepts of logic gates and binary math happen to work the same way, and exploited that for a result that's convenient to us. Shame that was such a pain wiring up, but hey, every time you add another copy of this onto the end, you double the range of numbers you're able to work with. Eventually that hits a point where it's worth the effort.
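And if you want to sanity check that wiring description against the grids above, here's the same five gates, plus the daisy chaining, as a Python sketch (a toy model again, with names I made up; real hardware is just the gates):

    def full_adder(a, b, carry_in):
        """The 5-gate circuit: two XORs make the sum bit,
        two ANDs and an OR make the carry out."""
        first_xor = a ^ b
        sum_bit = first_xor ^ carry_in
        carry_out = (first_xor & carry_in) | (a & b)
        return carry_out, sum_bit

    def ripple_add(a_bits, b_bits):
        """Chain full adders together, rightmost column first,
        passing each carry along to the next column left."""
        carry = 0
        out = []
        for a, b in zip(reversed(a_bits), reversed(b_bits)):
            carry, s = full_adder(a, b, carry)
            out.insert(0, s)
        return carry, out

    # 011 + 011, i.e. 3 + 3: prints (0, [1, 1, 0]), i.e. 110, i.e. 6
    print(ripple_add([0, 1, 1], [0, 1, 1]))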

Well addition is all well and good, what about subtraction?

OK, so just to take stock, so far we have a big addressed block of memory somewhere we keep our numbers in. We have, for example, 8 bit lines on our bus, and when we want to do addition, we set stuff that turns on "hey, place with our first number, put it on the bus," then "hey register A, read the bus for a moment," then the same to get a number to slap in register B, and we've got this sum register sitting between registers A and B with a bunch of these adder circuits hooked in between all the bits. We might have some leftover carry line with a 1 on it and nowhere to plug it in, but ignoring that spill-over, every bit on our bus is good to go for addition. When we're setting up command codes, we can make more to do some other math with A and B, and that's all well and good, but we have a real big problem when it comes to subtraction, because out of what's going into A, what's going into B, and what's coming out of sum, at least somewhere we're going to need to deal with the concept of negative numbers. So when we're doing subtraction, one line on our bus needs to be reserved for whether a number is positive or negative. If you program, you're maybe familiar with the concept of unsigned integers vs. signed integers? This is that. With only positive numbers, if we've got, say, 8 bits to work with, we've got a range of 00000000 to 11111111, or 0-255 in decimal, but if one of those bits is getting swiped for negative or positive, now we're talking more like -127 to 127.

But wait, that's not quite right, is it? Like if we arbitrarily say that leftmost digit is 1 if we're negative, we get things like, 1 being 00000001, 0 being 00000000, -2 being 10000010, etc. but... what's 10000000? -0? That's the same thing as 0. That's redundant, and also gonna really screw the count up if we're like, adding 5 to -2! Or really, any other math we're doing.
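You can watch that math screw up in a couple lines of Python, if you like (the encoding function is just my toy version of the sign-bit idea):

    def sign_magnitude(n, bits=8):
        """Toy encoding: leftmost bit says "negative," the rest is the size."""
        sign = 1 << (bits - 1) if n < 0 else 0
        return sign | abs(n)

    five = sign_magnitude(5)      # 00000101
    neg_two = sign_magnitude(-2)  # 10000010
    total = (five + neg_two) & 0b11111111  # what the adder would spit out
    print(f"{total:08b}")  # 10000111, which reads as -7. We wanted 3. Oops.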

Oh and we also need to remember when we're stuffing a negative number into a memory register, it's not like that register knows what concept the bits we're shoving into it represent, so like, you personally have to be responsible for remembering that that 1 on the leftmost line, for that particular value, is noting that it's negative, and not that the 10000000s place or whatever has a 1 for some number, or the first of 8 switch variables you're stashing in this address to save on space is on, or whatever else. We here at the memory address hotel are just trapping electron wiggles in a weird little latch or we aren't. No labels, no judgements.

So OK, no matter how we're storing negative numbers, we need to just actually remember, or take notes some way, on what the hell convention we're using to represent negative numbers, and where we're applying it. But we also need a convention where, like, the math works at all. Just having a bit be the "is it negative" bit works real bad, because aside from having -0 in there, we're trying to count backwards from 0, and our math module has no conception of back. Or of counting, for that matter. Or 0. It's just a circuit we made.

OK, so, let's maybe store our negative numbers in a different way. You know how a car has an odometer? Rolling numbers counting up how many miles you've gone? And there's a point where you run out of digits and it rolls back around to 0? Well, funny thing about our addition circuit: if you add a 1 to a display of all 1s, that also rolls back around to 0 (and has that carry value just hanging out in space unless we have a better idea of what to plug it into). So if we like, have all the numbers we can display printed out on paper, and we represent that rolling over by just rolling the paper up and taping it, so we have a bit where the count is going like ...11111101, 11111110, 11111111, 00000000, 00000001... well, we can just arbitrarily declare that all 0s is really 0, and the all 1s before it is -1, etc. Try to make that work, maybe. And still remember that 10000000 or whatever is where we abruptly loop back between the highest positive/lowest negative numbers we're handling.
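Python can play odometer for us here if we just keep chopping results down to 8 bits, the way an 8 bit bus would (I'm simulating the roll-over by hand; it's not some special feature):

    BITS = 8
    MASK = (1 << BITS) - 1  # 0b11111111: throw away everything past 8 bits

    def wrap(n):
        return n & MASK

    print(f"{wrap(0 - 1):08b}")    # 11111111: one step back from 0 is "-1"
    print(f"{wrap(255 + 1):08b}")  # 00000000: all 1s plus 1 rolls over to 0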

Here's a funny thing though. If we start counting backwards, we totally get this inverted version of what we get counting forwards. Just going to show this with 3 bits for convenience but going up from 000 you go:

000, 001, 010, 011, 100, 101, 110... and going back from 111, you go

111, 110, 101, 100, 011, 010, 001... and yeah, look at that with a fixed width font, and it's all just flipped. And huh, you know what else is cool? If we go back to saying the first bit is 1 for negative numbers and a 0 for positive, you can just add these and it almost works. You want to subtract 1 from 1, that's the same as adding 1 and -1. Invert the negative, that's 001+110=111... 1 shy of the 000 we want. Huh.

What about 2-2? 010+101=111. 3-3? 011+100=111. Everything that should be 0 is 111, which is 1 less than 0 when we roll over. What about stuff that should be positive? 3-1? 011+110=(1)001. 2-1? 010+110=(1)000. 3-2? 011+101=(1)000. Still all 1 off if we just ignore that carry going out of range.

-1-1? 110+110=(1)100, which translates back to -3... and that's kinda the only example I can give that's in range with this, but throw in more bits and follow this convention and it'll all keep working out that you get exactly 1 less than what you want, turns out. So, if we're in subtract mode, we just... invert the second number we're bringing in, then add 1 to it, and it should all work out?

So OK. We have a wire coming into math land that says what mode we're in; it's a 1 if we're doing subtraction. We XOR that subtract line with every bit of what's coming into B. That does nothing if we're in addition mode, but if we're in subtraction mode, we're flipping every bit, and tada, that's the inversion handled with no other changes. We just need to conditionally add 1 if we're in subtract mode now, but... wait, we already have literally that. We just take this same "we are in subtract mode" wire and run it in as the carry-in to the rightmost bit of our adder chain. Again, if we're doing addition, that just carries in a 0 and does nothing, but if we're in subtraction, it carries in a 1, and... we're done. The explanation was a long walk, but yeah, when subtracting, just add those extra XORs, plug in that carry, and remember your negative numbers are all weird in storage. Done.
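Bolting that onto the adder sketch from before takes about three lines (still a toy model, reusing the made-up full_adder from earlier):

    def ripple_sub(a_bits, b_bits):
        """Subtract by flipping every bit of B (that's the XOR with the
        subtract-mode wire) and feeding a 1 in as the very first carry."""
        flipped_b = [bit ^ 1 for bit in b_bits]
        carry = 1  # the subtract-mode wire doubling as the carry-in
        out = []
        for a, b in zip(reversed(a_bits), reversed(flipped_b)):
            carry, s = full_adder(a, b, carry)
            out.insert(0, s)
        return out  # the final carry falls off the end, like the text says

    # 3 - 2: 011 minus 010 prints [0, 0, 1], i.e. 001, i.e. 1
    print(ripple_sub([0, 1, 1], [0, 1, 0]))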

Let's do multiplication and division next!

No. We can't do that.

Well seriously, that's not a thing we can just layer on top of this relatively simple thing we have wired up. We've got this lean mean math machine that will give you whatever result you need basically the instant you load values into A and B. Definitely by the time you, being conscientious about not leaving the doors to the bus open all the time, officially flag things to write out from sum and into whatever destination. Multiplying and dividing though, we need more steps, and we need scratch spaces for temporary values. I suppose if you're careful you can multiply by like, loading 0 into B, loading the first number you want to multiply into A, feeding sum directly back into B, and pulsing the clock however many times you want to multiply, but... you probably don't want to just constantly be reading and writing like that, it's tying the whole bus up, unless you have an alternate pathway just for this, and you have to keep count. Still, I'm assuming that's roughly how people do it when they build a dedicated multiply function in. I'm still looking at older systems which assume you're going to do most of your multiplication one step at a time, running through some code.
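Here's the shape of that pulse-the-clock-repeatedly idea as a Python sketch (my guess at the scheme; the loop stands in for clock pulses, and Python's + stands in for the adder circuit):

    def multiply_by_adding(a, times, bits=8):
        """Multiply using nothing but repeated addition: the running
        total plays the part of register B, fed from the sum register."""
        mask = (1 << bits) - 1
        total = 0
        for _ in range(times):          # one clock pulse per loop
            total = (total + a) & mask  # sum gets latched back into B
        return total

    print(multiply_by_adding(7, 6))  # 42, after 6 trips through the adder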

There's one big exception though. If you multiply any number by 10, you just add a 0 onto the end of the number... and guess what? I'm not using "10" specifically to mean "ten" here. Whatever base you're doing your math in, that still works. So in binary, if you just want to specifically multiply by 2, it is super easy to just shift every bit to the left. Like, have some sort of "shift left/multiply by 2" wire come in, and set up logical conditions so that when it's set, every bit just slides one position to the left, with the leftmost bit landing in the carry flag. 00011001 turns right into 00110010. I picked that out of a hat, but that's binary for 25 getting doubled to 50, as I eyeball it here. Dead simple to do as a single operation. Shifting everything to the right, AKA dividing by 2, is similarly simple... and hey, you might notice that in, say, very old games, there's a whole lot of numbers doubling. Like the ghosts in Pac-Man? Each is worth twice the points of the last? Yeah, that's because that's easy to do fast.
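Python's shift operators do exactly this, so checking my eyeballed example is easy:

    n = 0b00011001      # 25
    print(bin(n << 1))  # 0b110010, i.e. 50: every bit slid one spot left
    print(bin(n >> 1))  # 0b1100, i.e. 12: right shift divides by 2,
                        # dropping the leftover 1, since 25 is odd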

Other math though takes more steps, and tends to involve extra hardware design to make it work. Like if you're doing division where you aren't guaranteed to have a whole number at the end, so, most division? Suddenly you need to have decimal points in all of this, and work out where they go, and this is why you hear people talk about "floating point processors" as this whole special thing that we just did not have for decades. For now at least, that's beyond the scope of what I'm teaching. Might get there eventually.

A final bit about bits...

So hey, we need to pick some arbitrary bit count for how wide we make our bus and our registers, and also some number for memory registers, command codes, maybe other stuff. And you just kinda want to pick a nice round number. You can't pick ten though, because ten isn't a round number in binary. It's 1010. So usually we round down to 8, nice and simple, or we round up to 16. And then if we're, like, filling out charts of values, it's easier to count in those bases. Counting in base 8 is easy enough: 0 1 2 3 4 5 6 7 10. With base 16 though, we need 6 more symbols in there, so we go with 0 1 2 3 4 5 6 7 8 9 A B C D E F 10. And sometimes people make a point of making the B and D lowercase, because if you want to display those on the sort of 7-segment display you still see on cheap clocks or things going for an 80s look, B and 8 are too similar, and D and 0. Base 16 is also called hexadecimal. People will shorten that to "hex" and you see it a ton when people are looking at raw data and don't want to get thrown by long binary numbers, and it particularly gets out to the general public when we're talking about like, 8-bit color values. 8 bits gives you a number from 0-255; hey, that's just 2 digits in base 16. So like, for HTML color codes, you can use 2 digits each for red, green, and blue values, and technical artists just kinda memorize stuff like "right, so FFFFFF is white, 700080 is a slightly blue-ish purple, etc."
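Here's that two-hex-digits-per-channel thing in code, if you ever want to check a color (standard Python string formatting, nothing exotic):

    def html_color(red, green, blue):
        """Each channel is 0-255, which is exactly two hex digits."""
        return f"#{red:02X}{green:02X}{blue:02X}"

    print(html_color(255, 255, 255))  # #FFFFFF, white
    print(html_color(112, 0, 128))    # #700080, that purple from above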

We tend to go with 8 bits in most places though, or some multiple of 8 anyway, and someone somewhere along the line decided to call 8 bits a byte, and that's kind of just our standardized unit for measuring data now. Well, mostly standardized. Because people will say, like, 1 kilobyte is 1000 bytes, but in practice people actually round things off to binary values (1024 is binary's idea of a nice round 1000), so the counts are always going to be off a little.

Anyway, linguistic trivia! Whatever number of bits we store in a register/load to the bus is called a "word," and we talk about how many bits long our words are, because once you design the architecture, you're stuck with it and all. And sometimes you want to be space efficient and not use a whole word, so you do some logic gate trickery to chop off whatever portion you don't need when reading it, and to not change the parts you aren't trying to change when writing it, and just kinda store multiple variables in a single word. One common thing that happens as a result of this is that you'll break up an 8-bit value because you just want, like, two values from 0-15 instead of one from 0-255. And when we're working with one of those half-bytes, because puns, the actual term for that is "a nibble." No really. And if we're using a single bit for a variable, a lot of the time we call that a flag. It's common to see a byte used to hold 8 flags.
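And here's the mask-and-shift trickery as a Python sketch (all the constants and names are just examples I picked):

    packed = 0b10110100  # one byte secretly holding two nibbles

    high_nibble = (packed >> 4) & 0b1111  # 1011, i.e. 11
    low_nibble = packed & 0b1111          # 0100, i.e. 4
    print(high_nibble, low_nibble)

    flags = 0b00000000
    JUMPING = 1 << 2              # give each flag its own bit
    flags |= JUMPING              # set a flag: OR its bit in
    print(bool(flags & JUMPING))  # test a flag: AND with its bit
    flags &= ~JUMPING             # clear it: AND with everything BUT its bit
    print(bool(flags & JUMPING))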

I have no clue what I'm going to cover in part 5 if/when I get to it. Input/output stuff? Better program counter design than what I've taught you enough to guess at? Maybe I should just wait and see if people have questions?

For now let me just point anyone following along with this at this first post of me talking about the game console I'm designing. That's a pretty similar topic to this one.

Let me also point you at my patreon again, too.


Navigation: Part 1 (Components) - Part 2 (Logic and Memory) - Part 3 (Miniaturization and Standardization) - Part 4 (Binary Math) - Donate to keep this project (and its author) alive.