CPU

How are CPUs designed and made?
Do they just use tons of abstraction to map out the logic with billions of boolean gates, and then have machinery programmed to draw each type of gate physically on the silicon of the chip?

Probably
The point of silicon wafers is that you can just laser in whatever circuit or die you like, no physical contact needed

Do EE.

>laser in
how?

With lasers

lol

A CPU has at least 7 or 8 layers of copper wires that form all the connections and transistors.

They bake on the lithography projections with a special lamp and chemicals, use more chemicals to burn off the parts not exposed to light, then fill the gaps with copper.
They just rinse and repeat.

Fucking magnets.

>fill the gaps with copper
So what's even the point of silicon being a semiresistor? Can't they just replicate this with fucking wood if they're just putting copper into a CPU mold?

why don't you look up what a transistor does

I am enlightened! Thanks anons!

All the layers are made by doping Si with other elements, oxidizing, and so on; copper is used for external connections only, IIRC.

It doesn't turn a hole full of copper into a switch
The photo you posted shows a conductive layer already?

>How are CPUs designed and made?
Engineers design the high-level arch. Dozens to hundreds of technicians will work on each given area of the low-level arch and how everything links together. When this is completed on the theory side, designs are laid out in software and simulated, which takes an absurd amount of time. Once validated, simulated designs are sent to a fab and taped out for production of early engineering samples.

The actual manufacture of these chips is incredibly complex. The process of photolithography uses litho masks, photoresist, and a high-energy light source, repeatedly laying these masks over a starting silicon wafer to slowly etch channels, then filling them with new materials to build up structures one layer at a time. This is all after the starting wafer is heavily doped.

There are dozens of videos on YouTube describing various steps of the process.

10-13 metal layers is common for high performance ICs.
Tons more materials than copper are used too.

As for the logic, then yes: by layering abstractions it ends up being feasible to develop chips with billions of transistors.
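
To make the "layering abstractions" bit concrete, here's a rough Python sketch (purely illustrative, nothing like real EDA tools): one NAND primitive at the bottom, basic gates built from it, a full adder built from the gates, and a 4-bit adder built from full adders. Real designs do the same kind of composition, just with hand-designed standard cells and billions of instances.

```python
# Toy illustration of design abstraction: everything below is built
# from a single NAND primitive, composed layer by layer.

def nand(a, b):              # the only "transistor-level" primitive here
    return 0 if (a and b) else 1

# Layer 1: basic gates out of NAND
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

# Layer 2: a 1-bit full adder out of gates
def full_adder(a, b, cin):
    s = xor_(xor_(a, b), cin)
    cout = or_(and_(a, b), and_(cin, xor_(a, b)))
    return s, cout

# Layer 3: a 4-bit ripple-carry adder out of full adders
def add4(a_bits, b_bits):
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):   # bits are LSB first
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

print(add4([1, 0, 1, 0], [0, 1, 1, 0]))   # 5 + 6 -> ([1, 1, 0, 1], 0), i.e. 11
```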

Are all the blocks hand-designed and then linked together also by hand? I mean calculating transistor dimensions, doping intensity and stuff like that. That's the way we were shown at uni. However, working at this level with billions of transistors sounds incredible. Isn't there some "autorouter" for this?

A layer of transistors on the bottom with several layers of metal connecting them. For physically making it they use acid etching to get that nice 14 nm size. For tools you use everything from transistor layout all the way up to placing logic chunks onto the wafer in a GUI.

It used to be, and for some less complex things like micro controllers a high degree of hand drawn circuitry is still common. Anything more complex will be heavily automated though. Software handles an awful lot now.

>transistor dimensions
Isn't it 14nm X 14nm or am I retarded
I'm sure all the designs are assuming the same machine is used for all the layers

The gates are hand designed, and then those are used in the logic and automation tools.

>Isn't it 14nm X 14nm or am I retarded
It's far more complex. There are dozens of metrics per metal layer, though people usually focus on the front end.
With FinFETs specifically you have fin pitch (the spacing between adjacent fins), fin height, gate pitch (the distance from the edge of one gate to the next), the nominal width of the fin itself, the effective gating surface area of the fin, and a host of other things.

Back end metrics are even more complex because of how the metal layers increase in size until they reach pin out.

That's what I thought. Might be pretty interesting to see how these tools work.

Cross section of a back end.

The "back end" isn't running at 5GHz though right? It would make sense that path size makes less of a difference there with no switching heat or voltage drains

Funny how no one in these threads actually explains how they are made.

Spouting some buzzwords isn't making a cpu, retards.

Like what's the difference in making an ARM chip vs an x86 one?

Just the layout of the transistors?

it is very painful

it should be mentioned that a lot of the time it just fails. chip is tested and doesn't work right. and all chips will have slightly different speeds and temperatures.

kinda strange to think that the heart of a computer has so much fuzziness in it.

yes

So the drop of acid etching it is less than 20nm thick?
What are they using to lay this acid down accurately?

Thread explaining this whole topic would be pretty long, desu.
Nobody is stopping you from finding the answers and presenting them here yourself.

Well it wouldn't if we had a more discrete CPU manufacturing process than spraying nanometers of acid onto metals and hoping it was done correctly

I don't think anyone ITT really understands what's going on

here's a pretty good rundown

youtube.com/watch?v=vK-geBYygXo
youtube.com/watch?v=35jWSQXku74

They should use magnets.

you think of a better way and i'll pay you 1,000,000,000,000 dollars for it.

They layer a powder-like substance that doesn't get eroded. There are like 5 or so steps that are repeated ad nauseam; I'm sure you can find them on YouTube.

>tfw you will live to see optical processing in your lifetime

> That picture for the second video
Transistors don't look like that anymore

>The "back end" isn't running at 5GHz though right?
Well yes, there is nothing switching there. It's the routing for power into, and I/O out of, the chip.
The feature size of each back end layer is just as tightly controlled as the front end. I never got into complex signaling in school so that's all over my head. A field for masochists.

>Like what's the difference in making an ARM chip vs an x86 one?
They're different ISAs; they have different high-level and low-level arch.

>Just the layout of the transistors?
Just like the difference between a Kia and a BMW is the layout of the metal.
Go fucking slit your throat.

There are all kinds of tools and different methods, like atomic layer deposition. Different nodes at different fabs use different methods to achieve similar end results. Like I said before, there are plenty of youtube videos going into the specifics.

That's the way electronics work in general. You have to calculate with huge margins and still it sometimes just doesn't work.

t. Dumbass

This shit is incredible. I'm a brainlet so ITT is like reading some alien instruction manual. To think some humans pull this off and teams do this every day blows my mind.

forums.macrumors.com/threads/apple-considering-adopting-amd-processors-for-upcoming-macs.898409/page-25#post-9746191

Interesting story about CPU design from an ex-AMD engineer, posted before the Bulldozer launch.

>I don't know. It happened before I left, and there was very little cross-engineering going on. What did happen is that management decided there SHOULD BE such cross-engineering, which meant we had to stop hand-crafting our CPU designs and switch to an SoC design style. This results in giving up a lot of performance, chip area, and efficiency. The reason DEC Alphas were always much faster than anything else is they designed each transistor by hand. Intel and AMD had always done so at least for the critical parts of the chip. That changed before I left - they started to rely on synthesis tools, automatic place and route tools, etc. I had been in charge of our design flow in the years before I left, and I had tested these tools by asking the companies who sold them to design blocks (adders, multipliers, etc.) using their tools. I let them take as long as they wanted. They always came back to me with designs that were 20% bigger, and 20% slower than our hand-crafted designs, and which suffered from electromigration and other problems.

>That is now how AMD designs chips. I'm sure it will turn out well for them [/sarcasm]

Yeah pretty much

>can't even capitalize "CPU"
>calling anyone else a retard

Would you prefer someone post a transcript of an entire VLSI course?

Most modern designs are coded in a special programming language for describing logic circuits - a Hardware Description Language (e.g. Verilog or VHDL).
The design itself, as well as testbenches to verify its correctness (similar to unit tests in software development), is then run through a simulator to check that all is OK.
The design is then automatically synthesized into silicon using specialized tools (the Cadence suite is one such example; I'm not sure if the major vendors use their own tools). The process goes roughly in the following steps:
1. Code the design and testbenches.
2. Behavioral simulation - verify that the circuit works in an "ideal" state, i.e. that the code is correctly written.
3. Synthesize the code - the code gets implemented as logic gates (e.g. NAND, depending on the tech used).
4. Simulate the synthesized design - check that the synthesis is OK, and that the delays introduced by the (still idealized) gates are small enough not to affect functionality (e.g. the clock rate target for the processor).
5. Map the logic gates onto transistors and place & route them onto a silicon block.
6. Perform final verification that the implemented design is still OK.
7. Send to the fab, where the silicon is doped and etched and the copper traces are added.
8. The wafer is then cut and the individual chips are packaged into the selected housing. Here, the silicon I/O pads are bonded to the package leads.
9. Test the chip for functionality. If everything is OK at this step, the whole process was successful.

I think this is roughly the chip design process used by most IC manufacturers. If you can imagine, the final tests before manufacturing involve simulating the workings of billions of transistors - the computing power needed is enormous for more complex designs. I know that NVIDIA, for example, has an enormous computing platform just for simulating their chip designs.
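
As a loose Python analogy for steps 1-4 (not real Verilog/VHDL, just the shape of the idea): you keep a simple behavioral "golden model", a separately "synthesized" gate-level version of the same block, and a testbench that throws random stimulus at both and compares the outputs. A real flow also back-annotates gate and wire delays at step 4, which this sketch obviously ignores.

```python
import random

# Step 1-2 analogy: a behavioral "golden model" of a 2:1 multiplexer.
def mux_behavioral(sel, a, b):
    return a if sel else b

# Step 3-4 analogy: the same function as a gate-level "netlist",
# i.e. what synthesis would map the behavioral code into.
def mux_gates(sel, a, b):
    nsel = 1 - sel                       # inverter
    return (a & sel) | (b & nsel)        # two ANDs feeding an OR

# Testbench analogy: random stimulus, compare both models on every vector.
def testbench(n=1000):
    for _ in range(n):
        sel, a, b = (random.randint(0, 1) for _ in range(3))
        assert mux_behavioral(sel, a, b) == mux_gates(sel, a, b), \
            f"mismatch at sel={sel} a={a} b={b}"
    print("all", n, "vectors passed")

testbench()
```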

All I can tell you is they only make one CPU die, then they disable features like extra cores, cache, overclocking support, etc. and sell it as a low-end model. That means, if you buy a Celeron G3930 today, it started life as a fully functional i7-6700k, and then they took a little laser and cut all the lines going to the fancy features. Seems evil, doesn't it?

To add on to this user's post, here's an Intel process engineer walking through the manufacturing process (1 hour talk):

youtube.com/watch?v=NGFhc8R_uO4

They use copper to wire between the silicon transistors because it has lower resistance than silicon.

VHDL and verilog are just glorified specification writing tools

No, it seems smart considering nobody seems to care that Intel spent trillions of dollars making the CPU and that half the market wouldn't pay for a full price full functionality CPU
This is literally just Intel recognizing that CPUs cost pennies in materials and trillions in the factory
AMD does this too
So do nvidia
So does qualcomm
Surprise, this tech actually doesn't double in performance every single year anymore, how are these companies gonna profit if people only buy from highest-end to highest-end (which isn't much improvement)

+1 internet for the person who knows why copper is used in favor of gold or silver.

These statements are so old they're entirely irrelevant. All of intel's designs are fully automated now, just like recent designs from AMD. Software is almost always better, and ICs so complex that it is impractical for people to try to do a layout by hand.

AMD's last line of hand designed arch was Zambezi and Vishera. Steamroller included some hand laid blocks, but started utilizing machine layout to save die area. Excavator is fully automated, as are the cat cores.

Oh my.
>semiresistor
This one is new to me.

The conductive layer links the sources and drains across transistors, not within transistors. What you see in that image is just describing how the photomask allows etching to form paths designed to link transistors together so the operations of a CPU are possible, not how a transistor itself is made. Making a transistor uses etching too, but it's got so many layers of different shit deposited I can't name them all.
A semiconductor is the main building block of transistors (and other devices); that wafer is pretty fucking useful because it's highly flexible in terms of electrical properties.

That's a pretty rough tl;dr, but understandable I hope.

Plus this means they'd have to throw away more than half their dies, I'm sure
Sometimes the paths don't support full hyperthreading, sometimes just a core or two is broken (I think all the skylakes start off as quad and all of the pentiums are true dual core dies)
Bam, relabel it for extra profit and extra market
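
In pseudo-ish form, the harvesting/binning decision is basically this (a made-up Python sketch - the tier names, core counts and clock thresholds are invented, not any vendor's real criteria):

```python
# Hypothetical die-harvesting sketch: test results decide which SKU a die
# becomes. Names and thresholds are invented, not any vendor's real rules.

def bin_die(working_cores, working_cache_mb, max_stable_ghz):
    if working_cores >= 4 and working_cache_mb >= 8 and max_stable_ghz >= 4.0:
        return "flagship quad core"        # fully working, fast die
    if working_cores >= 4 and working_cache_mb >= 6:
        return "midrange quad core"        # works, but binned lower
    if working_cores >= 2:
        return "budget dual core"          # fuse off the broken half
    return "scrap"                         # not sellable at all

# A wafer's worth of (toy) test results:
dies = [(4, 8, 4.2), (4, 6, 3.8), (2, 3, 3.5), (1, 2, 3.0)]
for d in dies:
    print(d, "->", bin_die(*d))
```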

youtube.com/watch?v=9SnR3M3CIm4

Here you go friend. Live and Learn. Thank you.

Did you know that many leads on CPUs don't actually lead anywhere?

They generate them randomly and that design just happened to test better than all the others, even if they have weird pathways that stop for no reason.

They are de facto standards for specification writing, if you want to look at it from that point of view. For the scale of IC design you need automation to manage the complexity - you could write the specs in a different language, but in the end you need to interface with other programs, and HDL languages are usually supported directly.
As I said, I'm not directly familiar with what each major player is using, but I'm willing to bet they either use Verilog, VHDL, or a custom framework in C, Ada or similar that is functionally similar to an HDL.

what defines x86?
Is it even a physical difference?

citation needed

Instruction set

Weren't there even sockets with useless pins a while ago?
Why aren't they repurposed into ground or main voltage leads or something?

Yes.

Its an ISA.

So what would prevent me from altering x86 to run on any processor? Copyright? The arrangement of the cores and switches?
If I could make a CPU, could I just make x86-compatible dies completely on the physical side (assuming I knew how)?

x86 is defined by its instruction set

just admit that you don't know what you're talking about

I think the basic x86 patents (not the x64 ones, that's a different beast) have expired. So in theory, you could produce a new x86 core that could run x86 software that doesn't need the newer features. It would be difficult not to run into some patents on other parts of your design if you expect any decent performance out of it, though.

Yes, especially step 5 piques my curiosity.

Thanks user. I'll add that to my watch list.

Holy fug please just read a book. Scraping enough big words together from shitty explanations on Sup Forums to sound smart is not learning how this stuff works.

I don't - I thought that was clear from the very beginning

ISA means instruction set architecture. A CPU is a physical implementation of hardware units that process the given types of instructions. An x86 chip is not ARM because it does not have hardware for processing ARM instructions in the way that ISA describes them.
You can emulate instructions through software, or even make a translator at a hardware level; there are numerous examples of this.
Go on wikipedia and at least gain a cursory knowledge of things before asking dumbshit questions.
Or just go slit your throat.

I'm not trying to sound smart, I'm trying to sound stupid
"it's an instruction set" doesn't mean shit to me, is that an internal code? Is that on the CPU itself? Is that part of motherboard chipsets?
Don't tell me to l2cpu I'm trying to l2cpu

>ask question
>DURRR U DONT KNOW ANYFINK

You're a fucking genius aren't you

So does that mean all x86 processors are physically arranged in the same kind of way
What's stopping AMD or Intel from making a new instruction set? It's just a lot of work?

Kek, wew.

Let's guess OP's age:

15

The details are kind of very specific if you don't work on Place and Route (P&R) tools directly. I only have a rough picture of how they work.

In general, after synthesis (steps 3 and 4) you get a list of logic gates and how they are connected to each other. The P&R tool then:
- places the logic gates onto the silicon - it already has definitions of the masks which are used, usually designed by hand and provided by the fab house for each process individually.
- tries to connect them - if it works, the design was placed and routed successfully.
- if not, various techniques are used (e.g. starting from the beginning with a different order, moving some parts around, replacing only sections of the design). This usually also involves shortening long paths to get better characteristics (e.g. shorter delays where needed, or longer ones if the design requires it).

If you want more info, I think searching for "place and route" on Google Scholar would be the best path to learn the specifics.
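
That place-then-iterate loop, as a deliberately toy Python sketch: four "gates" on a 2x2 grid, cost = total Manhattan wirelength of the nets, and the optimizer just tries random swaps and keeps the ones that help. Real P&R engines also juggle timing, congestion and legalization, and the cell/net names here are made up, but the "perturb, re-measure, keep if better" spirit is the same.

```python
import random

# Toy placement: 4 "gates" on a 2x2 grid, 3 "nets" connecting them.
gates = ["and1", "or1", "xor1", "reg1"]
nets = [("and1", "or1"), ("or1", "xor1"), ("xor1", "reg1")]
slots = [(0, 0), (0, 1), (1, 0), (1, 1)]        # available grid positions

def wirelength(placement):
    # cost function: total Manhattan distance over all nets
    total = 0
    for a, b in nets:
        (xa, ya), (xb, yb) = placement[a], placement[b]
        total += abs(xa - xb) + abs(ya - yb)
    return total

# start from a random placement, then iterate: swap two gates, keep if better
placement = dict(zip(gates, random.sample(slots, len(slots))))
cost = wirelength(placement)
for _ in range(200):
    g1, g2 = random.sample(gates, 2)
    placement[g1], placement[g2] = placement[g2], placement[g1]
    new_cost = wirelength(placement)
    if new_cost <= cost:
        cost = new_cost                          # keep the swap
    else:
        placement[g1], placement[g2] = placement[g2], placement[g1]  # undo

print(placement, "wirelength =", cost)
```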

>So does that mean all x86 processors are physically arranged in the same kind of way
In a vague way, yes. AMD and intel have their own varying implementations, because there is no single way to handle instructions. Just like there is no single way to make a vehicle with 4 wheels.

>What's stopping AMD or Intel from making a new instruction set?
Nothing. They've each tried. Intel made Itanium, which was a commercial failure. AMD made the AMD64 extension to the x86 ISA. All 64-bit x86 CPUs are using AMD's AMD64 extension.

Backwards compatibility basically. Intel does have an additional Instruction Set (Itanium), but it is mostly used for servers.

>AMD's last line of hand designed arch was Zambezi and Vishera. Steamroller included some hand laid blocks, but started utilizing machine layout to save die area. Excavator is fully automated, as are the cat cores.

Interesting, any sources?

You can't know how they're made, since they're not open-source.

It's like programming languages

C++/C is a specification. It's a description of a language. Someone must write a compiler. My compiler might be different from your compiler.

Same thing with architecture. My chip might be different from your chip but they still use the same assembly

for you

Anandtech had big articles for every iteration of the Bulldozer family.
Extremetech has a number of relevant articles as well written by Joel Hruska.

AMD did make a new instruction set - it's called AMD64 (or, as Intel calls it, x86-64). It's a 64-bit extension of x86 rather than a completely separate ISA.

Here's a way to visualize an ISA:
1. Imagine letters as bits: words are built out of the letters A-Z, and instructions are built out of 0s and 1s.
2. English is one ISA: a set of rules for arranging those letters into something meaningful. For example, 'work' is a word that requires w-o-r-k together and means a job.
French is another ISA that also uses the alphabet but has different rules: "travail" (work) is a word that uses the letters t-r-a-v-a-i-l.

Grammar is very similar, with space and punctuation as additional rules.

3. A CPU is a device that understands these rules and makes sense out of them. You can try to make a CPU designed for ISA A run instructions from another ISA B if you can translate B's instructions into A's (French -> English).

4. But it is not practical to change the hardware implementation of a CPU to a different ISA, just as it is almost impossible to hack your brain to swap its understanding of English for French.

5. AMD's and Intel's (and others') x86 CPUs all understand the x86 ISA, just as an Englishman and a Japanese speaker both understand the grammar/vocab of English, regardless of where they learned it.

The way they process English, however, can be drastically different inside their heads. Google Translate also sort of understands English, but internally it is completely different from how a human mind works.

TL;DR: an ISA describes the rules, not how to process the rules.
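
A toy Python sketch of that "rules, not implementation" point: the same tiny program (compute 5 + 7 into a register) written against two made-up mini-ISAs with different encodings, each needing its own decoder. The ISAs here are invented for illustration; any resemblance to real x86/ARM encodings is unintended.

```python
# Two made-up toy ISAs for the same tiny machine (4 registers, load/add).
# Same computation, different "rules" for encoding it.

def run_isa_a(program, regs):
    # ISA "A": readable tuples, e.g. ("LOADI", rd, imm) or ("ADD", rd, rs1, rs2)
    for instr in program:
        if instr[0] == "LOADI":
            _, rd, imm = instr
            regs[rd] = imm
        elif instr[0] == "ADD":
            _, rd, rs1, rs2 = instr
            regs[rd] = regs[rs1] + regs[rs2]
    return regs

def run_isa_b(program, regs):
    # ISA "B": packed 16-bit words; top 4 bits = opcode (1 = load-imm, 2 = add)
    for word in program:
        op = word >> 12
        rd = (word >> 8) & 0xF
        if op == 1:                       # load immediate: op | rd | imm8
            regs[rd] = word & 0xFF
        elif op == 2:                     # add: op | rd | rs1 | rs2
            rs1 = (word >> 4) & 0xF
            rs2 = word & 0xF
            regs[rd] = regs[rs1] + regs[rs2]
    return regs

# The same computation (r2 = 5 + 7), written against each ISA's rules:
prog_a = [("LOADI", 0, 5), ("LOADI", 1, 7), ("ADD", 2, 0, 1)]
prog_b = [0x1005, 0x1107, 0x2201]

print(run_isa_a(prog_a, [0] * 4))   # [5, 7, 12, 0]
print(run_isa_b(prog_b, [0] * 4))   # [5, 7, 12, 0]
```

A real CPU bakes one of those decoders into hardwired logic; emulation and binary translation are just the software versions of running prog_b on a machine built for ISA A.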

Considering CPUs these days have transistor counts in billions, they have to abstract as much as humanly possible.

High-level design is about connecting ready-made functional blocks and using software to simulate their cooperation - you take, say, a decoder, two ALUs and some cache, and try to work out how they integrate.

Each company has libraries with several versions of those blocks, and design at the highest level is mostly about figuring out which of those blocks need to be redesigned and/or improved at the logical level.

Each block is a collection of smaller blocks, which are more concrete but still quite divorced from actual silicon. This stage is usually done with HDL languages - VHDL and Verilog - and a proper software suite for validation and simulation.

Actual silicon is 'designed' in the process called synthesis, which is similar to compilation for actual software. Synthesis takes HDL code and tries to create actual gates, latches, multiplexers and so on, which are then transcribed into an FPGA, silicon or whatever you wish.

Overall - the process is quite similar to programming actual computers. System architects design functionality, programmers design technical details and compilers create machine code.

Source: I work for an EDA company as a senior software developer on a graphical design tool which generates HDL from block diagrams.
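
To keep the compilation analogy concrete, here's a trivial Python sketch of what "synthesis" means at its most basic: take a described behavior (here just a Python function of boolean inputs) and mechanically emit a sum-of-products netlist of NOT/AND/OR gates. Real synthesis tools do vastly more - optimization, technology mapping to the fab's cell library, timing - and the names below are made up; this only shows the behavior-to-gates translation step.

```python
from itertools import product

# Toy "synthesis": turn a behavioral description (a Python function of
# n boolean inputs) into a sum-of-products netlist of NOT/AND/OR gates.
def synthesize(behavior, n_inputs):
    netlist = []            # entries are (gate_type, output_net, input_nets)
    or_inputs = []
    for idx, inputs in enumerate(product((0, 1), repeat=n_inputs)):
        if not behavior(*inputs):
            continue                      # keep only minterms where output is 1
        term_inputs = []
        for i, val in enumerate(inputs):
            name = f"in{i}"
            if val == 0:                  # this minterm needs the inverted input
                name = f"n_in{i}"
                netlist.append(("NOT", name, [f"in{i}"]))
            term_inputs.append(name)
        term = f"and{idx}"
        netlist.append(("AND", term, term_inputs))
        or_inputs.append(term)
    netlist.append(("OR", "out", or_inputs))
    return netlist

# Behavioral description of a 2-input XOR -> gate netlist
for gate in synthesize(lambda a, b: a ^ b, 2):
    print(gate)
# ('NOT', 'n_in0', ['in0'])
# ('AND', 'and1', ['n_in0', 'in1'])
# ('NOT', 'n_in1', ['in1'])
# ('AND', 'and2', ['in0', 'n_in1'])
# ('OR', 'out', ['and1', 'and2'])
```

It will happily emit duplicate inverters and makes no attempt to optimize, which is exactly the gap real synthesis tools spend their effort on.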

Let me add that the entire process of P&R is close to black magic these days and each provider of this service is _very_ secretive. EDA is a very hermetic circle and most people of influence know each other.

Talent predation is also very common.

You deserve to gassed, you fucking shit stain.

Cry more, little kid.

Yep, this exactly. The last human-designed CPU was the 6502.

fun fact x86 has more pages of documentation than the 6502 has transistors

Get raped and kill yourself, you retarded fucking faggot sack of nigger shit with down syndrome.

The mask set for that processor was literally drawn on huge pieces of paper with colored pencils.

Did something trigger you, little tumblrtard?

What's sad is that even with all of today's technology such a feat is probably not possible any more because the equipment and skill sets don't exist.

How many photo-lithographers under the age of 60 are there?

Do you literally have shit for brains, special olympian?

>Implying photolithography has no uses
Why don't you build a CPU with lithographs?

visual6502.org/JSSim/index.html

Here's something neat to stare at in amazement. An accurate representation of a 6502 processor core that actually works; you can press the play button to watch it process stuff, or use the step-by-step function and watch the individual wires light up at each step.

Thank you. I completely forgot this existed.

They actually added two other simulators apparently.

ARM1
visual6502.org/sim/varm/armgl.html

Motorola 6800
visual6502.org/JSSim/expert-6800.html