Why do bytes still matter when most computers are 64 bit? Shouldn't data be measured in bits instead? And what even is the purpose of binary prefixes?

retard

Things are rarely measured in bits. Most architectures have the byte (8 bits) as their smallest unit of work.

Talking about everything in bits wouldn't change much; it would just multiply everything by 8 and be harder to conceptualize and talk about.

>when most computers are 64 bit
What does that have to do with anything?

>And what even is the purpose of binary prefixes
Again, it's handy. Binary prefixes (KiB = 1024 bytes, MiB = 1024² bytes) let you talk about big power-of-two sizes unambiguously, instead of muddling them with the decimal SI prefixes (kB = 1000 bytes).
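
To make the gap concrete, a quick C++ sketch (the constant names are mine, nothing standard about them):

#include <cstdio>

int main() {
    // Decimal (SI) prefix: powers of 1000
    constexpr unsigned long long MB  = 1000ULL * 1000;
    // Binary prefix: powers of 1024, natural for power-of-two hardware sizes
    constexpr unsigned long long MiB = 1024ULL * 1024;

    std::printf("1 MB  = %llu bytes\n", MB);   // 1000000
    std::printf("1 MiB = %llu bytes\n", MiB);  // 1048576
}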

what do 64 and 32 bit mean? I'm confused since there's only 8 bits in a byte

the 32/64 bit basically refers to the width of the addressing bus.

With 32 bits to encode the address the processor wants from RAM, you can only distinguish 2^32 ≈ 4.3 billion addresses. 64 bits makes that limit astronomically larger, and we won't exhaust that kind of address space anytime soon. In other words, on 32 bits you can only use 4 GiB of RAM, because your processor can't name any memory address beyond that.

I'm sure there are other things at play too, but from my limited hardware understanding that's what I can tell you.
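
You can sanity-check the pointer-width part yourself; a minimal C++ sketch, assuming a typical 64-bit build where pointers are 8 bytes:

#include <cstdio>

int main() {
    // How many bytes an address takes on this build (8 on a 64-bit target)
    std::printf("sizeof(void*) = %zu bytes\n", sizeof(void*));

    // 32 address bits can name 2^32 distinct bytes, i.e. 4 GiB
    unsigned long long addresses = 1ULL << 32;
    std::printf("2^32 = %llu addresses = %llu GiB\n",
                addresses, addresses / (1024ULL * 1024 * 1024));
}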

Memory works in bytes: you can't read a bit, you read a whole byte, so that's the smallest unit we use.
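
Which is why "reading a bit" in code really means fetching the whole byte and masking; a quick sketch:

#include <cstdio>
#include <cstdint>

int main() {
    std::uint8_t byte = 0b10110100; // the smallest thing you can actually load
    int n = 2;
    // Grab the whole byte, then shift and mask to isolate bit n
    int bit = (byte >> n) & 1;
    std::printf("bit %d of 0x%02X is %d\n", n, byte, bit); // bit 2 of 0xB4 is 1
}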

>the 32/64 bit basically refers to the width of the addressing bus.

Wrong. Most 64-bit processors don't actually implement 64 address bits (it's more like 48 bits of virtual address space).
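
For scale, 48 bits is already an absurd amount of virtual address space; trivial arithmetic:

#include <cstdio>

int main() {
    // 48 address bits cover 2^48 bytes = 256 TiB of virtual address space
    unsigned long long bytes = 1ULL << 48;
    std::printf("2^48 bytes = %llu TiB\n", bytes / (1ULL << 40)); // 256
}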

a bit is just a 1 or a 0, you need 8 bits to make a byte, which can then represent something

well fuck, what do we do now?

4 bit is already enough

Don't worry about it, you'll never run into that limit in practice (which is exactly why they don't bother. Wiring up a true 64-bit address path would just cost more silicon).

>well fuck, what do we do now?
pack our things and become accountants

> 4-bit bloatfag

I can name that tune in 2 bits.

Is this the hourly retard thread?

>Why do bytes still matter when most computers are 64 bit? Shouldn't data be measured in bits instead?
dudewhat

Bytes are still generally the smallest addressable unit, as far as I know.

Address 0x0 is a byte in memory, not a bit; 0x1 is the next byte, exactly 8 bits past 0x0...
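
Pointer arithmetic makes that visible; a small sketch (int is assumed to be the usual 4 bytes here):

#include <cstdio>

int main() {
    char c[2];
    int  i[2];
    // Addresses count bytes: stepping a char* advances 1 byte,
    // stepping an int* advances sizeof(int) bytes (typically 4)
    std::printf("char* step: %td byte(s)\n", (char*)(c + 1) - (char*)c);
    std::printf("int*  step: %td byte(s)\n", (char*)(i + 1) - (char*)i);
}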

Go represent numbers from 0 to 9 with 2 bits.
Nice kek, though

You haven't been able to read a single byte from RAM since the 90s; it comes in as a whole cache line, typically 64 bytes.

to add to that, bool types in most programming languages are also 1 byte, even though they only have to store a 0 or a 1

but that may be for another reason
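
Easy to check in C++ at least (other languages differ in the details but mostly do the same):

#include <cstdio>

int main() {
    // bool holds one of two values but still occupies a whole addressable byte
    std::printf("sizeof(bool) = %zu byte\n", sizeof(bool)); // 1
}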

How fucking high are you?

I'm glad someone around here is of sane mind

Accounting is one of the steadiest jobs in the world, and the pay ceiling only ever goes up.

The job market is always open and hiring, because most people absolutely hate anything involving maths, even simple maths.

We should all become accountants and open our own firm.

I can make the logo

std::vector<bool> is specialized to pack them into bits.
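
Rough illustration; the exact storage is an implementation detail, so treat the numbers as approximate:

#include <cstdio>
#include <vector>

int main() {
    std::vector<bool> packed(1000); // specialized: roughly 1 bit per element
    std::vector<char> plain(1000);  // ordinary container: 1 byte per element

    // The price of packing: operator[] returns a proxy object, not a real bool&
    packed[3] = true;
    std::printf("packed[3] = %d\n", packed[3] ? 1 : 0);
    std::printf("plain payload ~%zu bytes, packed payload ~%zu bytes\n",
                plain.size(), packed.size() / 8);
}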

yfw accounting is the 1st job to be automated

This

I still have to take into account the size of words for my job as a software dev.

Protest Intel until they give us the other 16 bits we so rightly deserve

#busWidthMarch

Wow, I didn't know that

It also refers to the size of the data the CPU can process in a single operation. You can use 16-bit numbers on an 8-bit CPU, but it takes more cycles to complete the operation. Same with 64-bit numbers on a 32-bit machine, and so on.
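
Roughly the extra work involved, hand-rolled in C++ (real 32-bit codegen uses an add-with-carry instruction, but the idea is the same):

#include <cstdio>
#include <cstdint>

// Add two 64-bit numbers using only 32-bit pieces, the way a 32-bit
// CPU has to: add the low halves, then carry into the high halves.
std::uint64_t add64_via_32(std::uint64_t a, std::uint64_t b) {
    std::uint32_t alo = (std::uint32_t)a, ahi = (std::uint32_t)(a >> 32);
    std::uint32_t blo = (std::uint32_t)b, bhi = (std::uint32_t)(b >> 32);

    std::uint32_t lo    = alo + blo;
    std::uint32_t carry = (lo < alo);      // low half wrapped around
    std::uint32_t hi    = ahi + bhi + carry;

    return ((std::uint64_t)hi << 32) | lo;
}

int main() {
    std::printf("%llx\n",
                (unsigned long long)add64_via_32(0xFFFFFFFFULL, 1)); // 100000000
}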

It's much quicker to deal with numbers if they align to a consistent boundary, because then you can step a pointer by a fixed value each time. Also, the processor shuttles the whole 8 bits through the CPU each cycle anyway (in the case of an 8-bit computer), so there's no performance gain from making bools 1 bit.
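
Alignment is also why compilers pad structs; a quick sketch (the padding shown assumes a typical platform with 4-byte ints):

#include <cstdio>

struct Padded {
    char c; // 1 byte, then 3 bytes of padding inserted so that...
    int  i; // ...this 4-byte int starts on a 4-byte boundary
};

int main() {
    // 5 bytes of payload, but sizeof reports 8 because of the padding
    std::printf("sizeof(Padded) = %zu\n", sizeof(Padded));
}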