If you are allowed to use negative and positive numbers...

If you are allowed to use negative and positive numbers, what is the smallest and largest number you can store in 2 bytes?

Other urls found in this thread:

steve.hollasch.net/cgindex/coding/ieeefloat.html
youtube.com/watch?v=9hdFG2GcNuA
en.wikipedia.org/wiki/Word_(computer_architecture)
en.wikipedia.org/wiki/Daughterboard

About three fiddy if you use some form of floating point or exponential representation

2^16 = 65536 possible bit patterns
65536 / 2 = 32768
So a signed 16-bit (two's complement) integer goes from -32768 to +32767; the positive half gives up one slot to zero.

It can range from -2^15 to 2^15-1
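Quick sanity check, a minimal Python sketch (struct's 'h' format is just a signed 16-bit integer; nothing else here is assumed):

import struct

lo, hi = -2**15, 2**15 - 1                 # -32768 and 32767
print(lo, hi)

# Both extremes round-trip through a signed 16-bit ("<h") pack/unpack
for n in (lo, hi):
    assert struct.unpack('<h', struct.pack('<h', n))[0] == n

# One past either end no longer fits in the two bytes
try:
    struct.pack('<h', hi + 1)
except struct.error:
    print('32768 does not fit in a signed 16-bit value')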

let 00000000 00000000 represent -infinity and
11111111 11111111 represent infinity. Inb4 set theory and higher infinities, those aren't real numbers.

Anyway, you want half floats, i.e. 16-bit floating point numbers (IEEE 754 binary16).
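If you do go that route, here's a minimal sketch of the extremes using Python's struct 'e' (binary16) format, available since Python 3.6; the byte patterns are the standard binary16 encodings:

import struct

# Largest finite binary16 value: exponent field 11110, all ten mantissa bits set
print(struct.unpack('<e', b'\xff\x7b')[0])   # 65504.0, i.e. (2 - 2**-10) * 2**15
# Most negative finite value is the same pattern with the sign bit set
print(struct.unpack('<e', b'\xff\xfb')[0])   # -65504.0
# +inf and -inf are also representable, which rather stretches "largest"
print(struct.unpack('<e', b'\x00\x7c')[0], struct.unpack('<e', b'\x00\xfc')[0])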

Infinity isn't a real number either, fagtron.

>retarded x86 16-bit "word"
TRIGGERED

Why is it called a word if there's only numbers in there?

This. Infinity is an imaginary number, that's where "i" as in infinite/imaginary comes from.

do your own homework

How about something more reasonable, but still in line with OP's retarded question.
>1 sign bit
>15 exponent bits
>treat it as a floating point with no mantissa, and the base is always 2
>Has a range of -2^32767 to 2^32767

You could do even better, the more retarded you make your encoding. For example, make the exponent's base 1 billion rather than 2.
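A rough sketch of that scheme in Python, assuming the intended reading is value = ±base^exponent with 1 sign bit and 15 exponent bits (the function name and layout are made up for illustration; note it can't even represent 0):

def decode(bits16, base=2):
    # 1 sign bit, 15 exponent bits, no mantissa
    sign = -1 if (bits16 >> 15) & 1 else 1
    exponent = bits16 & 0x7FFF           # 0 .. 32767
    return sign * base ** exponent       # Python ints are arbitrary precision

print(decode(0x7FFF))                    #  2**32767, a 9865-digit number
print(decode(0xFFFF))                    # -2**32767
print(decode(0x7FFF, base=10**9) > decode(0x7FFF))   # True: bigger base, bigger range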

Floats don't do infinities.

They have a sign bit, some bits for the actual digits (the mantissa), and the rest is used for the magnitude (the exponent). They also use an implicit leading 1 bit.

For example, the number 30 is stored as 1.111 with an exponent of 4, since 1.111 (binary) * 2^4 = 11110 (binary) = 30.
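You can watch that happen by pulling the half-float bits apart; a minimal sketch assuming the standard binary16 layout (1 sign bit, 5 exponent bits with bias 15, 10 mantissa bits):

import struct

bits = int.from_bytes(struct.pack('<e', 30.0), 'little')
sign     = bits >> 15
exponent = ((bits >> 10) & 0x1F) - 15        # exponent is stored with a bias of 15
mantissa = bits & 0x3FF                      # the implicit leading 1 is not stored

print(f'{bits:016b}')                        # 0100111110000000
print(sign, exponent, bin(mantissa))         # 0 4 0b1110000000
print((1 + mantissa / 1024) * 2**exponent)   # 30.0, i.e. 1.111 (binary) * 2^4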

IEEE floating point numbers do in fact have infinities.

>steve.hollasch.net/cgindex/coding/ieeefloat.html

The values +∞ and −∞ are denoted with an exponent of all 1s and a fraction of all 0s. The sign bit distinguishes between negative infinity and positive infinity. Being able to denote infinity as a specific value is useful because it allows operations to continue past overflow situations. Operations with infinite values are well defined in IEEE floating point.
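The same thing is easy to see in the 16-bit format (again just Python's struct, 'e' = binary16):

import struct, math

bits = int.from_bytes(struct.pack('<e', math.inf), 'little')
print(f'{bits:016b}')           # 0111110000000000: exponent all 1s, fraction all 0s

# Flipping the sign bit is the only difference between +inf and -inf
print(struct.unpack('<e', (bits | 0x8000).to_bytes(2, 'little'))[0])   # -inf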

i is for imaginary numbers, the square root of -1, not for infinity

Don't be retarded.

Although on the Riemann sphere -∞ = ∞ = i∞ = -i∞, since there's only a single point at infinity

You are fucking stupid

Not an argument.

OP never specified "float". If you're free to make 16 bits "represent" whatever you want, then you can just make them represent whatever you want. Who would have guessed. OP is the one who's either a clever baiter or fucking stupid.

Why is it called CPU when it's not perfectly aligned in the center of the mother board?

Seeing how the question doesn't specify any continuous range requirement, just make some value represent an infinitely large number and some other value an infinitely small one.

>floating points
youtube.com/watch?v=9hdFG2GcNuA

Why is it called a motherboard when there are no child boards?

Speaking of retarded...
en.wikipedia.org/wiki/Word_(computer_architecture)

>what is the largest number you can store in 2 bytes
depends how big your mouth is

Why do they add the autistic music? Jesus Christ, I want to punch the guy that made that right in his autistic underbite

I know what a word is, retard. Unfortunately, x86's antiquated definition of a "word" has had far-reaching consequences. For example, you see DWORDs scattered all over shitty windows code, written for an OS that has been effectively 32- or 64-bit for much longer than it was 16-bit. They should have redefined "word" with the 386 to mean 32 bits, in correspondence with the 32-bit mode that the processor operates in for the vast majority of its use.

Lol, largest and smallest integer is ±2^15 (strictly -2^15 to 2^15 - 1). C'mon mate. Floating point is the same, you decide where the "point" sits.
C'mon mate.

>I know what a word is, retard. Unfortunately, x86's antiquated definition of a "word" has had farther reaching consequences.
Intel's definition of word is the same as the rest of the world's: an integer of native size, which on x86 is 16 bits

>For example, you see DWORDs scattered all over shitty windows code, written for an OS that has been effectively 32- or 64-bit for much longer than it was 16-bit.
This is a microshit thing, not an intel/x86 thing
You don't see this in linux source code

>They should have redefined "word" with the 386 to mean 32-bits, in correspondence with the 32-bit mode that the processor operates in for a vast majority of any use.
Not sure *who* should have redefined it, but it doesn't make a difference: Backwards compatibility is the sole reason Wintel is dominating the world. They can't afford to break shit for no actual purpose.

Most likely there are

en.wikipedia.org/wiki/Daughterboard

Idk if this is bait but I lol'd