X86 thread

x86 thread

Other urls found in this thread:

eprint.iacr.org/2013/338.pdf
en.wikipedia.org/wiki/Hardware_random_number_generator#Physical_phenomena_with_random_properties
en.wikibooks.org/wiki/X86_Assembly/Introduction
challenges.re/

movl $1, %eax   # __NR_exit = 1 on 32-bit Linux
int $0x80       # legacy syscall gate; exit status = whatever is in %ebx

>at&t syntax

who 80186 here

if I set up a monitor instruction and then use mwait, but never trigger a monitor event, will the OS ever regain control over that CPU? Can mwait be interrupted?

nevermind I should have just used google
>A store to the address range armed by the MONITOR instruction, an interrupt, an NMI or SMI, a debug exception, a machine check exception, the BINIT# signal, the INIT# signal, or the RESET# signal will exit the implementation-dependent-optimized state. Note that an interrupt will cause the processor to exit only if the state was entered with interrupts enabled.
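
For reference, here's a minimal sketch of how the pair is meant to be used (x86-64 AT&T syntax, ring 0 only; flag and woken are hypothetical names, and this assumes CPUID reports MONITOR/MWAIT support):

    lea   flag(%rip), %rax    # RAX = linear address to monitor
    xor   %ecx, %ecx          # ECX = extensions (none)
    xor   %edx, %edx          # EDX = hints (none)
    monitor                   # arm the address-range monitor
    cmpl  $0, flag(%rip)      # re-check: the store may already have landed
    jne   woken
    xor   %eax, %eax          # EAX = MWAIT hints (request C1, no extras)
    xor   %ecx, %ecx          # ECX = 0: interrupts (if enabled) wake us
    mwait                     # sleep until a store/interrupt/NMI/etc.
woken: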

What instructions should have stayed out of x86?

m68k here

x86 is shit

FPU instructions, SMI/SMM instructions, push/pop (just sub/add [R/E]SP), in/out (just use memory mapped IO), segmentation of FS/GS on x86-64. Probably more but that's what I can think of right now.

RDRAND/RDSEED. They are backdoors.

Why?

the only instructions you need are move, add, and branch if equal

68k is the patrician architecture.
x86 is a travesty, that thing should've remained in an embedded system.

>impossible to audit
>linus had it removed from linux kernel /dev/urandom code
>intel is part of PRISM

>impossible to audit
The same is true of the MUL instruction you're using as part of your PRNG. If you want to believe intel has deliberately built logic into their RDRAND mechanism which inserts weaker entropy based on some arbitrary conditions that are only met when using it to produce cryptographic keys, then you might as well argue that intel has deliberately built magical logic into their instruction decoder which detects crypto code and calculates a weaker result instead. Both are about equally far-fetched.

>intel is part of PRISM
So stop using their CPUs, problem solved?

I don't get this approach of “use their CPUs, b-but don't use them fully because they might be evil!”. It makes no sense to me. Either they're evil or they're not, get your story straight.

If they're evil, there's no hope you have in using their CPUs at all, because it would be trivial for them to place whatever backdoors they want into ring -2 without even needing to fuck with the crypto.

So either stop using their CPUs or just accept the fact that you're trusting them and use them fully. Anything else makes no sense IMO

>The same is true of the MUL instruction you're using as part of your PRNG.
You can tell if the results of MUL are incorrect, obviously. You can't tell if RDRAND/SEED "really" generated random numbers. It would be trivially easy to give it enough bias that the NSA could guess your private key used in SSL.

>you might as well argue that intel has deliberately built magical logic into their instruction decoder which detects crypto code and calculates a weaker result instead. Both are about equally far-fetched.
Once again that is easily checked, even if you don't trust the entire system you can just send the ciphertext to another computer and compare results. Intel wouldn't risk their reputation for something so obviously discoverable.

>If they're evil, there's no hope you have in using their CPUs at all
Yea there actually is, because you can compare results of calculations on other devices. The only instructions you can't trust are the ones that may compromise your security model, or have no public algorithm that can be duplicated on another machine.
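
As a concrete sketch of that kind of cross-check (hypothetical, x86-64 AT&T syntax; inputs in RDI/RSI, mul_mismatch is wherever you handle a failure): compare hardware MUL against a dumb shift-and-add reimplementation and trap on any disagreement.

    mov     %rdi, %rax
    mul     %rsi            # hardware: RDX:RAX = RDI * RSI
    mov     %rax, %r8       # keep the low 64 bits
    xor     %eax, %eax      # software product, built bit by bit
    mov     %rdi, %r9       # multiplicand
    mov     %rsi, %rcx      # multiplier
1:  test    %rcx, %rcx
    jz      2f
    shr     $1, %rcx        # low bit of multiplier -> CF
    jnc     3f
    add     %r9, %rax       # bit was set: add the shifted multiplicand
3:  shl     $1, %r9
    jmp     1b
2:  cmp     %r8, %rax       # both wrap mod 2^64, so they must agree
    jne     mul_mismatch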

The only way to be safe is to build your own CPU from discrete transistors and then build your software stack from scratch. Even compilers can be compromised.

>linus had it removed from linux kernel /dev/urandom code

So the madman actually removed it? I remember when someone made a change.org petition to remove it and he replied with shit like:
>where can I make petition so people stop being stupid
>i-it's just another seed! it's not exclusive!
>read the code because you didn't. learn about cryptography because you don't know shit about it

Most modern CPUs are "safe enough", depending on your security model. I doubt there's a backdoor that will give an NSA attacker full SMM/ME access, because intel would go completely bankrupt if the exploit ever went public. Follow the money: if a company won't get caught for an action and the government pays them for it, then they will do it; if there's a chance of being caught, then they won't.

>You can tell if the results of MUL are incorrect, obviously.
You can tell if the results of RDRAND are random, too.

>You can't tell if RDRAND/SEED "really" generated random numbers. It would be trivially easy to give it enough bias that the NSA could guess your private key used in SSL.
If the RNG has enough bias for the NSA to be able to use that bias to weaken your keys, then a statistical test would also reveal that bias.

If the RNG passes all known entropy tests, then we can conclude that it's as good as the best of our CSPRNGs.

This stuff really isn't rocket science.

>Once again that is easily checked, even if you don't trust the entire system you can just send the ciphertext to another computer and compare results. Intel wouldn't risk their reputation for something so obviously discoverable.
What if your CSPRNG is seeded by things like local disk access latencies, interrupt timings and other things that go through the CPU? That's basically how we do randomness right now, and the CPU is just as capable of biasing all of that data as it would be of biasing the results of RDRAND. Except of course, our current approach has serious drawbacks when you're using stuff like virtualized or predictable hardware. Enjoy your weak crypto keys in VMs, I guess. RDRAND is a strict improvement; the only people opposing it are the people who don't understand how random number generation works in practice.

Heck, you probably don't even understand how entropy pooling works, and how you can combine multiple sources of entropy into your entropy pool in such a way that even a completely malicious source can't degrade the security of your entropy pool.
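
A minimal sketch of the idea (x86-64 AT&T syntax): XOR-combine an RDRAND sample with TSC jitter before it enters the pool. Assuming the sources are independent, the XOR is at least as unpredictable as the stronger of the two, so a rigged RDRAND can't make the result worse. Real pools hash the inputs instead of raw-XORing them, but the principle is the same.

1:  rdrand  %rbx        # hardware DRBG sample; CF=1 on success
    jnc     1b          # CF=0 means underflow: retry
    rdtsc               # EDX:EAX = time-stamp counter
    shl     $32, %rdx
    or      %rax, %rdx  # RDX = full 64-bit TSC (timing jitter)
    xor     %rbx, %rdx  # RDX = combined sample for the pool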

>Most modern CPUs are "safe enough", depending on your security model.

B-but I want a valid excuse to build my own discrete transistors computer...

>where can I make petition so people stop being stupid
>i-it's just another seed! it's not exclusive!
>read the code because you didn't. learn about cryptography because you don't know shit about it
He's right about all of those, you know. God fucking damnit I don't even work on linux and I already feel sorry for the linux kernel maintainers. Users are the most uninformed clueless pieces of shit in general.

Which books would you guys recommend to learn x86 assembly language and low level programming in general?

>You can tell if the results of RDRAND are random, too.
Wrong.

>If the RNG has enough bias for the NSA to be able to use that bias to weaken your keys, then a statistical test would also reveal that bias.
Statistical tests are easily fooled if you give them a pseudorandom bias based on some seed only known by the attacker, probably a time-derived number.

>If the RNG passes all known entropy tests, then we can conclude that it's as good as the best of our CSPRNGs.
Which still doesn't prove it's not biased in a way only known by the attacker.

>This stuff really isn't rocket science.
Actually most people do crypto completely wrong, and make very dangerous assumptions. See your above statements for examples.

>What if your CSPRNG is seeded by things like local disk access latencies, interrupt timings and other things that go through the CPU?
There's not always enough randomness, especially early in boot, which is specifically what RDRAND was designed to "help" with. Disk access latencies, interrupt timings, and even user input are all ridiculously easy to predict as well. Would they be able to bias every random number? Definitely not. But 1% of some of the most important ones? Maybe.

>Enjoy your weak crypto keys in VMs
Guess what? That's what most servers are run on these days.

>Heck, you probably don't even understand how entropy pooling works, and how you can combine multiple sources of entropy into your entropy pool in such a way that even a completely malicious source can't degrade the security of your entropy pool.
There are severe weaknesses in the current implementations of entropy pools
eprint.iacr.org/2013/338.pdf

nice quads
Here's some:
Art of Assembly Language, 2nd Edition
Hacker's Delight (2nd Edition)

Dumb question: would something like a "noise generator" (think radio static) work as a secure source of random numbers?

Yes, but I wouldn't trust radio sources. I had an idea at one point, but then RDRAND came along and I never built the thing.
It essentially was a geiger counter attached to a USB controller to generate random numbers via USB.

>Yes, but I wouldn't trust radio sources.

Of course. It would need to be shielded from powerful radio stations to work well.

>It essentially was a geiger counter attached to a USB controller to generate random numbers via USB.

Don't commercial random number generators work the same way?

I don't know if they detect particle radiation or not. I never looked into how it's done commercially.

>Wrong.
Fine, it depends on your definition of “random”. To me, “random” means “we're not capable of detecting a pattern”, which RDRAND is. (Unless you want to disprove this by demonstrating a pattern to me, which I would be more than happy to see)

>Statistical tests are easily fooled if you give them a pseudorandom bias based on some seed only known by the attacker, probably a time-derived number.
RDRAND takes no input, so it clearly can't depend on victim-provided data. So in other words, you are now essentially suggesting the existence of magic circuitry inside the intel CPU that detects the current time and switches into an evil mode after that timestamp passes. Fine, then justify to me why your instruction decoder, SMM, ring -2 circuitry, interrupt handlers etc. aren't doing the same, and why only disabling RDRAND will foil intel's evil plans.

>Which still doesn't prove it's not biased in a way only known by the attacker.
The same assumption is true for all of the CSPRNGs we use in practice, as well. How do you know AES doesn't have a bias in it known only to the NSA as well? Actually, considering RDRAND uses AES to generate its entropy stream, that would pretty much be the same thing as what you're suggesting.

>There's not always enough randomness, especially early in boot, which is specifically what RDRAND was designed to "help" with. Disk access latencies, interrupt timings, and even user input are all ridiculously easy to predict as well. Would they be able to bias every random number? Definitely not. But 1% of some of the most important ones? Maybe.
Wait, let me recap. You're arguing that a technology designed to give you sources of entropy in situations where you would otherwise have poor entropy is bad because it could give you poor entropy? Got it.

>Guess what? That's what most servers are run on these days.
Yes, thanks for supporting my point!

(cont)

>There are severe weaknesses in the current implementations of entropy pools
Yes, because Linux uses a piece of shit homegrown 20-year-old entropy pool that uses a mix of fucked up half-MD4 primitives. Good job linking me a PDF that confirms what we already know. Now try convincing the (((kernel maintainers))) to switch to a superior entropy pooling mechanism like Yarrow or Fortuna, which other operating systems have been using forever but which the guy maintaining the Linux code keeps grasping at straws to avoid switching to.

Also, you haven't responded to my question for why you believe sources like interrupt timings and disk seek latencies as reported by the CPU are supposed to be any less biased than RDRAND.

>FPU instructions
Why?

Radioactive decay is one way of doing it, but normally commercial hardware RNGs use something faster. For a good list of available methods, look no further than the summary at en.wikipedia.org/wiki/Hardware_random_number_generator#Physical_phenomena_with_random_properties

>RDRAND takes no input
Ostensibly. However, it could theoretically access anything in the L3 cache or closer; main memory access would be detectable, so it won't be doing anything with that.

>Fine, then justify to me why your instruction decoder, SMM, ring -2 circuitry, interrupt handlers etc. aren't doing the same, and why only disabling RDRAND will foil intel's evil plans.
They could be backdoored, however I doubt it for purely financial reasons. Even if they blamed some intentional backdoor on a "bug" they would still have lost a huge amount of trust, and therefore stock value. Any proof of RDRAND being backdoored would be so deeply buried (quite literally, as it would be between layers on the die) that no one will ever find it.

>How do you know AES doesn't have a bias in it known only to the NSA as well?
Well, seeing as NIST chose it over SERPENT, I'm inclined to believe there is some weakness in it. Actually, it's my hypothesis that given enough time with a theorem prover finding equivalence between instruction sequences, all modern encryption algorithms will be weakened significantly, although I doubt it's intentional in most of them.

>Wait, let me recap. You're arguing that a technology designed to give you sources of entropy in situations where you would otherwise have poor entropy is bad because it could give you poor entropy? Got it
>turning a 50% chance of knowing the kernel address space layout into a 99% chance isn't valuable for an attacker
just guessing at numbers, but I doubt I'd be far off.

>Also, you haven't responded to my question for why you believe sources like interrupt timings and disk seek latencies as reported by the CPU are supposed to be any less biased than RDRAND.
They're not. We need open source hardware random number generators.

The x87 FPU used to be a co-processor. If it had stayed that way instead of being merged onto the same chip, then GPUs would have simply replaced it and we wouldn't be wasting transistors by duplicating functionality.

>Any proof of RDRAND being backdoored would be so deeply buried (quite literally, as it would be between layers on the die) that no one will ever find it.
Again, if RDRAND is biased, you can detect that bias. You aren't convincing me with your hypothetical “it could be backdoor'd on the first of january 1970 in an alternate universe!” argument, because it doesn't apply to me.

If we can't detect any bias right now, then it's not backdoor'd right now. And your “intel won't make a time-based backdoor because it would be detected and ruin their customer trust in the future!” applies to RDRAND as well.

>Well, seeing as NIST chose it over SERPENT, I'm inclined to believe there is some weakness in it.
AES was faster than SERPENT, which is why it was chosen. I'm not sure if you understand how cryptographic cipher design works, but there's no way to "prove" something is secure, or to "measure" security.

All of the finalists have sufficient security, i.e. there are no known attacks on them and the number of rounds is sufficient to provide whatever hardness guarantee they advertise. There's no real debate going on about what's “more” or “less” secure of the AES finalists, because they're all “secure”. It's not a property we can evaluate.

However, speed is a property that we can, and AES was significantly easier to implement efficiently in software than SERPENT, and this is something people can easily agree upon. So that paints a clear picture for preferring AES over SERPENT, given the fact that neither AES nor SERPENT are any more or less secure.

(cont)

>all modern encryption algorithms will be weakened significantly, although I doubt it's intentional in most of them.
Of course. Absolute hardness doesn't work, so all functions relying on being confusing will eventually be weakened by sufficient amount of effort into making them less confusing. That's why we keep replacing our algorithms. It's not that the people in the 70s were worse at mathematics, it's just that we analyzed those algorithms to the point where they're no longer confusing.

In general, the security of any system in practice depends on how many eyes are trying to break into it. The same is true for cryptographic primitives, operating systems, etc. Something nobody is interested in analyzing will have fewer incentives for people to find those weaknesses.

That's the only reason why people use SERPENT or TWOFISH or whatever these days when they don't trust AES - it's not because SERPENT is somehow intrinsically more secure, but it's simply because AES has gotten a *LOT* more attention from people trying to break it. But if SERPENT was the AES winner, the reverse situation would have been the case.

>>turning a 50% chance of knowing the kernel address space layout into a 99% chance isn't valuable for an attacker
Except in the absence of entropy, they have a 99% chance of guessing the kernel location, because there's no entropy to randomize it with - while with RDRAND, they have a very, very low chance instead. (A statistical bias of the kind that would be slight enough to evade detection will probably only give you information about like 0.001% of the bits at best)

>You can tell if the results of RDRAND are random, too.
you're wrong

Intel RDRAND = AES256(NSA_KEY ^ counter ^ HASH(CPUID) ^ (Timer in seconds))

Feel free to generate however many gigabytes of random data you want and pass them through all the known statistical tests (chi-square, arithmetic mean, monte carlo, serial correlation, etc.) to your heart's content.

Guarantee you it will pass them all, making it as cryptographically strong as the best of the CSPRNGs that you're currently trusting.
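
If anyone actually wants to run that experiment, here's a sketch of the generator half (x86-64 AT&T syntax; hypothetical calling convention with the buffer in RDI and a byte count that's a multiple of 8 in RSI; assumes CPUID reports RDRAND support). Dump the buffer to a file and feed it to your test suite of choice.

fill:
    rdrand  %rax        # CF=1: RAX holds 64 fresh bits
    jnc     fill        # CF=0: underflow, retry (real code would bound this)
    mov     %rax, (%rdi)
    add     $8, %rdi
    sub     $8, %rsi    # RSI = bytes remaining
    jnz     fill
    ret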

>fuckers paranoid of NSA Backdoors
I'm more afraid of rubber-hose cryptanalysis, lads.

>Guarantee you it will pass them all, making it as cryptographically strong as the best of the CSPRNGs that you're currently trusting.

Which is not strong if it's generated using a key known by someone and a predictable sequence, you fucking retard.

exactly

Do not store keys inside your head and you are safe.

If this was the case, then you'd expect duplicate cryptographic streams for PCs initialized to the same hardware time.

how do you store keys then?

on a USB stick you can flush down the toilet if the feds come knocking?

No, your keys are safe. YOU are not safe.

>implying that the NSA and the CIA are honest enough not to fake a case
The problem isn't the backdoors. It's the intel agencies.

Then get a citizenship of a country that has amd agencies.

Stupid fucker, I lol'd hard.

You really are a dense and dumb faggot.

Still trying to find a good price on an x86 palmtop, only have a 320LX currently

>RDRAND takes no input, so it clearly can't depend on victim-provided data

Only an idiot would think 'RDRAND is a backdoor' somehow means RDRAND magically detects crypto code and turns it off without telling you. The idea is that RDRAND can almost certainly be compromised by a state actor and it's impossible to detect if this has occurred.

The RDRAND microcode could scan for specific conditions (say, a 512-bit number stored in r8-r15) and efuse off the DRBG refresh. Or a service running on the ME could be responsible for initializing the hardware entropy source, and might simply switch the ES to output a pseudo-random test vector when certain packets are detected on the integrated NIC. Or, to flip things around, the ES might just be poorly designed and could leak information out of every side channel imaginable (not a fixed-cycle-count instruction, runs off the same power source as the rest of the processor, the bandgap they use as an entropy source might be literally right next to the bandgap they use for package temperature sensing).

A backdoor doesn't mean every machine ever is compromised immediately, it means a specific machine may be compromised easily by an individual who knows about the backdoor.

>To me, “random” means “we're not capable of detecting a pattern”, which RDRAND is.

Oh you're just an idiot. Well, you won't find a pattern in Dual_EC_DRBG, so why don't you go ahead and switch all your systems to use that for me? Who needs an entropy source when all you need to be random is 'the absence of an easily discernible pattern'?

The issue isn't 'is RDRAND a source of good statistical randomness', it's 'is RDRAND a source of trustworthy cryptographic randomness'. The answer is no. That doesn't mean you can't *use* RDRAND like Linux does, as one of many entropy sources, but it does mean you shouldn't use RDRAND to replace /dev/random as FreeBSD once did.

>The RDRAND microcode could scan for specific conditions (say, a 512-bit number stored in r8-r15) and efuse off the DRBG refresh.
So could the microcode for any other instruction of your choosing.

I don't get why you trust 999 of intel's instructions but not the 1000th. It just seems silly and naive to me.

>Well, you won't find a pattern in Dual_EC_DRBG, so why don't you go ahead and switch all your systems to use that for me? Who needs an entropy source when all you need to be random is 'the absence of an easily discernible pattern'?
How is that even remotely the same fucking thing at all? The issue with the design of Dual EC DRBG is that it's trivially possible to pick a magic number that contains a backdoor. This is not mere speculation, it's easily demonstrable for constants P, Q of our choice.

In other words: We can demonstrate and measure the bias, thus proving it's not random. Now go do the same for RDRAND.

I recommend getting an XT clone, you will learn assembly in no time

>x86 palmtop
They don't really exist, unless you mean subnotebooks or modern Atom ones

>AT&T
Put a bullet through your brain

What exactly are the MONITOR and MWAIT instructions used for?

>AT&T Syntax
>GAS
kys

I like this one a lot

en.wikibooks.org/wiki/X86_Assembly/Introduction

I started with challenges.re/ together with a copy of the x86 reference manual + an HTML instruction list

Intel doesn't want to break compatibility with all the old software that relies on x87.

heh

I know, but I want a DOS one like the dude I quoted, which is x86

why did 8086 take off when it was retail @ ~ $600
and the 6502 was retail @ ~ $25

Because IBM.

Intel and Microsoft would not be where they are today without IBM.

mov eax, 1   ; __NR_exit = 1
int 0x80     ; same legacy syscall gate, Intel syntax

Fixed

thank you

i believe gates was given the deal because he had family within IBM

was 8086 any better than 6502

i think not, the 6502 was less complex

it took gates a long long time to develop 1st gui (= win 1) .. they had no experience at all vs xerox

it's all they ever achieved, really

now it's a mess

Here's an example

just a photo

"Assembly language step by step Jeff Duntermann"
Great book however only 32bits, 64bit isn't that much extra

heh

2 thread synchronization

>tfw code monkey

because they were in totally different leagues from one another; the 8-bit 6502 was incredibly weak even by 1970s standards and geared more towards home computers and embedded systems like terminals and controllers, while the 16-bit 8086 could address far more memory and was more powerful in general

the 8086 and 8088 were also not all that expensive, especially to OEMs who could get them for basically pennies by the tray, and by the time clones (and hell, the PC itself) were a thing, the 8086 was already old news and had long since come down in price

in addition to that, the 8086 also had the low-cost 8088 option which allowed IBM to reduce costs and complexity of the PC by using more inexpensive and plentiful 8-bit supporting logic, something the 68k wouldn't allow them to do, while still being performant enough to make a difference

lastly, the 6502 /did/ take off, it was wildly popular, but in the ultra low end, and it died in "everyday computing" because that's where it stayed. While Intel and the PC platform as a whole continued to push forward and evolve, Commodore and all the other 6502 users kept pushing the same dying platforms that could barely run their own software, based on the same exact slow-ass 10-year-old '70s relic that people only bought for the price tag, and by the end of the decade it was pretty hard to justify buying one new when for a few hundred more you could get a nice XT clone that absolutely roasted it in pretty much every way imaginable

Any site that will give me a quick overview of x86-64? I already sort of know x86-32.
Also, is yasm better than nasm?

>push/pop (just sub/add [R/E]SP)

I'm not knowledgeable enough to dispute this, nor am I trying to, but out of curiosity:
Why would you want to fiddle with the stack pointer yourself rather than letting push/pop do it?

Would you also be opposed to call/ret instead of jumps with push and pop for the same reason?

>You can tell if the results of RDRAND are random, too.

No you can't. It looks like you don't know the difference between the concepts of "random" and "pseudo-random".

There is no test for randomness, because such a test is theoretically impossible.

We do have tests for pseudo-randomness, but they are based simply on collecting and interpreting statistics, so they will always be subject to interpretation about what a "safe" margin of error is, and even whether the tests themselves are broad enough to support a "safe" conclusion.

But that's not really the issue here. RDRAND could pass all the known statistical tests for pseudo-randomness with a 10-sigma confidence, but still could nevertheless be rigged by the NSA with a weakness that's not detectable by the current generation of PRNG-analysis tools. Even an independent audit of the entire processor design couldn't "prove" that RDRAND has not been deliberately weakened, because there would be no way to know if you had been given a falsified set of documents.

>push/pop (just sub/add [R/E]SP)
That would be disgustingly bloated.
One or two byte opcodes vs 8+ bytes.
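
Concretely (x86-64, AT&T syntax), here's what that size difference looks like for pushing a single register:

    push  %rax              # 1 byte:  50
    # vs. the open-coded equivalent:
    sub   $8, %rsp          # 4 bytes: 48 83 EC 08
    mov   %rax, (%rsp)      # 4 bytes: 48 89 04 24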

The 6502 was a LOT cheaper than any of Intel's competing offerings for its whole life span. When it came onto the market in 1975 at $25 each when bought in trays, Intel had to lower the price of their then-current chip, the 8080, from $200 to $100 in similar quantities. For scale, the Intel 8088 only became available in 1979, and it was introduced at $125 each when bought in trays. The 8086 and its derivatives never even came close to the 6502 in price.

The reality is that the 6502 and the 8086 derivatives were competing in two almost completely different price categories. Comparing the two is thus pointless unless you're looking at bang for the buck, in which case the 6502 absolutely pummels the 8086 and its derivatives.

As for cheap IBM XT clones, by the time they became publicly available the 6502 had disappeared from all but the absolute lowest end of the market. What XT clones actually competed with were 68k machines like the Atari ST, the Amiga and the early Macintoshes.