C is deprecated

#include "stdio.h"

main() {
int a = 123;
int* b = &a;
float* c = (float*)(b);
printf("a: %d b: %d c: %d\n", a, b, c);
printf("int ptr cast to float ptr, dereferenced: %f\n", *c);
float d = (float)(*b);
printf("int ptr dereferenced, cast to float: %f\n", d);
}


output:
a: 123 b: 2293304 c: 2293304
int ptr cast to float ptr, dereferenced: 0.000000
int ptr dereferenced, cast to float: 123.000000


>c cucks will defend this

from:
yodaiken.com/2017/06/26/the-c-standard-versus-c-and-the-mother-of-all-hacks/


>if i do something super obscure and pointless i can trigger a glitch

WOW what a scoop. did you know javascript also does weird stuff with variables? there's a funny video on it called "wat", it will make you LOL out loud!

What's the problem?

>nice b8
kill you're self

standard javascript parlance for forcing integer variables to remain integers is to reassign them like so:
num = +num | 0 // bitwise OR with 0 truncates to a 32-bit signed integer
num = ~~num // double bitwise NOT, same 32-bit truncation
(both go through ToInt32, so neither actually preserves 64-bit values)

What's wrong? This is all right as far as the standard is concerned.

Can't spell cuck without C

I'm sorry

This, C stands for crippled anyways

the c obviously stands for cool

cancer*

I'm not sorry

Congrats on the stroke

> expected thing happened
what did you expect to happen?

For both to be the same?

The first one takes those 4 bytes, with 25 leading zeros, and treats them like a float. The second actually converts the integer into a float, so it's no longer the same in binary.
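
Here's a rough sketch (not from the thread's article, and it assumes a 32-bit int plus IEEE 754 floats) that prints both bit patterns; it goes through memcpy so the comparison itself stays out of aliasing trouble:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    int i = 123;
    float f = 123.0f;                    /* the properly converted value */
    uint32_t ibits, fbits;

    memcpy(&ibits, &i, sizeof ibits);    /* raw bytes of the int   */
    memcpy(&fbits, &f, sizeof fbits);    /* raw bytes of the float */

    printf("int 123 as bits:     0x%08x\n", ibits);   /* 0x0000007b */
    printf("float 123.0 as bits: 0x%08x\n", fbits);   /* 0x42f60000 */
    return 0;
}

That 0x42f60000 is the same value as the 1123418112 that shows up further down the thread.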

Oh shit. I'm sorry. So it treats it like a value too small to represent in six decimal places?

not an argument

>Hurrr, why doesn't my undefined behaviour act like I expect it to?
Fucking idiot.

Do you know how floats work? For a 32-bit float:
the 1st bit is the sign (0 is +, 1 is -)
the next 8 bits are the exponent, stored with a bias of 127; for normal numbers that gives a range of 2^-126 to 2^127, an all-zeros field (as in your case) means a subnormal number, and all ones means infinity or NaN
the remaining 23 bits are the "mantissa": normal numbers have an implicit leading one before the binary point, and those 23 bits are the fractional part (subnormals drop the implicit one)

overall it's very similar to scientific notation; this looks like a nice link to read about it:
h-schmidt.net/FloatConverter/IEEE754.html

so the 123 treated as float is 1.72E-43
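
Rough sketch (mine, assuming the 32-bit IEEE 754 layout described above) that decodes the bit pattern 123 by hand and lands on that same value:

#include <stdio.h>
#include <stdint.h>
#include <math.h>

int main(void) {
    uint32_t bits     = 123;                  /* 0x0000007B     */
    uint32_t sign     = bits >> 31;           /* 0              */
    uint32_t exponent = (bits >> 23) & 0xFF;  /* 0 -> subnormal */
    uint32_t mantissa = bits & 0x7FFFFF;      /* 123            */

    /* an all-zeros exponent field means subnormal: no implicit
       leading one, value = mantissa * 2^-149 */
    double value = ldexp((double)mantissa, -149);

    printf("sign=%u exponent=%u mantissa=%u\n", sign, exponent, mantissa);
    printf("value = %g\n", value);            /* ~1.7236e-43    */
    return 0;
}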

I'd just like to interject for a moment. You appear to be using an element from this image in your post: commons.wikimedia.org/wiki/File:Dennis_Ritchie_(right)_Receiving_Japan_Prize.jpeg
You may have done this accidentally, but you've violated the terms of its copyright license. This is a serious offence and I hope you take it as seriously as it deserves.
Not to say that you're not allowed to create derived works from this image, you are, but this picture is released under the Creative Commons Attribution 2.0 Generic license. Therefore, you are free:

to share - to copy, distribute and transmit the work;
to remix - to adapt the work;

Under the following condition:

attribution - You must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work).

In this case, the attribution requirement is resolved simply by including the following in your post:

By Denise Panyik-Dale [CC BY 2.0 (creativecommons.org/licenses/by/2.0)], via Wikimedia Commons

Now that you have read this, I hope you have a better understanding of your rights and obligations when remixing and sharing this work. They will let you edit, now be nice and credit~

not the first time seeing this, is it some meme, a bot, or a serious tryhard person?

>retards playing with casts
>if I point a gun at my foot and pull the trigger, I get a hole in my foot, guns are deprecated!

The code in the OP is a whole new level of stupid. I thought people working with computers, especially those who want/need to write C, are supposed to know at least a little bit about how they work. How do you even use floats in C without having any clue at all about IEEE754?

I used to, in my first year of uni. That's why it occurred to me that it's a number too small to express in just 6 decimal places.

Reinterpret_cast from Seeples is a lot harder to shoot yourself in the foot with though, assuming you are doing the same things with it.

Often you're not though, and you have to keep the object's destructor in mind...

Why wouldn't the cast fill the bits starting from the least significant bit? In the first example at least

Or is the cast and dereference combined producing undefined behavior?

And then there's this shit:

en.wikipedia.org/wiki/Fast_inverse_square_root
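
For anyone who hasn't seen it, the core trick is something like this sketch (typed from memory, with int32_t instead of the original's long so it also behaves on 64-bit machines); it leans on exactly the kind of pointer punning this thread is arguing about:

#include <stdint.h>
#include <stdio.h>

static float Q_rsqrt(float number) {
    const float threehalfs = 1.5f;
    float x2 = number * 0.5f;
    float y  = number;
    int32_t i = *(int32_t *)&y;      /* reinterpret the float's bits as an integer (UB under the aliasing rules) */
    i = 0x5f3759df - (i >> 1);       /* magic constant and shift give a rough first guess */
    y = *(float *)&i;                /* reinterpret back to float */
    y = y * (threehalfs - (x2 * y * y));   /* one Newton-Raphson step to refine the guess */
    return y;
}

int main(void) {
    printf("Q_rsqrt(4.0f) = %f (exact answer: 0.5)\n", Q_rsqrt(4.0f));
    return 0;
}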

I guess because int and float are the same size.

I'm more impressed by the guys doing ioccc, they really have their shit together.

>reminding someone to respect the photographers who respect your freedoms is a meme

I figured it out.

The bits of a floating point representation aren't read like a plain integer, i.e. the lowest bit is not 2^0 of the value.

So when you cast the pointer and dereference, you're telling the compiler to look at the binary representation of an int as if it were a float, which results in some number with a really large negative exponent.

here's another example to perhaps better illustrate what the problem is here, since most of you missed the point, dear friends

#include "stdlib.h"
#include "stdio.h"

main() {
float* data = (float*)malloc(sizeof(float) * 2);
*data++ = 123;
*data = 456.789f;
data--;
{
int a = data[0];
float b = data[1];
printf("a: %d b: %f\n", a, b);
}
{
int* a = (int*)data;
float* b = data + 1;
printf("a: %d b: %f\n", *a, *b);
}
}


output:
a: 123 b: 456.789001
a: 1123418112 b: 456.789001


most of you didn't read the article I linked.

this is broken-by-design compiler behavior.

c is deprecated.

lpbp
what do you suggest instead of C, user?

Savage.

Rust

Go

agreed, my language of choice today to replace C++
[spoiler]I never coded a single line of C++ or Rust[/spoiler]
has a very C#-ish feel to it
it's a very nice language
although clearly made for pajeets

lol then there was this nigga

lol

Basically similar results in Rust:
fn main() {
    let a: i32 = 123;
    let b: *const i32 = &a;
    let c: *const f32 = b as *const f32;

    unsafe {
        println!("a: {} b: {} c: {}", a, b as i32, c as i32);
        println!("int ptr cast to float ptr, dereferenced: {}", *c);
        println!("int ptr dereferenced, cast to float: {}", *b as f32);
    }
}

Output:
a: 123 b: -842457756 c: -842457756
int ptr cast to float ptr, dereferenced: 0.000000000000000000000000000000000000000000172
int ptr dereferenced, cast to float: 123


The value of b and c is different for me than for you because my machine != your machine, and the second line is different because printf's %f only prints six decimal places, so yours got truncated. Otherwise, this is basically how casting works. When you cast a pointer, you do not change the value of that pointer. So when you decided to cast the int pointer to a float pointer, you're just reinterpreting the same bytes at the same address. When you instead convert an int to a float, rather than reinterpreting bytes, you are performing a change to the data. There are times when either behavior is desired, and so both are an option.
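
A tiny sketch of that point, that the cast changes the pointee type but not the address:

#include <stdio.h>

int main(void) {
    int a = 123;
    int   *ip = &a;
    float *fp = (float *)ip;      /* same address, different pointee type */

    printf("as int*:   %p\n", (void *)ip);
    printf("as float*: %p\n", (void *)fp);   /* prints the same address */
    return 0;
}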

As for the article you linked... relying on the behavior of an overflow is bad and you should feel bad for considering it.

There is nothing wrong with the behavior of this program. Again, we see the same results in Rust.

fn main() {
    let data: [f32; 2] = [123.0, 456.789];
    {
        let a: i32 = data[0] as i32;
        let b: f32 = data[1];
        println!("a: {} b: {}", a, b);
    }

    unsafe {
        let a: *const i32 = data.as_ptr() as *const i32;
        let b: *const f32 = &data[1];
        println!("a: {} b: {}", *a, *b);
    }
}

Output:
a: 123 b: 456.789
a: 1123418112 b: 456.789

Holy shit, it does exactly what one would expect. C really is a deprecated piece of shit language

Rust considered deprecated.

Go on and port every fucking ASM to JavaScript then, hipster faggot

>pointer magic shit does magic shit
So the pointer cast doesn't autoconvert the underlying value. Whoop-dee-fuckin-doo.
This is like that 4 star programming garbage. Who the fuck actually does this outside of IOCCC/party trick programmers?

>explicitly use 32 bits of integer data as 1 sign bit, 8 exponent bits, and 23 mantissa bits without casting the value
>WOW LOOK GUYS IT'S NOT CONVERTING IT
What the fuck is wrong with retards like you?

>Reinterpret IEEE754 floating point value as 2's complement integer
>Get garbage from printf
>HURR C IS BROKEN GUYS

Fuck off pajeet.

For the fun loving: *data++ = 123 stores the value and then increments the pointer. Makes perfect sense to me. Only why would someone reinterpret and print that memory with int *a = (int *)data? So fucking stupid.

And what language do you recommend the world use that is not C or Rust (or C++, since it has the same semantics as C for this) for systems programming? Ada? I could probably find a way to get Ada to do the same thing, although it's been ages since I've used it. D or Go? These have garbage collectors and are inappropriate for this task. COBOL? No one should ever use COBOL.

What language do you want people to use, that can fulfill the role that Rust, C, and C++ perform, that does not allow reinterpretation of data by casting pointers?

>casting malloc in C
>casting IEEE754 to int (most likely 2's complement)
Want to know how i know you're retarded?
Again, this is what should happen. Do retarded shit, get retarded results. I wonder, have you even gone through basic networking class? This shit is like first or second semester stuff.

C is for brainlets.

Then u dunno C.

c is a pointer and you pointed it at the same address as another pointer, so now the value there is read from memory in float format, not int format.

In d you dereferenced it first, then type cast it to a float.

Who the hell mentioned Deadlang?
You may as well tell us what the fucking behaviour in COBOL is; that language is more relevant.

You people are picking apart my admittedly flawed examples and still missing the point:

>It is undefined behavior to cast an int* to a float* and dereference it (accessing the “int” as if it were a “float”). C requires that these sorts of type conversions happen through memcpy: using pointer casts is not correct and undefined behavior results. The rules for this are quite nuanced and I don’t want to go into the details here (there is an exception for char*, vectors have special properties, unions change things, etc). – Chris Lattner

>undefined behavior is undefined
Who the fuck cares?
If you want a party trick language hack in Perl. C will just no-lube your ass.

Yes, and? Why should it be defined behavior? C was designed to work on fucking anything, and so it might produce different behavior than what you showed in your examples. However, the behavior that you did show in your examples was pretty reasonable. Any good language (Rust being among these good languages) should do the exact same thing unless specialized hardware requires that other behavior occur.

/thread

>unsafe

Dereferencing a raw pointer in Rust is an unsafe operation because that pointer has the potential to be null or otherwise invalid.

Good to know Rust deems them unsafe already. C cucks eternally rectally shattered

>user can't understand pointers

Null is not a pointer, null is the absence of a pointer. This should be reflected in the type system with an optional/maybe type.

Null is not the absence of a pointer, it is a pointer to address 0. This is not always an invalid pointer. For instance, in the case of real mode x86, it is the location of the interrupt vector table. Rust supports the option type and uses null pointer optimization on options of references and boxes, which are, by definition, not null. It also supports raw pointers, which can be null.

It is illogical to use a pointer to zero to indicate the absence of a pointer, especially if a pointer to zero may be valid. You shouldn't use NaN to represent the absence of a float either.
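
For what it's worth, here is a sketch (my own illustration, in C to match the rest of the thread) of what "reflect it in the type system" can look like without a built-in option type: an explicit presence flag instead of overloading address 0:

#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool present;    /* is there a pointer at all?           */
    int *ptr;        /* only meaningful when present is true */
} optional_int_ptr;

static optional_int_ptr some(int *p) { return (optional_int_ptr){ true, p }; }
static optional_int_ptr none(void)   { return (optional_int_ptr){ false, NULL }; }

int main(void) {
    int x = 42;
    optional_int_ptr a = some(&x);
    optional_int_ptr b = none();

    if (a.present)  printf("a -> %d\n", *a.ptr);
    if (!b.present) printf("b holds no pointer at all\n");
    return 0;
}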

>C is deprecated!
Increasingly self-loathing front-end JavaScript programmer shouts for the seventh time this month.

I don't understand how anybody can like JavaScript

I used it for a while. It's a pretty casual language. It's fun to write in, and you immediately get to see some results. Now that I've gone back to C/C++, I'm much more content with what I've made, though.

>immediate results
The same can be said of any scripting language, but they don't typically become a clusterfuck of bizarre design decisions after you advance past toy scripts.

>It is illogical to use a pointer to zero to indicate the absence of a pointer, especially if a pointer to zero may be valid
It all really depends on the platform. Most operating systems will map the virtual address 0 to an invalid page, so dereferencing it causes the program to blow up, and for all intents and purposes, it's an invalid pointer. It is entirely possible to have a platform where all pointers are valid, in which case, it is impossible to define an object the size of a pointer which contains an invalid pointer. If we are programming for such platforms, it is likely that Rust would either not allow a reference or box to that particular location in memory, or it might not implement null pointer optimization, or most likely, it would not have an implementation at all. In the case of C, there would be different behavior for dereferencing NULL, and a programmer wishing to represent a "not a pointer" would likely use a custom struct for this particular platform.

It is worth noting that the reason why C defines so many things as "undefined behavior" is so that it can run on platforms that are really, REALLY weird. Some platforms have 9-bit bytes. Some might have different-sized pointers for code and data. Some have 0 as a valid pointer. It is entirely possible (but slightly improbable) that some future platform might require that some section of memory be "tagged" according to whether it holds a floating point number or an integer, and ban loading floating point values into integer registers and vice versa. In such cases, casting int* to float* and then dereferencing would be an error. In most cases, it's perfectly valid, and so the expected behavior is performed (a reinterpretation of bytes).

>Name
>Trip
>Long post
____ ___ _____ __ ____

That was a good post besides the trip

...

dog soup, a delight

I feel vindicated now.

Compiler optimizations or something. Just read the article, man. Here, I'll link it again:
yodaiken.com/2017/06/26/the-c-standard-versus-c-and-the-mother-of-all-hacks/

Undefined behavior doesn't mean a compiler can't optimize it. Strictly, it means the standard places no requirements on the result at all; in practice it often just means the program behaves one way on one platform and another way on another. But casting between int* and float* is pretty well defined for platforms that matter. On anything else, there are going to be much bigger differences that will keep you from using the same codebase.

This literally does exactly what it should though.
>take ptr to int
>cast it to ptr to float
>dereference it
>hurr why doesn't it output a float equivalent to my int

It's not the language's fault you're retarded.

Ignoring all of the standard shitposting, well done to you user for reminding people that freedoms come with obligations.

I bet the rest of the cucks on this board wouldn't provide corresponding source for GPL code.

All of these make perfect sense if you know assembly. The compiler makers tried to get C as close to assembly as possible for performance. The problem is that this introduces strange behavior that could be incorrect depending on your system, plus other seemingly arbitrary rules that let you work around some of the other rules. Like, an unsigned char pointer is a magical device that can move uninitialized data correctly, but every other type attempting this is considered 'undefined behavior'.
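
A small sketch of that unsigned char carve-out: inspecting an object's bytes through an unsigned char pointer is defined behavior, unlike the int-pointer-to-float-pointer cast in the OP:

#include <stddef.h>
#include <stdio.h>

int main(void) {
    float f = 456.789f;
    const unsigned char *bytes = (const unsigned char *)&f;   /* allowed by the aliasing rules */

    printf("bytes of %f:", f);
    for (size_t i = 0; i < sizeof f; i++)
        printf(" %02x", bytes[i]);
    printf("\n");
    return 0;
}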

shouldn't you use void* for that?

tl;dr ints and floats are stored differently in memory. If you simply attempt to use one as if it were the other, it will show a completely different number.

This makes sense, you retard. This is also fine, you retard.

The problem is that we have tons of idiots who don't know how to program trying to use the language. It seems like everyone wants C to turn into JavaScript or some shit.

Idiot wannabe programmers...

javascript is fucking disgusting, I don't know why anyone would want anything to look like javascript

Yeah. This is why we use memcpy instead.

It's like nobody knows the standard anymore.
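
A short sketch (assuming a 32-bit int and IEEE 754 floats, like the machines in this thread) of the memcpy version of what the OP was trying to do; it's the same reinterpretation, without the aliasing violation:

#include <stdio.h>
#include <string.h>

int main(void) {
    int a = 123;
    float c;
    memcpy(&c, &a, sizeof c);    /* reinterpret the int's bytes as a float */
    float d = (float)a;          /* ordinary value conversion              */

    printf("bytes of 123 read as float: %g\n", c);   /* ~1.72e-43 */
    printf("123 converted to float:     %g\n", d);   /* 123       */
    return 0;
}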

Cause JavaScript devs are everywhere and they work for less. People want to be able to hire little shits and cheat real programmers out of their jobs.

Meanwhile the real programmers are writing the JavaScript interpreters, JITs, VMs, etc to make programming in JavaScript faster and easier.

They read K&R so you don't have to.