/dpt/ - Daily Programming Thread

Will Rust ever catch up?

Other urls found in this thread:

wiki.haskell.org/Monomorphism_restriction
open.kattis.com/problems/yikes
youtube.com/watch?v=I8LbkfSSR58&list=PLbgaMIhjbmEnaH_LTkxLI7FMa2HsnawM_&index=1
twitter.com/AnonBabble

>Will Rust ever catch up?
How can it when it's running slower?

That depends on how quickly C++ can adapt. C++20 looks like it will be a significant improvement. It's still terrible, but less terrible.

bluepill me on rust

In C++, this is just
std::vector<long> vec(6);
long product = 1, n = 1;
std::generate(vec.begin(), vec.end(), [&n, &product]{ return product *= pow(n++, 3); });
std::cout << vec.back() << '\n';

Any truth to this?

SICM in Haskell

welp, the engine doesn't make the game; the game developer makes the game.
a game engine is just a tool, and a tool user doesn't care which tool it is as long as it does the job.

Eh, to me, just the mere fact that Rust has a centralized package repository and a standard build system is a big deal. It got things right from day 1.

My C++ is less verbose (3 lines) than your example but I won't post it :)

Previous thread:

Sorry, forgot that.

>Eh, to me, just the mere fact that Rust has a centralized package repository and a standard build system is a big deal. It got things right from day 1.

Okay, this is actually a real big plus. Installing C++ libraries is an absolute hell.

With that said, so is building gtk.rs on Windows. But that's because of the C dependency. I wish cargo would just install everything for you instead of relying on system DLLs for GTK.

My solution (not including the couts) is only 3 lines.
How did you do it with 0 lines of code?

seems kinda fucking pointless. Just use a procedural solution.
int out = 1;
for (int i = 1; i < 6; ++i)
    out *= i * i * i;

Disgusting.
Please write safe, idiomatic Rust, like so:
extern crate num;

use num::One;

fn cube_product<T>(iter: T) -> T::Item where T: IntoIterator, T::Item: Clone + num::One + std::ops::Mul<Output = T::Item> {
iter.into_iter().fold(T::Item::one(), |acc, i| i.clone() * i.clone() * i.clone() * acc)
}

fn main() {
println!("{}", cube_product(1 .. 6));
}

rust shitposting has consumed him

But then how would I show off my cool knowledge of the standard library?

Based Rust user, is that you? If so, post the smug akari with rose to confirm

No, but this is far from the first time that I've posted ultra-safe idiomatic rust here.

>> Just use a procedural solution.

Doesn't compose well from smaller debugged modules.

For the equivalent of longer iterator pipelines, you end up with either multiple loops that aren't fused, intermediate memory allocations, or a giant monolithic loop that is impossible to debug. Lazy iterators, on the other hand, can be completely non-allocating, and all the loops are fused into one.
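Not Rust, but the same fusion idea can be sketched with Python generator expressions, which behave like lazy iterators (the names below are made up for the example):

```python
# Generator expressions are lazy: nothing below allocates an
# intermediate list, and consuming the chain drives every stage
# in a single fused pass.
nums = range(1, 7)
cubes = (n ** 3 for n in nums)            # lazy: no work done yet
evens = (c for c in cubes if c % 2 == 0)  # still lazy

total = sum(evens)  # one pass pulls elements through the whole chain
print(total)
```

Each element flows through the whole chain one at a time, so there is never a materialized intermediate collection.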

What book(s) did you read to reach this peak performance?

See, this is what I meant when I said that Rust is terribly verbose because of the fucking explicit type annotations on every function. For a lot of things, the compiler could just infer it.

The devs need to make impl Trait stable as quickly as possible.

None.
I just had to remove my penis and legally donate $200 per month to Anita Sarkeesian's patreon, and I automatically become good at Rust.

I know this is a joke, but it scares me. There are rust coders who would justify this, and as someone who uses rust I don't look forward to dealing with their shit.

It's like how Java codebases are so often some collection of design-patterned generic everything-doers interacting in some nebulous way.

This is actually scarily representative of some of the Rust I wrote when I was into it for a few months.

This is why I don't rust. The ownership semantics are fine, but the lack of type inference on function definitions is just maddening. Or maybe Julia has spoiled me.

Are you actually serious? Can you begin to imagine how elaborately I can translate this to OOP C++?

So wait -

I have to copy the kareha folder into a new folder each time I want to create a new board?

And then I have to edit my apache config file for each new board?

That doesn't seem right. Who's done this before?

>>/wdg/
or make a proper sysadmin thread

Do I have to optionally replace my brain with that of an uncultured orc, fuck my cousin sister every Wednesday night and live in the south to sleep with cows and horses too?

The bigger concern is that the guy might not be joking. I can easily see a bunch of programmers coming from other languages deciding that doing things in the most verbose and hard-to-read manner is idiomatic, rather than writing the most straightforward and easy-to-understand code.

Hey now, invoking corporate job-security OOP systems is taking the gloves off.

But yeah, it's mostly annoying that Rust is powerful enough to do generic programming extremely well, but the type signatures tend to undergo a combinatorial explosion. So at some point you end up giving up on the generic aspect and basically just end up writing Go in Rust.

In Haskell, you just stop writing out your type signatures while prototyping. I wish the type signatures in Rust were optional in debug mode but required for release mode, with a Rust toolchain plugin to just insert inferred generic type signatures into your code automatically just like rustfmt autoindents it for you.

You could at least format it a bit so that ugly mess turns into at least a slightly more readable ugly mess

fn cube_product<T>(iter: T) -> T::Item
where
    T: IntoIterator,
    T::Item: Clone + num::One + std::ops::Mul<Output = T::Item>,
{
    iter.into_iter().fold(T::Item::one(), |acc, i| {
        i.clone() * i.clone() * i.clone() * acc
    })
}

Now that you can actually see all the type information it looks even uglier.

I never realized that I hated the turbofish operator before. I now know that I do.

>Just use a procedural solution
>Or stop being a retard and solve it mathematically
why not both?

constexpr auto result =
    []{
        int out = 1;
        for (int i = 1; i < 6; ++i)
            out *= i * i * i;
        return out;
    }();

int main()
{
    return result;
}


main:
mov eax, 1728000
ret


lol

>I wish the type signatures in Rust were optional in debug mode but required for release mode
Maybe what you mean is: optional for private items but required for public items.

I didn't say it'd be any less ugly. Just more readable.

As for the turbofish... meh, it's about the same as any other nesting. [] () {} all really blow when they're nested, especially in statements.

C++ has the same problems when using templates.

Lisp has the same sort of problem as a built in part of the language ( ((())))

Wow C++ can do that? That's fucking brilliant. Kinda makes me want to learn C++ after so many years of C.

In Haskell this is just
import Data.Foldable -- never ever use foldl instead of foldl'

cubeProduct = product . map (^3)
-- coerces to cubeProduct :: (Num t) => [t] -> t ???
-- fucking monomorphism restriction.

-- how about this:
cubeProduct' = foldl' (\acc i -> acc * i^3) 1
-- cubeProduct' :: (Foldable t, Num a) => t a -> a
-- that's more like it.


That's basically what the original Rust code did

fn main()
{
    println!("{}", (1..6).fold(1, |acc, i: i64| i.pow(3) * acc));
}


Which works as long as you ensure the inputs to your function satisfy certain traits. That is basically what makes the "safe idiomatic" version suggested above so ugly.

The point is that Haskell's global type inference will randomly specialize some of the generic types
wiki.haskell.org/Monomorphism_restriction

Which can sometimes cause some hilariously difficult type errors, which magically vanish when you put the correct generic type signature over a function without changing the body.

I need to carry Python around on a USB stick.
What should I do?

This is the product of the integers from 1 to 5, not 1 to 6. The person who posted the question either quoted the wrong range or the wrong product.

I find it absolutely hilarious that Haskell's monomorphism restriction came up in this specific example.

All languages suck.

kek

I want to find out if any element in a list occurs an odd number of times and, in such a case, remove one of those elements to make it occur an even number of times. I accomplished this using 2 for loops in Python, but I want to know if there is another way to do this in O(n) time or better. Right now the 2 loops result in O(n^2) time.
import collections

l = [1,1,2,2,3,3,3,4,4,5]
count = collections.Counter(l)
odd = []
for i, j in count.items():
    if j % 2 != 0:
        odd.append(i)
for element in odd:
    if element in l:
        l.remove(element)

output: l = [1,1,2,2,3,3,4,4]

It could be done in O(n) but it's simply not worth it

That's not the monomorphism restriction, that's just you using map instead of fmap.

store the item counts in a hash map

An O(n) algorithm for this would loop through the list once: for each item, if it's unique (not seen before), add it to a map with an initial count of 1; if it isn't, increment its count by 1.

You're implementing a hash map with constant-time operations, where each unique value is a key and its count is the value.

C++ noob, here.

Is there ANY reason to use smart pointers other than being lazy as fuck?

Is there a way to have emacs indent the ternary operator like pic related?

If not, how do you style the ternary operator?

Reposting from previous:
Been trying to figure this fucking problem out:
open.kattis.com/problems/yikes

Can I represent them all as parametric shapes in 2D (with parameter t being time), and then just solve to see if a point on the bikes is equal to a point within the circle for all t between two values?

yeah, constexpr is great. and it hardly stops there. you can use basically any C++ code in a constexpr context as long as the compiler can prove it doesn't have side effects (no asm blocks, exceptions, heap allocations, or mutable global/external state; all of which make perfect sense, considering), with an exception to allow floating point math.

and since the compiler needs to enforce such guarantees, it can mean better feedback as well: if you looped over a larger range in my example such that "out" would overflow, the compiler would tell you so in a compile-time error, since overflow/underflow is undefined behavior, and illegal in constexpr contexts.

you can also write constexpr-enabled types/data structures, too. my linear algebra types are written such that you can do things like

class thing
{
    vec3 pos;
    mat4x4 transform;

    // ...

    void draw()
    {
        constexpr auto pretransform =
            mat4x4::scale(2) *
            mat4x4::rotate_z(degrees(45)) *
            mat4x4::translate(20, 50, 0);

        transform = pretransform * mat4x4::translate(pos);

        // ...
    }
};


pretransform is computed at compile time, so at runtime, draw() does one matrix multiplication. potentially not even a full one given that functions like mat4x4::translate() and the like are often mostly constant values even with a runtime value as an argument, so inlining/constant folding could potentially eliminate more operations

even if, hypothetically, you were to use strictly the common set of C and C++ features, except with constexpr added in, you'd still find it very useful. there's no doubt it's one of the single best features in C++

he quoted the wrong range; i opted for targeting the expected answer he quoted

Rate my idiomatic fizzbuzz:
extern crate num;

enum FizzBuzz<T, U, V> {
    Base(T),
    Fizz(U),
    Buzz(V),
    FizzBuzz((U, V)),
}

impl<T, U, V> FizzBuzz<T, U, V>
where T: ToString
{
    fn fizzbuzz_str<W, X>(&self, fizz: W, buzz: X) -> String
    where W: ToString, X: ToString
    {
        match self {
            &FizzBuzz::Base(ref v) => v.to_string(),
            &FizzBuzz::Fizz(_) => fizz.to_string(),
            &FizzBuzz::Buzz(_) => buzz.to_string(),
            &FizzBuzz::FizzBuzz(_) => fizz.to_string() + &buzz.to_string(),
        }
    }
}

struct FizzBuzzer<T, U, V>
where T: Iterator
{
    iter: T,
    fizz: U,
    buzz: V,
}

impl<T, U, V> Iterator for FizzBuzzer<T, U, V>
where T: Iterator,
      T::Item: std::ops::Rem<U> + std::ops::Rem<V> + Clone,
      <T::Item as std::ops::Rem<U>>::Output: PartialEq<U>,
      <T::Item as std::ops::Rem<V>>::Output: PartialEq<V>,
      U: num::Zero + Clone,
      V: num::Zero + Clone
{
    type Item = FizzBuzz<T::Item, U, V>;
    fn next(&mut self) -> Option<Self::Item> {
        match self.iter.next() {
            Some(v) => {
                match (v.clone() % self.fizz.clone() == U::zero(), v.clone() % self.buzz.clone() == V::zero()) {
                    (false, false) => Some(FizzBuzz::Base(v)),
                    (true, false) => Some(FizzBuzz::Fizz(self.fizz.clone())),
                    (false, true) => Some(FizzBuzz::Buzz(self.buzz.clone())),
                    (true, true) => Some(FizzBuzz::FizzBuzz((self.fizz.clone(), self.buzz.clone()))),
                }
            },
            None => None,
        }
    }
}

trait FizzBuzzable where Self: Iterator + Sized {
    fn fizzbuzz<U, V>(self, fizz: U, buzz: V) -> FizzBuzzer<Self, U, V> {
        FizzBuzzer { iter: self, fizz, buzz }
    }
}

impl<T> FizzBuzzable for T where T: Iterator + Sized { }

fn main() {
    (1..101).fizzbuzz(3, 5).map(|fb| fb.fizzbuzz_str("Fizz", "Buzz")).for_each(|fb| println!("{}", fb));
}
Why aren't you using the safest language yet, Sup Forums?

>idiomatic
This doesn't mean "Idiotic."

Fewer segfaults and less memory corruption. Just use them; even Stroustrup says you should

>Stroustrup says you should
Go away Bjarne, nobody likes you.

I ask because I was told they have significantly higher overhead.

cc-mode probably has something for that

What, is it too safe for you?

Why can't Sup Forums be good at trolling?

int foo()
{
    Widget *bar = new Widget(); // any heap allocation
    throw 10;
    delete bar; // never happens
}

Alright, but then you'd say that "What if I just make a goto in my function with a status code for error, and then do a destruction there." In which case you're a degenerate that needs to be purged

>using exceptions
you need to be purged

Because Sup Forums left when the try-too-hard-to-ever-be-funny polsters flooded the board.

>exceptions
AHAHAHAHAHAHHAHA
What a stupid fag.

not true, unique is basically free, everything is done at compile time, you only pay for the inner "delete" operation I guess; shared_ptr is a reference counter, so yea, there's some overhead, you probably shouldn't care tho

unique_ptr has basically zero overhead. shared_ptr and weak_ptr have some overhead due to refcounts, but typically not that much compared to the cache miss that you already get from dereferencing a pointer.

The key thing to be careful of with shared_ptr is the destructors. In multithreaded code, threads can race towards a zero refcount, which can trigger an expensive destructor call for the held object at seemingly random times.

Just use the Boehm GC if you're writing multi-threaded code imho.

Even if you don't use exceptions, it doesn't mean that the functions you're calling don't use them

Besides, it's a great way to unwind the stack to relay information with zero overhead
Hell, even if you can guarantee that your code is exception-free, for some other reason you might want to terminate a function call quickly, without going through the rest of the function

Here's my O(n) implementation

> go through list once
> every time you find a pair of values, add them to a new list
> set list to new list

fucks up the order if you have an unsorted array tho, idk if thats important to you

Please don't steal my unique_ptrs. I hate it when newbie programmers hand me null pointers.

...at least that doesn't happen in Rust.

That's not O(n) at all

>> Exceptions.
>> Zero overhead.
Choose one. Error types are much, much faster.

>if i in temp
That's not an O(1) operation.

constexpr shouldn't restrict what you can do in the function. It should purely be a programmer assertion that the function is pure with undefined behaviour if they're wrong. constexpr functions should be able to use side effecting code internally.

my bad, im a retard it turns out

ill pretend its because its 4 in the morning

I'm a huge newb at this and didn't know what a hashmap is. After some reading, it appears the line count = collections.Counter(l) already does this: it iterates over the list l and produces a hashmap with each unique element as a key and its count as the value. It looks like it will take O(n) time. I've simplified my code, but the second for loop is O(n), so my overall function is still O(n^2), correct? Also, looking at my previous post, that last chunk of code was actually O(n^4), right? Formation of the hashmap, 2 for loops, and an in statement.

>>> l = [1,1,2,2,3,3,3,4,4,5]
>>> def test(l):
        count = collections.Counter(l)  # produces a hashmap of element:count
        print(count)
        for i, j in count.items():      # iterate over items in the hashmap
            if j % 2 != 0:              # if a count is odd
                l.remove(i)             # remove one instance of that value from the list
        print(l)


>>> test(l)
Counter({3: 3, 1: 2, 2: 2, 4: 2, 5: 1})
[1, 1, 2, 2, 3, 3, 4, 4]


Are there any more changes I can make to reach O(n) for the function? I believe I've followed your instructions to the best of my ability but I'm new at this.
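For reference, one way to get the whole thing to O(n) is to drop l.remove() entirely (each call is itself an O(n) scan) and build a new list in a second pass instead. A rough sketch, assuming order should be preserved and exactly one copy of each odd-count value should go; the function name is made up:

```python
from collections import Counter

def drop_one_of_each_odd(l):
    counts = Counter(l)      # O(n): hashmap of value -> occurrence count
    seen = Counter()
    result = []
    for x in l:              # O(n): each lookup/increment is O(1) average
        seen[x] += 1
        # For values with an odd count, skip their final occurrence.
        if counts[x] % 2 != 0 and seen[x] == counts[x]:
            continue
        result.append(x)
    return result

print(drop_one_of_each_odd([1, 1, 2, 2, 3, 3, 3, 4, 4, 5]))
# [1, 1, 2, 2, 3, 3, 4, 4]
```

Two sequential O(n) passes are still O(n) overall; it was the O(n) remove inside the loop that pushed the original to O(n^2).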

DWARF exception handling has zero cost besides binary size unless an exception is thrown (which shouldn't happen normally). SJLJ exception handling is somewhat between tagged unions and DWARF exceptions because longjmp unwinding is easy but you need a setjmp for every "try".

That being said, throwing an exception isn't much better than just terminating the thread/process because you get that anyway if a destructor throws during unwinding, and all unwinding does is free resources that the OS will reclaim anyway. Tracing is the job of a debugger.

Exceptions are not designed for errors that are part of ordinary control flow like failing to find a file. Unfortunately they're still used for this because most languages lack a better method of error propagation (like a monad or Rust's ? operator).
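The error-as-value style that Rust's ? operator automates can be roughly illustrated in Python, which normally uses exceptions for this; all names below are made up for the example:

```python
def parse_int(s):
    # Return (value, None) on success or (None, error) on failure,
    # instead of raising an exception for an ordinary failure case.
    try:
        return int(s), None
    except ValueError as e:
        return None, str(e)

def sum_inputs(strings):
    total = 0
    for s in strings:
        value, err = parse_int(s)
        if err is not None:   # manual early return: what `?` does for you
            return None, err
        total += value
    return total, None

print(sum_inputs(["1", "2", "3"]))  # (6, None)
```

The downside is visible too: every call site needs the explicit check-and-propagate step that ? (or a monadic bind) would otherwise hide.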

thanks, i'll take a look at this.

In C++, how do you assert in the constructor of a class?
I have a class with private members and no default constructor.
An MWE setup would be like this:
I have a loader that takes a filename, and a checker that uses the file.
I can't edit the loader or the checker; they are in a different library, but they just give me a segfault, and I would prefer a better error message, preferably an assert.
myclass::myclass(std::string filename) :
    loader(filename),
    checker(loader)
{
    // already too late
}

Do you make a checker class and initialize it before the checker in order to throw an exception?
Checking the values before creating the class is a bad option, as I want to contain it to the class.

I was wrong tho, the built-in Python commands I used aren't O(1) so my algorithm isn't O(N)

If you're incredulous that exception handling can be zero cost, it's because DWARF exceptions work by using information embedded in the executable to look up which exception handling routine to jump to based on the address in the program counter at the time the exception is thrown. It takes a bit of space to store this information but nothing actually happens at runtime if no exceptions are thrown.

Not 100% sure but Counter is based on python's dict implementation so it probably should be O(1) for operations on elements and O(N) for construction.

Just throw an exception in the constructor. It's impossible to use an object if its constructor fails so you don't need to worry about leaving the object in an invalid state.

it looks like if you use in with a dictionary/hashmap, the in operator becomes O(1) whereas it's O(n) for a list.

K I made this shitty implementation that only works if your list is already sorted.
This runs in O(n) for sure, but I'm guessing you want it to work for unsorted too.

b-but exceptions are faster if you never use them!

Learn the difference between using exception handling and throwing an exception at run time.

Yeah, good idea.

This is probably your best bet then.

Implement this algorithm but with a dictionary

>Just throw an exception in the constructor
Shut the fuck up, stupid faggot.
Never
Ever pass unsanitized arguments to a function, and especially a ctor

>0xFEEEFEEE
"Fifi", in French, pronounced the same way you'd say "feee feee" in English, is basically an equivalent of "fag".

And this is one of the many reasons why C++ is fucking garbage.

youtube.com/watch?v=I8LbkfSSR58&list=PLbgaMIhjbmEnaH_LTkxLI7FMa2HsnawM_&index=1

this series is pretty good though I'm still on the fence about drinking the category theory kool-aid

>there are "programmers" in this thread who don't understand the difference between an interface and an abstract class
>they can't fathom the need to distinguish between the two

t. pajeet

>using classes at all

>t. pajeet
At least you admit it.

Take your POO elsewhere, pajeet.