/dpt/ - Daily Programming Thread

What are you working on, Sup Forums?
Old thread:

Other urls found in this thread:

play.rust-lang.org/?gist=96e0887b2648a32d4f53c11c55b44bba&version=stable
doc.rust-lang.org/src/alloc/boxed.rs.html#399-429
queue.acm.org/detail.cfm?id=1531242
web.stanford.edu/class/cs140e/
reddit.com/r/dailyprogrammer/comments/7qn07r/20180115_challenge_347_easy_how_long_has_the/
en.wikipedia.org/wiki/Interval_tree
github.com/rust-lang/cargo/blob/master/src/cargo/core/resolver/mod.rs
twitter.com/SFWRedditImages

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define OPTARGS 2
#define ZERO 0
#define ONE 1
#define BINLEN 8
#define BEGIN main(int argc, char *argv[])
#define N int
#define CC const char
#define L long
#define ST size_t
#define R return
#define EF else if
#define I if
#define E else
#define F for
#define SO sizeof
#define PF printf
N BEGIN
{
N i;
CC **binaries;
L c;
ST len;
I(argc

>picrel
lemme get that started
>"lisp da best, haskal sux"
>"no u tarded, haskal best, lisp da sux"

Second for C#

please dont do this

nothing like leaking memory

Daily reminder you can start using non-lexical lifetimes right now, it's great:
`#![feature(nll)]`

if let Foo { ref bar } = *foo {
*foo = make_foo(bar);
}
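
For anyone wondering what actually changes: under the old AST borrow checker a borrow lived until the end of its lexical scope, while under nll it dies at its last use. A minimal sketch (nothing project-specific assumed; this is accepted with nll but rejected by the old checker):

```rust
// The borrow `x` ends at its last use in println!, so
// mutating `v` afterwards is allowed under NLL. The old
// lexical checker kept `x` alive to the end of scope and
// rejected the push with a borrow error.
fn demo() -> Vec<i32> {
    let mut v = vec![1, 2, 3];
    let x = &v[0];
    println!("first = {}", x); // last use of the borrow
    v.push(4);                 // fine under NLL
    v
}

fn main() {
    assert_eq!(demo(), vec![1, 2, 3, 4]);
}
```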

the borrow checker will be trash forever, this is merely a band aid

tbaa is an abomination.

Brainlet here. How the fuck do I compile in Emacs?

You are like a little baby
typedef char C;typedef long I;
typedef struct a{I t,r,d[3],p[2];}*A;
#define P printf
#define R return
#define V1(f) A f(w)A w;
#define V2(f) A f(a,w)A a,w;
#define DO(n,x) {I i=0,_n=(n);for(;i<_n;++i){x;}}
I *ma(n){R(I*)malloc(n*4);}mv(d,s,n)I *d,*s;{DO(n,d[i]=s[i]);}
tr(r,d)I *d;{I z=1;DO(r,z=z*d[i]);R z;}
A ga(t,r,d)I *d;{A z=(A)ma(5+tr(r,d));z->t=t,z->r=r,mv(z->d,d,r);
R z;}
V1(iota){I n=*w->p;A z=ga(0,1,&n);DO(n,z->p[i]=i);R z;}
V2(plus){I r=w->r,*d=w->d,n=tr(r,d);A z=ga(0,r,d);
DO(n,z->p[i]=a->p[i]+w->p[i]);R z;}
V2(from){I r=w->r-1,*d=w->d+1,n=tr(r,d);
A z=ga(w->t,r,d);mv(z->p,w->p+(n**a->p),n);R z;}
V1(box){A z=ga(1,0,0);*z->p=(I)w;R z;}
V2(cat){I an=tr(a->r,a->d),wn=tr(w->r,w->d),n=an+wn;
A z=ga(w->t,1,&n);mv(z->p,a->p,an);mv(z->p+an,w->p,wn);R z;}
V2(find){}
V2(rsh){I r=a->r?*a->d:1,n=tr(r,a->p),wn=tr(w->r,w->d);
A z=ga(w->t,r,a->p);mv(z->p,w->p,wn=n>wn?wn:n);
if(n-=wn)mv(z->p+wn,z->p,n);R z;}
V1(sha){A z=ga(0,1,&w->r);mv(z->p,w->d,w->r);R z;}
V1(id){R w;}V1(size){A z=ga(0,0,0);*z->p=w->r?*w->d:1;R z;}
pi(i){P("%d ",i);}nl(){P("\n");}
pr(w)A w;{I r=w->r,*d=w->d,n=tr(r,d);DO(r,pi(d[i]));nl();
if(w->t)DO(n,P("< ");pr(w->p[i]))else DO(n,pi(w->p[i]));nl();}

C vt[]="+{~<#,";

It wasn't that bad before, and it's almost unnoticeable now with nll.

When I can transiently move borrowed objects I'll start giving a shit.

defined C is the best eso-lang.

is this just a meme or for what purpose do people unironically obfuscate their code like this?

That is not a meme. It's how array programmers write C.

>transiently move borrowed objects
What do you mean? How would you move an object with a reference to it? The reference would become dangling, and that's the entire point - to make dangling references impossible.
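To be fair, you *can* get a value out through a `&mut` without anything dangling, as long as you put a valid value back in the hole. A minimal sketch using `mem::replace` (the helper name `take_out` is made up):

```rust
use std::mem;

// Made-up helper: moves the String out through the &mut by
// leaving a fresh, valid String behind, so the owner never
// sees a moved-from hole and no reference dangles.
fn take_out(slot: &mut String) -> String {
    mem::replace(slot, String::new())
}

fn main() {
    let mut name = String::from("ferris");
    let taken = take_out(&mut name);
    assert_eq!(taken, "ferris");
    assert_eq!(name, ""); // owner's value is still valid, just empty
}
```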

>#define ZERO 0
Are you expecting to change the value of zero at some point in the future?

>not expecting that NULL can be something else than 0
>not expecting that false can be something else than 0
>not redefining int32_t as your_prefix_int32_t in case int32_t is not really 32 bits wide

typescript will save systems programming

>that NULL can be something else than 0
null isn't supposed to be 0 or anything else.
Null is quite literally supposed to be "Nothing".
But imperative langs are stuck in the past and don't even have proper Maybe types.
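For reference, a minimal sketch of the Maybe shape as Rust's `Option`: absence is an enum variant the compiler forces you to handle, not a magic pointer value.

```rust
// "Nothing" is the None variant, not a null pointer; callers
// must pattern-match (or use Some-aware combinators) before
// they can touch the char, so no null dereference exists.
fn head(s: &str) -> Option<char> {
    s.chars().next()
}

fn main() {
    assert_eq!(head("abc"), Some('a'));
    assert_eq!(head(""), None);
}
```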

Beginner programmer here, I'm a sysadmin and I'd like to learn python.
My project is to create a small program that logs my gmail and other email accounts one by one and displays the subject and the sender to me sorted by the addresses.
Is this possible on my level or should I start some easier project? Where and how should I start? Thanks Sup Forumsuys

Suppose I had a binary tree in C.
struct tree
{
int data;
struct tree *left;
struct tree *right;
};
How would I write the following operation in Rust?
void insert(int new_data, struct tree *tree)
{
struct tree *old_tree = malloc(sizeof *old_Tree);
// copy existing tree into new struct
*old_tree = tree;
// overwrite old tree with data for new tree
tree->data = new_data;
tree->left = 0;
tree->right = old_tree;
}

With a lot of pain.

Sure, just use the imap module to grab the messages. It's very easy.

Shut your whore mouth.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define OPTARGS 2
#define ZERO 0
#define ONE 1
#define BINLEN 8
#define BEGIN main(int argc, char *argv[])
#define N int
#define CC const char
#define L long
#define ST size_t
#define R return
#define EF else if
#define I if
#define E else
#define F for
#define SO sizeof
#define PF printf
N BEGIN
{
N i;
char **binaries;
L c;
ST len;
I(argc

>*old_tree = tree;
C-tards can't even write valid C. You can't assign structs outright in your primitive language, m8.

data Tree a = Leaf | Node a (Tree a) (Tree a)
insert :: a -> Tree a -> Tree a
insert x tree = Node x Leaf tree

Not an equivalent, doesn't mutate the value, enjoy your 1Gb/s garbage.

ADTs really are wonderful.

You absolutely can.

>doesn't mutate the value
You still get the correct value through immutable replacing. And no, i don't use hasklet.

You're not very good at this, are you?
$ cat main.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define OPTARGS 2
#define ZERO 0
#define ONE 1
#define BINLEN 8
#define BEGIN main(int argc, char *argv[])
#define N int
#define CC const char
#define L long
#define ST size_t
#define R return
#define EF else if
#define I if
#define E else
#define F for
#define SO sizeof
#define PF printf
N BEGIN
{
N i;
char **binaries;
L c;
ST len;
I(argc

$ valgrind ./btt 01110101
==18702== Memcheck, a memory error detector
==18702== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==18702== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
==18702== Command: ./btt 01110101
==18702==
==18702== Invalid read of size 8
==18702== at 0x108928: main (in /home/blackcat/bin/btt)
==18702== Address 0x51f0040 is 0 bytes inside a block of size 8 free'd
==18702== at 0x4C2E10B: free (vg_replace_malloc.c:530)
==18702== by 0x108923: main (in /home/blackcat/bin/btt)
==18702== Block was alloc'd at
==18702== at 0x4C2CEDF: malloc (vg_replace_malloc.c:299)
==18702== by 0x108810: main (in /home/blackcat/bin/btt)
==18702==
==18702==
==18702== HEAP SUMMARY:
==18702== in use at exit: 0 bytes in 0 blocks
==18702== total heap usage: 3 allocs, 3 frees, 1,041 bytes allocated
==18702==
==18702== All heap blocks were freed -- no leaks are possible
==18702==
==18702== For counts of detected and suppressed errors, rerun with: -v
==18702== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)


I have no idea what your whore mouth is spouting now.

>in case int32_t is not really 32 bit wide
What? Isn't it required by the standard to be exactly 32 bits wide?

Struct assignment works, I just forgot to dereference tree.

>this is your brain on C

It's even generic: play.rust-lang.org/?gist=96e0887b2648a32d4f53c11c55b44bba&version=stable
use std::mem;

struct Tree<T> {
    data: T,
    left: Option<Box<Tree<T>>>,
    right: Option<Box<Tree<T>>>,
}

impl<T> Tree<T> {
    pub fn insert(&mut self, data: T) {
        let mut node = Box::new(Tree {
            data,
            left: None,
            right: None,
        });
        mem::swap(self, &mut *node);
        self.right = Some(node);
    }
}

Statistician/econometrician here. I want to leave academia and am eyeing data science jobs. I have no issues with the machine learning, model, general statistics requirements listed for most jobs. I am fluent in R and matlab for applied data analysis, know some Python due to having dabbled in pandas, scikit-learn and written a web scraper to get data. I also had database class and know the basics of SQL. Long-time linux user so confident using the cli.

However, I have no formal computer science training. Many jobs require more elaborate programming skills than I currently possess. No real knowledge about idea about proper abstraction, good practices, object oriented programming, algorithms (the list probably goes on). This is less about languages and more about general principles, but fwiw most jobs require python at a level that I probably don't have. It's unlikely that I need to know or use C. Maybe Java, although I doubt it.

What are the most glaring holes and the best ways/resources to fill them in self-study?

You're right, I was mistaken.

>still not freeing each individual strndup'd char* before freeing the array

Rust really needs to drop s

So I can move borrowed values through references, I just need to use the built in swap function to uphold the never-moved-from invariant. Interesting.

Post feet.

lol no, it can be wider

You're wrong. It's int_fast32_t and int_least32_t that can be wider. int32_t must be exactly 32 bits wide, which is why the standard makes it optional rather than mandatory.

1. It's not a move, it's a swap, i.e. some valid value moves in as the current value moves out. You can't simply move a value out via a mutable reference, because you don't own it and the owner could try to access it later.
2. It has to be a mutable reference (obviously), which is guaranteed to be the only one, so you can mutate the value freely. Swapping two values is just a form of mutation.
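
A minimal sketch of point 2: swapping through two `&mut`s is plain mutation, since both slots hold a valid `String` at every point.

```rust
use std::mem;

fn main() {
    let mut a = String::from("left");
    let mut b = String::from("right");
    // Nothing is ever moved out and left uninitialized; the two
    // exclusive borrows guarantee no one else can observe the
    // intermediate state, so this is ordinary mutation.
    mem::swap(&mut a, &mut b);
    assert_eq!(a, "right");
    assert_eq!(b, "left");
}
```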

This code also won't compile (not even in C), because malloc(sizeof *old_Tree) has a typo. Where does old_Tree come from? You probably meant malloc(sizeof *tree).

Also, if you don't remember to write a function to destroy all the trees, you are leaking memory. At the end of your program you'd have to call cleanup_my_tree(), otherwise you just malloc and never free. Rust protects against that with RAII (the Box type) and destructors (like in C++).

That said, here is how you'd write it:

#[derive(Clone)]
struct Tree {
    data: i32,
    left: Option<Box<Tree>>,
    right: Option<Box<Tree>>,
}

impl Tree {
    fn insert(&mut self, new_data: i32) {
        let old_tree = Box::new(self.clone());
        self.data = new_data;
        self.left = None;
        self.right = Some(old_tree);
    }
}

No leaks now bitch
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define OPTARGS 2
#define ZERO 0
#define ONE 1
#define BINLEN 8
#define BEGIN main(int argc, char *argv[])
#define N int
#define CC char
#define L long
#define ST size_t
#define R return
#define EF else if
#define I if
#define E else
#define F for
#define SO sizeof
#define PF printf
N BEGIN
{
N i;
CC **binaries;
L c;
ST len;
I(argc

>self.clone()
inefficient.

Anyone with a job knows this

// copy existing tree into new struct


It's the same thing, your C code is just as inefficient.

Did a Markov generator write this

Is clone() not a deep copy?

This is one horrible solution: cloning a Box is a deep copy: doc.rust-lang.org/src/alloc/boxed.rs.html#399-429 , you wouldn't want to copy your entire tree only to add a new root node.
No, the C code copies only the root node.
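
A quick check that a `Box` clone recurses (toy `Node` type, names made up):

```rust
// Cloning a Box clones what it points to, recursively:
// #[derive(Clone)] on Node derives a deep copy of the chain.
#[derive(Clone)]
struct Node {
    v: i32,
    next: Option<Box<Node>>,
}

fn main() {
    let a = Node { v: 1, next: Some(Box::new(Node { v: 2, next: None })) };
    let mut b = a.clone();
    // Mutating the clone's tail leaves the original untouched,
    // which couldn't happen if the Box were shallow-copied.
    b.next.as_mut().unwrap().v = 99;
    assert_eq!(a.next.as_ref().unwrap().v, 2);
    assert_eq!(b.next.as_ref().unwrap().v, 99);
}
```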

In my experience with people taking the role of data scientists:
* Not knowing the difference between median and average. (yes, as in i explained to him and he told me he understood now)
* Wasting 10 minutes of my time explaining how to calculate a percentage in their fucking presentation of this week's work to me.
* Completely missing the scope of what we're doing and presenting an interface that amounts to doing single boolean expressions where I was considering doing multivariate analysis.
* Not getting fired because higher ups can't get it in their head that them leaving the company saves me time and is a net gain. Even if we don't want to spend the time looking for new people.

I don't know how to help you convince recruiters though to me it seems they'd accept just about anyone. From my perspective I'd literally want highschoolers over the three stooges I have now. Not even joking. I'm sure there's at least a couple kids in every programming class that could help me more than these three.
They'd probably bring me coffee too.
Look into learning how to kiss ass.

is C++ worth learning anymore?

yes, but only if you know a few other languages

It certainly isn't going away.

Shit, don't add to my disillusion. I have a PhD in stats and several years experience in practical data analysis. I'm just fed up with academia. I just want to fill the most relevant holes from a standard CS curriculum.

Then go through a standard CS curriculum by topic and see what you're missing. Learn that.
It's not gonna be hard.

CS self-study can be pretty difficult. I would just check out r/learnprogramming. The only recommendation I can really make is make sure you have really solid fundamentals before you attempt a complicated project. Also only stick with one good general-purpose language like C#, C++, or Java.

Write tons of programs demonstrating your knowledge of each concept in your language. That's what CS students do. I don't think Python is great for learning after getting ahold of the basics, because you need to learn OOP to get a job, and Python OOP is shit. Also don't waste your time with C or C++ if you won't need to know about memory management. After you learn to program in another language, using Python for your job will be a breeze IMO. Also once you can program I would read Python For Data Science. Sounds up your alley. Good luck user

That is the first draft of the J interpreter written by Arthur Whitney in an afternoon. He is also known for creating kdb, the fastest time-series database known to man.
Here's Whitney interviewed by Bryan Cantrill queue.acm.org/detail.cfm?id=1531242

>What are you working on, Sup Forums?
my raspi3 arrived today.
Working on this now: web.stanford.edu/class/cs140e/

You're overestimating the skills you need. If you have a graduate degree, you'll probably write prototypes in R and Pajeet in Bangalore will hardcode your stuff or shit out a shinyapp for it or something. The problem in data science is that every CSfag thinks that because they can compute means and maybe standard deviations, they're fucking mathematicians. There is a serious lack of solid statistical knowledge in the industry.

That said, it wouldn't hurt to learn Scala.

t. Fellow datashit

Disclaimer: I know nothing about data science

Currently looking at the Stanford CS curriculum. The issue is not difficulty but time being a limited resource.
Thanks, I also had the impression that learning OOP is something of a priority.
Thanks, that's somewhat reassuring. I'm actually pretty keen on learning more things (just want to get out of the circus that is academia). I presume you mention Scala because of Spark?

General question, as no idea where to ask:

Got any ideas how to swine myself into the programming field? I am learning C/C++ currently, but obviously have no experience.

Would a portfolio of programs etc. help? Or what would be the best angle of attack ?

>how to swine myself into the programming field
well, there's plenty of mud and poo in it, so it shouldn't be too hard

Waiting for Rpi3, Botnet voice AIY kit and NodeMCU to get shipped :(
Have fun user, I'll check out the link thanks.

Seriously though, any tips?

No one here has a job. Read the threads on r/cscareerquestions (hint: use the search bar) and google better. Everyone wants to know how to get into programming. It's the only thing booming right now

I had a test interview for a junior java web dev and they asked questions like how HashMap works under the hood (namely memory management), what initializes first when you initialize an object, why we need both hashCode and equals and how they work together.

Are these questions common in interviews for junior Java web developers?

It's the sort of thing I would expect, yeah.

I think I have a problem, anons. Whenever I'm given a problem, I invariably start by inventing a data structure for its core. Every other programmer thinks about reaching the solution, but I have to invent a universe for the problem first.

For example, this (reddit.com/r/dailyprogrammer/comments/7qn07r/20180115_challenge_347_easy_how_long_has_the/) makes me think about making a structure (for example, `Period`) that understands overlapping and discrete/continuous distance.

What's my cure?

This more direct version obviously does a lot more duplicate work ([leafCnt*treeHeight vs folderCnt] unions) compared to the previous trickle-down-economics approach,
but that's what you often get with literal problem-statement translations
1. Data types
case class Folder(id: Int, isShared: Boolean, var cows: Set[Int]) {
var parent = Option.empty[Folder]
var kids = collection.mutable.Set.empty[Folder]

def allCows: Set[Int] = (parent, isShared) match {
case (Some(p), true) => cows ++ p.allCows
case _ => cows
}
}

2. Converting the input
def readInput(lines: Iterator[String]) = {
def nextIntsLine = lines.next.split(' ').map{_.toInt}
def getFolders(n: Int, isShared: Boolean) =
for(_

I honestly felt like a retard even though I can write code just fine. Was really disappointed in myself. Is there a book on Java that covers the things how Java works under the hood?

en.wikipedia.org/wiki/Interval_tree

Anyone here well versed in NetLogo? I'm desperate enough to pay for Skype sessions and help with fixing my simulation.

>tfw spend 2 hours inventing interval tree

that just makes you clever

A C programmer who doesn't know any Java would be able to describe in general terms how java.util.HashMap is implemented without ever having written a line of Java.
There are some specifically Java things there like object initialization and the Comparable interface but the rest is bog standard CS stuff.
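
The "bog standard CS stuff" in question, sketched as a toy Rust map: hash the key, mod by the bucket count, then scan that bucket with equality (the job the hashCode/equals pair does in Java). `ToyMap` is made up for illustration, not how any real `HashMap` is implemented:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy separate-chaining hash map: a fixed array of buckets,
// each bucket a Vec of (key, value) pairs.
struct ToyMap<K, V> {
    buckets: Vec<Vec<(K, V)>>,
    len: usize,
}

impl<K: Hash + Eq, V> ToyMap<K, V> {
    fn new() -> Self {
        ToyMap { buckets: (0..16).map(|_| Vec::new()).collect(), len: 0 }
    }

    // hashCode's role: pick a bucket index from the key's hash.
    fn index(&self, key: &K) -> usize {
        let mut h = DefaultHasher::new();
        key.hash(&mut h);
        (h.finish() as usize) % self.buckets.len()
    }

    fn insert(&mut self, key: K, value: V) {
        // A real table would also resize once len exceeds
        // capacity * load factor (0.75 in java.util.HashMap).
        let i = self.index(&key);
        let bucket = &mut self.buckets[i];
        // equals' role: distinguish colliding keys in a bucket.
        if let Some(slot) = bucket.iter_mut().find(|(k, _)| *k == key) {
            slot.1 = value;
        } else {
            bucket.push((key, value));
            self.len += 1;
        }
    }

    fn get(&self, key: &K) -> Option<&V> {
        self.buckets[self.index(key)]
            .iter()
            .find(|(k, _)| k == key)
            .map(|(_, v)| v)
    }
}

fn main() {
    let mut m = ToyMap::new();
    m.insert("a", 1);
    m.insert("b", 2);
    m.insert("a", 3); // same key: overwrite, not duplicate
    assert_eq!(m.get(&"a"), Some(&3));
    assert_eq!(m.get(&"b"), Some(&2));
    assert_eq!(m.get(&"z"), None);
    assert_eq!(m.len, 2);
}
```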

What's even the point of asking about memory management in Java?
Isn't the point that you avoid all that by blood sacrifice to the garbage collector?
I know it doesn't work satisfactorily for me, but are you seriously taking on double the overhead here?

Scala looks so ugly omggg

Fuck. Thanks for that though.

You haven't seen Rust yet.
use std::cmp;

struct Period {
    ti: u32,
    tf: u32
}

impl Period {
    fn new(i: u32, f: u32) -> Period {
        Period { ti: i, tf: f }
    }

    fn midpoint(&self) -> f32 {
        (self.ti as f32 + self.tf as f32) / 2f32
    }

    pub fn interval(&self) -> u32 {
        self.tf - self.ti
    }

    pub fn overlaps(&self, px: &Period) -> bool {
        (self.midpoint() - px.midpoint()).abs() < (self.interval() + px.interval()) as f32 / 2f32
    }

    pub fn merge(&self, px: &Period) -> Period {
        Period::new(cmp::min(self.ti, px.ti), cmp::max(self.tf, px.tf))
    }
}

I'm not sure specifically what the poster was referring to with memory management, so I'll await clarification.

Doesn't look really bad

>A C programmer who doesn't know any Java would be able to describe in general terms how java.util.HashMap is implemented without ever having written a line of Java.
No, that's not it. Even a dumbass knows that a Map is a key-value store; what they wanted to know was how memory gets allocated, how the lookup works, the value of loadFactor and why it's 0.75, about buckets and shit.

>how the lookup works, about the value of loadFactor and why it's 0.75, about buckets and shit.
These questions all apply to the abstract data structure. You could ask those same questions about std::unordered_map or System.Collections.Generic.Dictionary and give the same answers.

I'm still not sure what you mean regarding memory allocation though, that would be more specifically Java I guess.

how does an interval tree relate to

it was in one of the things linked in the thing you linked?

It wasn't him

>that
>bad
That's relatively sane compared to what rust has to offer.

let mut resolve = Resolve {
graph: cx.graph(),
empty_features: HashSet::new(),
checksums: HashMap::new(),
metadata: BTreeMap::new(),
replacements: cx.resolve_replacements(),
features: cx.resolve_features.iter().map(|(k, v)| {
(k.clone(), v.clone())
}).collect(),
unused_patches: Vec::new(),
};


self.sources.get_mut(source_id).unwrap().update()
})().chain_err(|| format_err!("Unable to update {}", source_id))?;
Ok(())

>him
first of all, HOW DARE YOU

>not being an user(male)

That's probably a bad data structure you are using, user.
But I wouldn't know, this is my second day on Rust.

how do you manually delete something in java?

those two are literally from the cargo source.
github.com/rust-lang/cargo/blob/master/src/cargo/core/resolver/mod.rs

kek & sounds about right

sure, you can make Qt surfaces in it and your own, better languages