/dpt/ - Daily Programming Thread:

What are you working on, Sup Forums?
Previous thread:

Why are most scripting languages dynamically typed?
Pure laziness, or are their creators mentally disabled?

Looking through the boring parts of my language

>Looking through the boring parts of my language
tell us more

I have a personal project that I've been working on for a few years. I'd now like to release it on GitHub

... except the name of the project, the comments, the filenames, the commit messages, are not just politically incorrect for special snowflakes but possibly career-ending.

Is there any way I can automatically find and manually replace all offensive contents in all the 1100 commits? Natural language processing? Dank recurrent neural networks magic?

make them cuter
also post an example of your language

>Go to university
>graduate with CS degree with emphasis on software engineering
>get job as junior dev for company
>learn shit ton more about software engineering than I ever did in school

College is a fucking scam.

Could Sup Forums show me an example of recursively making users like user1, user2, etc. in a bash script where the starting number is inputted by the user?

except employers know that you are, at least, able to hold a 3 year commitment, more or less

now, US college, yes that's a scam, since other first world countries' tuition is dozens of times less expensive

this sounds like the shittiest assignment ever

More voronoi tinkering.

It is

or, in some cases, totally free

>hur, not free cuz u pay them taxes
you pay taxes AND a monthly tuition fee, we only pay taxes so fuck off

also, what you learn in college is basically what you put in. if you only study for exams, yea, you won't learn shit, but if you spend your time going after something of your own interest, you will learn.

Then the exams need to cover more stuff for the degree to be worth anything

you really screwed yourself on that one, huh?

>not posting an anime image just to be a rebel and edgy
act like an adult OP

How to improve this Sup Forums

Here, have an anime image. Purely symbolic, of course.

what are you trying to do?

use another language

Plot a sine

>wanting OP to post an anime image just to be a weeb, degenerate and pedo
act like an adult user

nvm I just screwed up the resolution
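For reference, a quick dependency-free way to eyeball a sine and its resolution is to plot it as text. This is my own sketch, not the anon's code; all names here are made up:

```python
import math

def ascii_sine(width=60, height=15, cycles=2):
    """Render sin(x) over `cycles` periods as rows of characters."""
    rows = [[" "] * width for _ in range(height)]
    for col in range(width):
        # map the column index onto the x range; resolution bugs usually live here
        x = cycles * 2 * math.pi * col / (width - 1)
        # map sin(x) from [-1, 1] to a row index in [0, height - 1]
        row = round((1 - math.sin(x)) / 2 * (height - 1))
        rows[row][col] = "*"
    return "\n".join("".join(r) for r in rows)

print(ascii_sine())
```

Mis-mapping the sample index to the x range (the `x = ...` line) is the usual cause of a "screwed up resolution" plot.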

>falling for the degeneracy meme
learn to think for yourself user

Anyone know where I can get some basic tutorials on I/O Kit driver development? I don't want to go too in-depth like Apple's reference since I'll just be porting a small Linux driver to macOS.

It's true, though. I proved it through eternal self-torture by watching garbage animuh and came to the conclusion that it was made for reality denying aloma snackbars and closet traps like you.

>closet
but i am not in the closet :3

>self-torture
so you watched it? and you wanna say you hate it, silly self hating weeb :P

>so you watched it? and you wanna say you hate it, silly self hating weeb :P
Purely for science, of course.

If I wanted to get my company onto Lisp what would be the appropriate dialect?
They all seem kinda clunky and unpolished

mov rdi, [mind]
call blow

Clojure or Common Lisp

>They all seem kinda clunky and unpolished
They are. After all, they don't bother with the fine details of syntax or semantics.

that said common lisp is probably the best choice of all miserable choices

Clojure, the rest are in maintenance mode.

>mfw WinAPI

Should I use C, C++ or java for desktop programs on linux? Or something else?

Qt

Depends.

I just want to learn how to make GUIs.

We are making a web browser

Not-making, looks more like.

Still depends on how much performance you need. PyQt seems pretty popular if you don't mind the big dependencies. (T)tk isn't too uncommon; it's definitely smaller than Qt and you can use it with most common languages (I use it in small-ish Perl scripts for example, but if you need speed you can use it from C). Of course there's GTK, which may still be the most common one on Linux (though I haven't tried it; some people say it's a clusterfuck).

Can someone explain please?

Working on the other side now, but its software is a lot more convoluted. I can't just sample the input because the receiver is putting out noise all the time, so I need to make sure the timings are right or reset the data buffer.

void send(uint8_t data)
{
    const uint16_t T2 = 400; /* full-bit period in us; unused in send() */
    const uint16_t T = 200;  /* half-bit period in us */

    set(TXPIN);

    uint8_t i;

    for(i = 0x80; i; i >>= 1)
    {
        if(data & i)
        {
            clr(TXPIN);
            _wait_us(T);

            set(TXPIN);
            _wait_us(T);
        }
        else
        {
            set(TXPIN);
            _wait_us(T);

            clr(TXPIN);
            _wait_us(T);
        }
    }

    set(TXPIN);
}

atmel.com/images/atmel-9164-manchester-coding-basics_application-note.pdf

And the decoder side

I still have a lot of TODOs here

ISR(PCINT2_vect)
{
    diff = TCNT1 - timer;

    if(bits == 8)
        return;

    if(q == 1 && in_range(T2,diff))
    {
        data |= (PIND & (1 << PD0)); /* PIND & PD0 would always be 0, since PD0 == 0 */

        timer = TCNT1;
        q++;
        bits++;
    }
    else if(q == 2)
    {
        if(in_range(T,diff))
        {
            timer = TCNT1;
            q++;
        }
        else if(in_range(T2,diff))
        {
            data
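Since the decoder post is cut off, here's a host-side sketch (mine, not the anon's firmware) of the Manchester convention the send() routine uses: each bit becomes two half-periods, and the second half-period carries the level the mid-bit transition moved to.

```python
T = 200  # half-bit period in microseconds, matching the send() routine

def encode(byte):
    """Manchester-encode one byte, MSB first, as a list of line levels.
    A 1 bit is sent low-then-high, a 0 bit high-then-low (the send() convention)."""
    levels = []
    for i in range(7, -1, -1):
        bit = (byte >> i) & 1
        levels += [0, 1] if bit else [1, 0]
    return levels

def decode(levels):
    """Recover the byte: the second half-period of each bit pair holds the
    data (1 for a 1 bit, 0 for a 0 bit)."""
    assert len(levels) == 16
    byte = 0
    for k in range(8):
        byte = (byte << 1) | levels[2 * k + 1]
    return byte

# round-trip sanity check
for b in (0x00, 0x55, 0xA3, 0xFF):
    assert decode(encode(b)) == b
```

The firmware version is harder precisely because it only sees edge timings plus receiver noise, not clean levels — hence the in_range() checks and buffer resets.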

Shill when you have committed a single new line of code. [spoiler]So never post again[/spoiler]

>some people say it's a clusterfuck
I tried it once, didn't like it, and found it's a clusterfuck. I have played with Qt and it's quite easy to understand and achieve something with it, like the Swing library in Java. (I was doing Qt with C++, not Python btw)

rm -rf .git
then delete all comments and change the project name

...

Who's the retard making these threads before the bump limit and not linking it?

write a mathematical proof that programs are proofs

Finally settled on an implementation of Functional Directed Acyclic Graph that allows me to describe it with clarity. Rewrote my instruction analyzer in it and extended it to be able to determine whether a 'mov' instruction is a load, store, load immediate, store immediate, reg to reg, etc.

Here are the top twenty-five instructions used in Ubuntu 16.04's bash executable:
mov (reg) 11.13%
mov (ld) 11.08%
call 8.56%
mov (imm) 7.24%
test 6.51%
je 6.37%
cmp 5.35%
jmp 5.07%
nop 4.90%
mov (st) 4.44%
xor 3.89%
jne 3.64%
pop 3.32%
add 2.79%
mov (sti) 2.64%
push 2.58%
lea 2.42%
sub 1.47%
ret 1.35%
and 0.85%
or 0.49%
jle 0.49%
xchg 0.38%
jbe 0.27%
ja 0.23%

...

>he doesn't know what `isomorphism` means

define proof
define program

#define PROOF
#define PROGRAM
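For what it's worth, the slogan can be made precise: under the Curry-Howard correspondence, a type is a proposition and a program inhabiting that type is a proof of it. A minimal sketch in Lean (names are mine):

```lean
-- "If A implies B and A holds, then B holds."
-- The proof term is just function application: a program.
theorem modus_ponens (A B : Prop) : (A → B) → A → B :=
  fun f a => f a

-- Conjunction introduction: the proof term is a pairing constructor.
theorem and_intro (A B : Prop) : A → B → A ∧ B :=
  fun a b => ⟨a, b⟩
```

So "programs are proofs" isn't a metaphor: the typechecker accepting these terms is exactly what checking the proofs means.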

use std::collections::{LinkedList, VecDeque};
use std::iter::FromIterator;

fn is_symmetric<T, I>(it: I) -> bool
where
    T: PartialEq,
    I: IntoIterator<Item = T>,
    I::IntoIter: DoubleEndedIterator,
{
    let mut it = it.into_iter();
    while let (Some(l), Some(r)) = (it.next(), it.next_back()) {
        if l != r {
            return false;
        }
    }
    true
}

fn main() {
    let asym_odd = vec![5, -1, 1, 2, 3, 4, 5];
    let asym_even = [1, 2, 3, 4];
    let sym_even = LinkedList::from_iter(vec!['a', 'b', 'b', 'a']);
    let sym_odd = {
        let mut t = VecDeque::from_iter(vec![1.0, 2.0]);
        t.push_front(0.0);
        t.push_front(1.0);
        t.push_front(2.0);
        t
    };

    assert!(is_symmetric(&sym_even));
    assert!(is_symmetric(&sym_odd));
    assert!(!is_symmetric(&asym_even));
    assert!(!is_symmetric(&asym_odd));
}

akari is cute

dumb akariposter

Polymorphic code is aesthetic.

...

Each node in the DAG performs a function, either filtering, transforming, or statefully collecting. This allows me to describe a data processing network to run multiple analyses on a dataset at once. No need to cache or do multiple passes over the data. Each disassembly dump isn't very large, but imagine if I run this over the concatenation of several programs' dumps.

I hope this is easy to follow if you know what is going on. '>>' obviously means that the operations occur sequentially. '+' means to concatenate the data streams. Input for the DAG is fed into the input node, and the output is collected at the output node, which in this case is a Reduce node.

def make_x86_count_processor():

    # input node
    input = Input()

    # extract only assembly lines
    assembly_re = re.compile(r'^\s*[0-9a-fA-F]+:\s+([^
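The '>>' chaining reads like ordinary Python operator overloading: if the node class defines __rshift__ (and __add__ for '+'), expressions like `a >> b` build the graph. A guess at the general shape, not the anon's actual classes:

```python
class Node:
    """A pipeline stage wrapping a function over a list of items."""
    def __init__(self, fn):
        self.fn = fn

    def __rshift__(self, other):
        # a >> b: feed a's output into b, producing a combined Node
        return Node(lambda items: other.fn(self.fn(items)))

    def __add__(self, other):
        # a + b: concatenate the two nodes' output streams
        return Node(lambda items: list(self.fn(items)) + list(other.fn(items)))

    def run(self, items):
        return self.fn(items)

# tiny demo: filter even numbers, then square them
evens = Node(lambda xs: [x for x in xs if x % 2 == 0])
square = Node(lambda xs: [x * x for x in xs])
pipeline = evens >> square
print(pipeline.run([1, 2, 3, 4]))  # [4, 16]
```

So '>>' isn't built-in pipeline syntax in Python; it's the right-shift operator, repurposed via the __rshift__ hook.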

Is >> actually a thing in Python or is this just pseudocode?

#![feature(slice_patterns, advanced_slice_patterns)]

fn is_symmetric<T: PartialEq>(arr: &[T]) -> bool {
    match *arr {
        [] | [_] => true,
        [ref l, ref inner.., ref r] if l == r => is_symmetric(inner),
        _ => false,
    }
}

fn main() {
    let asym_odd = vec![5, -1, 1, 2, 3, 4, 5];
    let asym_even = [1, 2, 3, 4];
    let sym_even = vec!['a', 'b', 'b', 'a'];
    let sym_odd = vec![2.0, 1.0, 2.0];

    assert!(is_symmetric(&sym_even));
    assert!(is_symmetric(&sym_odd));
    assert!(!is_symmetric(&asym_even));
    assert!(!is_symmetric(&asym_odd));
}

Rewriting IDA Pro in Rust.

Why should I stop using OOP languages?

I want to learn Haskell for the sake of being a better programmer, but I don't really see why functional programming is any better than OOP.

Will it not cost a gorillion dollars?

How do I find where a specific static or shared library is at runtime in C/C++?

You start by removing C and installing a non shit language.

>You start by removing C
But then I can no longer use literally any OS.

Use Kotlin faggots.

>using an OS
>not using a freestanding implementation of IRB
>ishiggydiggy

no

It's not so much that object oriented languages are necessarily bad, but the design methodology of object oriented programming (or any other up front design methodology) doesn't work. If you use a language that makes it easy to write your code first and abstract later, you'll have a good time. Haskell is a language that definitely prioritizes this. Another thing - object oriented languages tend to lack useful and cutting edge features that functional languages thrive on.

>write your code first and abstract later
this is a bad strategy

No, it's not. Approach software design like a compression algorithm - only abstract when you see duplication.

API design for in development software that others are still supposed to be able to use is a different thing, so we don't go down that tangent again.

>Approach software design like a compression algorithm - only abstract when you see duplication
this is also a bad strategy
>API design for in development software that others are still supposed to be able to use is a different thing
if this is not what we are talking about then we are not talking about real programming

>API design for in development software that others are still supposed to be able to use is a different thing, so we don't go down that tangent again.
And actually, now that I think about it, the same technique works here, as long as you don't delete abstractions when they are superseded (except with major version changes or whatever).

Elaborate and/or fuck off with your No True Scotsman bullshit.

>OOP methodology doesn't work
>The functional approach is a bad strategy

Here we get into "what's better for me" arguing. Generally, I agree with the philosophy of functional programming simply because looking at things 'top-down' and so forth is less for me to think about.

Yet the argument that one language is any better than the other based on human fallacy is extremely subjective.

What reason do people have to say that functional programming 'is the future of programming'? What objective advantages does it have over a system with side effects?

What I do next
0-4 IO Interrupts
5-9 ADC

>What objective advantages does it have to a system with side effects?
Ability of the programmer to quickly reason, ability for the compiler to aggressively optimize, etc.

You can have FP with side effects, but purity or managed purity is better
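One concrete version of the "aggressively optimize" claim: a pure function's results can be cached (or reordered, or computed in parallel) without changing the observable behavior. A Python sketch of doing by hand what a compiler for a pure language can do automatically; the call counter is just instrumentation for the demo:

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def fib(n):
    """Pure: output depends only on n, so caching can never
    change the result -- only how many times the body runs."""
    global calls
    calls += 1
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040
print(calls)    # 31 calls instead of ~2.7 million
```

If fib mutated shared state or did I/O, this transformation would silently change the program's behavior, which is exactly why impure code is harder for both compilers and readers to reason about.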

>the same technique works here
it basically doesn't
if you try to do things this way when writing code cooperatively then the people you are cooperating with get mad
and if you try to do things this way when not writing code cooperatively then on one hand it will work well but on the other hand you are then not really writing code

*managed effects

>if you try to do things this way when writing code cooperatively then the people you are cooperating with get mad

>No True Scotsman
but it's true though, no real programmer programs alone, whether for others or not
if you do this then you are a giant child playing with toys
only teams actually do adult work and accomplish things, and teams have to agree on everything constantly

>bucket-of-crabs.jpg

I don't know about you but I don't consider code monkeying around with retarded CS grads "real programming".

Ability of the programmer to quickly forget that this variable is also modified in another file, and introduce bugs
Ability for the compiler to deal with references everywhere instead of just caching results.

you can call it whatever you want but it's literally the only "real programming" there is, and if it's not the kind of programming you do, you don't know what sacrifice is
you get to EITHER be a unique and intelligent individual and do everything the clever and perfect way OR create and improve on things that actually meaningfully help people and are respectable achievements
there's no overlap

I should stop teaching people about this software design methodology because not enough people know about it already? Makes sense retard.

what ARE proof and program though?
null?

Isn't one of the defining traits of a scripting language to be lazy?

Is the only correct answer. Also consider not being a retard next time lol

>not enough people know about it already
That's not the issue. The issue is that the design methodology you preach only works well for personal projects and resources for public use, and when you're cooperating with a tightly knit team--which is the only scenario where you actually deserve to call yourself a programmer--it just pisses everyone off. In other words, the issue isn't that your design methodology isn't well known enough to work on a cooperative scale, the issue is that it doesn't work on a cooperative scale whether people know what they're doing or not.

You really think you can make a perfect structure and then code without changing it?

The cycle "implement feature -> refactor" is what works best. No over-engineered architecture, no conception of modules that will never be implemented.

Godbless image.

Could someone explain what this is?

JavaFX or C++ Qt

>You really think you can make a perfect structure and then code without changing it ?
Whoever said anything about never changing it? That's beyond the scope of the question. API should come first, and then implementation, because that's the way it needs to be if you're working with other people, and you should be working with other people or else you're a neet. That doesn't mean API can't come again later, no one ever even said anything about that.
>The cycle "implement feature -> refactor " is what works best.
Not for teams. And for the umpteenth time, if we're not talking about teams, we're not talking about programming.

>it pisses everyone off
I'd be much more pissed off if I wrote my code to someone else's interface only to learn that they fucked up and need to change the interface. If I write my code to their implementation and follow their compression scheme, there's no chance of this happening.