/dpt/ - Daily Programming Thread

What are you working on, Sup Forums?

Previous thread:

Other urls found in this thread:

researchgate.net/publication/222464257_Garbage_collection_can_be_faster_than_stack_allocation
people.cs.umass.edu/~emery/pubs/gcvsmalloc.pdf
youtube.com/watch?v=DBXZWB_dNsw
en.wikipedia.org/wiki/Bubble_sort
askubuntu.com/questions/40979/what-is-the-path-to-the-kernel-headers-so-i-can-install-vmware

::operator new usually just calls malloc under the hood, but that's how the sepples people roll when you want uninitialized memory.
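Roughly what that looks like in practice, as a quick illustrative sketch (not anyone's posted code):

#include <cstdio>
#include <new>

int main() {
    // ::operator new hands back raw, uninitialized storage; no constructor runs.
    void* p = ::operator new(64);

    // If you want an object there, construct it yourself with placement new.
    int* n = new (p) int(42);
    std::printf("%d\n", *n);

    // Raw storage goes back through ::operator delete.
    ::operator delete(p);
}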

how do i push the length onto the stack and then pop it back into rdx without it segfaulting? i know that's the cause. it's getting a much bigger number out than i put in
section .data
HW:       db "Hello, world",10
HWLen     equ $-HW

section .text

Print:
    pop rdx

    push rcx                 ; save registers
    push rbx
    push rax

    mov rax, 4               ; sys_write
    mov rbx, 1               ; stdout
    mov rcx, OutputBuffer    ; define buffer location
    int 80h                  ; system call

    pop rax                  ; restore registers
    pop rbx
    pop rcx
    ret

global _start
_start:
    nop
    mov rax, HWLen
Dec:
    mov bl, [HW+rax-1]
    mov byte [OutputBuffer+rax-1], bl
    dec rax
    jnz Dec
    mov rdx, HWLen
    push rdx
    call Print
    jmp Quit

Quit:
    mov rax, 1               ; sys_exit
    mov rbx, 0               ; exit code 0
    int 80h                  ; system call

section .bss
OBMaxLen      equ 256
OutputBuffer: resb OBMaxLen

Why though?
I was going to answer that it would simplify resource management because you wouldn't accidentally delete a malloc'd block, but then I remembered that you already have delete and delete[].

>call Print
>jmp Quit
>Quit:

Heck if I know, this is one of the parts of C++ that wasn't well thought out.

why not modularize your assembly? just because it functions the same as if it was all one blob of code doesn't mean it should be written as one

Maybe it's better style to just never new[] and use vectors or something. Then you can count on delete always working.

If you do not understand why garbage collection is the superior memory management system you should leave this thread right now and come back when you are more educated.

the guy is literally jumping to the next instruction, stalling the processor for nothing. only a fool would accept this.

Yeah, the "modern" way of doing things is to use containers, and smart pointers.

> garbage collection is the superior memory management system

2/10

Unironically leave now.

Which is more valuable, programmer time or machine time?

Pointer bump allocation is the superior memory management system.
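For anyone who hasn't seen it, a toy sketch of what bump allocation means (all names made up, not a production allocator):

#include <cstddef>
#include <cstdint>

struct Arena {
    alignas(std::max_align_t) std::uint8_t buffer[4096];
    std::size_t offset = 0;

    // Allocation is just rounding up and bumping an offset.
    void* allocate(std::size_t size, std::size_t align = alignof(std::max_align_t)) {
        std::size_t aligned = (offset + align - 1) & ~(align - 1);
        if (aligned + size > sizeof(buffer)) return nullptr;  // arena full
        offset = aligned + size;
        return buffer + aligned;
    }

    // There is no per-object free: you throw away everything at once.
    void reset() { offset = 0; }
};

int main() {
    Arena arena;
    int* a = static_cast<int*>(arena.allocate(sizeof(int)));
    if (a) *a = 1;
    arena.reset();
}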

i'm 'the guy'. i'm not writing a compiler, i'm writing hello world to learn assembly. nice to know what you do and don't accept in hello world programs, but do you know the solution to the question?

Both use less time when using a well-chosen garbage collector.

proofs?

Python performs pretty well tbqh

that's not true though

It's been known for decades that GC is faster in machine time. Current research focuses on lowering the memory overhead.
researchgate.net/publication/222464257_Garbage_collection_can_be_faster_than_stack_allocation

damn who would have thought?

untrue, and a GC needs 5x more memory

In such useful applications as calculating the spectral norm and generating the Mandelbrot set.

>but do you know the solution to the question?
i don't do 32-bit assembly (you're mixing 64-bit registers with the 32-bit int 80h syscall interface).

spoken like a true C Linux kernel hax0r master

What should I learn as my first programming language?

Should I even bother?

>An old and simple algorithm for garbage collection gives very good results when the physical memory is much larger than the number of reachable cells.
>In fact, the overhead associated with allocating and collecting cells from the heap can be reduced to less than one instruction per cell by increasing the size of physical memory.

>Garbage collection is faster when you never need to collect garbage.

Assembly.

No

business / economics is where it's at

You might want to read the paper and citations before giving your uninformed opinion.

Pick fucking anything. Seriously, it doesn't matter. Once you learn one it's easy to learn another.

you're in /dpt/, the realm of expert FizzBuzzers, don't expect too much.

I believe that C++ rolls its own separate version of malloc.
But yes, for some reason C++ cares a lot about ensuring that memory is initialized, but it doesn't really give a fuck if a pointer is valid or not or if a const reference is really constant.

i don't like c at all but it is true i have mastered both C and the linux kernel (not entirely, but the core modules)

"""
we show that the runtime performance
of the best-performing garbage collector is competitive with explicit
memory management when given enough memory. In particular,
when garbage collection has five times as much memory
as required, its runtime performance matches or slightly exceeds
that of explicit memory management. However, garbage collection’s
performance degrades substantially when it must use smaller
heaps. With three times as much memory, it runs 17% slower on
average, and with twice as much memory, it runs 70% slower. Garbage
collection also is more susceptible to paging when physical
memory is scarce. In such conditions, all of the garbage collectors
we examine here suffer order-of-magnitude performance penalties
relative to explicit memory management
"""

people.cs.umass.edu/~emery/pubs/gcvsmalloc.pdf

Don't know the answer, but it might be a good idea to write the same program in C and disassemble it with gdb, see how C does it?

just realized it's because calling a routine pushes the address of where you left onto the stack, so that's what i'm accessing. thanks /dpt/ couldn't have done it without you

>Should I even bother?
youtube.com/watch?v=DBXZWB_dNsw

it's this
i appreciate the response though. When i disassemble C programs i see a lot of random stuff that throws me off, though i'm sure it all means something

>In particular, when garbage collection has five times as much memory as required, its runtime performance matches or slightly exceeds that of explicit memory management.
It's a good thing memory capacity is cheap these days.

It's because call pushes the return address onto the stack, so you're popping the return address. Why not just pass the value in a register?

unironically this

some people say that programming is bookkeeping basically

i was trying to keep as few registers occupied as possible. i know i could do a double pop and push but that seems excessive. literally in the next paragraph of the book i'm reading there's an example of exactly what i'm doing, with an explanation of why it doesn't work

Passing arguments in registers is the usual way to do it.

Just had to order new servers at work because 8 gigs of RAM couldn't accommodate more than four REST services written in Java. This is the power of garbage collection.

how would you pass more than a handful of arguments?

How did the meme start that any kind of programming change is too difficult to bother with? Especially related to video games.

>en.wikipedia.org/wiki/Bubble_sort
>Although the algorithm is simple, it is too slow and impractical for most problems even when compared to insertion sort.
Why is bubble sort slower than insertion sort when they both have O(n^2) and Ω(n)?

Do you know how many registers you have on x86_64?

>need to make an app android/iOS
>"just do whatever you want xDD"

anyone got ideas?

Make an app that can detect excessive levels of radiation.

probably cache, cache is magic that makes everything slow or fast

make an app to generate app ideas

Every variable and object in a function gets destroyed after return, right?

The only 3 languages that non-brainlets need to know:
>C
>Fortran
>Mathematica

right (in c++, for objects with automatic storage duration; anything you new sticks around until you delete it)
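To be precise, "destroyed" only applies to the automatic (stack) objects; a quick sketch:

#include <cstdio>

struct Noisy {
    ~Noisy() { std::puts("destroyed"); }
};

void f() {
    Noisy a;               // automatic storage: destructor runs at return
    Noisy* b = new Noisy;  // dynamic storage: NOT destroyed at return
    (void)b;               // leaked on purpose for the demo
}

int main() {
    f();                   // prints "destroyed" exactly once; *b leaks
}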

We are making a web browser! Join the devs at the IRC channel.

>I need to make lists of languages to learn because I'm a brainlet and can't pick up new things quickly

What factors do you think cause memory leaks in C++ (especially on embedded systems such as the ESP8266)?

code monkeys

memory allocated with new or malloc does not get released automatically, so trace those pointers.
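The classic leak pattern is an early return past a raw new; RAII types close that hole. A sketch (risky() is a made-up placeholder for real work):

#include <memory>

bool risky() { return true; }  // hypothetical helper

void leaky() {
    char* buf = new char[128];
    if (risky()) return;       // oops: early return leaks buf
    delete[] buf;
}

void safe() {
    auto buf = std::make_unique<char[]>(128);
    if (risky()) return;       // buf is still released automatically
}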

Thanks

You learn those, I'll learn C and asm to code those languages

Make an app that counts how many times you fart

Are there any good primers on doing database stuff+designing schemas? I have no experience with databases but the project I'm working on is going to need one, and I have no idea where to start.

i'm making a program that counts the occurrence of each word in this thread
but i'm getting these keys in my dict
{
....
"Sup Forums?Previous": 1,
"thread:": 1,
"": 1,
"::operator": 1,
"usually": 1,
"calls": 1,
"under": 1,
"hood,": 1,
...
}
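Those keys probably mean you're splitting on single spaces only; split on all whitespace and strip punctuation first. (The glued "Sup Forums?Previous" key also suggests lines got joined without a separator before splitting.) Not your language, but a sketch of the idea in C++:

#include <cctype>
#include <iostream>
#include <map>
#include <string>

int main() {
    std::map<std::string, int> counts;
    std::string token;
    while (std::cin >> token) {        // >> skips all whitespace, so no "" keys
        std::string word;
        for (unsigned char c : token)  // drop punctuation: "hood," -> "hood"
            if (std::isalnum(c)) word += static_cast<char>(std::tolower(c));
        if (!word.empty()) ++counts[word];
    }
    for (const auto& [w, n] : counts)
        std::cout << w << ": " << n << '\n';
}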

SQL I assume?

Yes, but I need foundations in database design in general as well, not just how SQL works.

Database normalization

Just make sure you normalize your tables and you'll be fine.

>/dpt/'s recommended programming languages : turing complete unemployment

every time

>just realized it's because calling a routine pushes the address of where you left onto the stack
Read about calling conventions. They're essential when reading compiler generated code.

How come software just never fucking works? Any time I try to do anything with a computer, I follow the installation directions, and immediately run into like 50 problems. Why do you guys make such awful software?

Right on the page:
>Bubble sort also interacts poorly with modern CPU hardware. It produces at least twice as many writes as insertion sort, twice as many cache misses, and asymptotically more branch mispredictions.[citation needed] Experiments by Astrachan sorting strings in Java show bubble sort to be roughly one-fifth as fast as an insertion sort and 70% as fast as a selection sort.

Also, bubble sort generally does many more comparisons than insertion sort.
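Same O(n^2), but the constants differ: bubble sort swaps (two writes each) and keeps scanning elements that are already placed, while insertion sort shifts with single writes and stops each inner scan early. A side-by-side sketch:

#include <cstddef>
#include <utility>
#include <vector>

// Bubble sort: every inversion costs a full swap (two writes), and the
// inner loop always runs across the whole unsorted prefix.
void bubble_sort(std::vector<int>& a) {
    for (std::size_t i = 0; i + 1 < a.size(); ++i)
        for (std::size_t j = 0; j + 1 < a.size() - i; ++j)
            if (a[j] > a[j + 1]) std::swap(a[j], a[j + 1]);
}

// Insertion sort: shifts with single writes and stops the inner scan as
// soon as the insertion point is found.
void insertion_sort(std::vector<int>& a) {
    for (std::size_t i = 1; i < a.size(); ++i) {
        int key = a[i];
        std::size_t j = i;
        while (j > 0 && a[j - 1] > key) { a[j] = a[j - 1]; --j; }
        a[j] = key;
    }
}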

To annoy you. Specifically you. Everyone else has the special code to make things work. But you don't.

It honestly does seem like I'm the one with special powers, to instantly fuck up any piece of software. I should be in QA not development.

we don't make software here

Please don't. We're drowning in legacy tickets as it is, and the new project is already going down in flames (though that's the hardware guys' fault).

>shen
>no foss implementation
no thanks

what a joke

Threadly reminder that all dynamic languages should have died long ago.

except lisps

Especially lisps.

why?

Because Python and JS exist

Why? It's a great hand hold. Especially when refactoring code.

Lisps are dead anyway.

Wrong.

A strong, static type system is a great hand hold when refactoring code.

Statically typed languages should have died long ago because COBOL exists?

what if you need a job?

Only the dynamically typed languages which inherit from Self (most of them) must be eliminated. Lisp is good, Smalltalk is alright as well.

Let me give you an example.

Why the fuck does vmware not work out of the box with a basic fucking ubuntu guest distro? I try to install vmware tools and I get this error:

>The path "" is not a valid path to the 4.10.0-28-generic kernel headers. Would you like to change it?

I google the problem and get this: askubuntu.com/questions/40979/what-is-the-path-to-the-kernel-headers-so-i-can-install-vmware

I try to install those packages like this: $ sudo apt-get install build-essential linux-headers-$(uname -r)

>linux-headers-... is already the newest version.

Great... Now what? Why does that apparently solve the issue for everyone but me?

Wrote ugly tests for the parser.

Now that I know how to write macros in Racket effectively, they've become a very useful tool. Most of my macros have been "definer" macros, e.g. a macro for defining AST structs and a macro for defining lexer keywords. I also write macros for testing: for instance, pic related uses a somewhat complex macro that makes it easy to represent fake source locations; it automatically constructs the input for the parser, runs it, then checks against the expected AST output.

For those who think Racket macros are trash because "muh hygiene": the $N-src identifiers are bound automatically and un-hygienically. What's nice about Racket macros is that they are hygienic by default, but allow you to fairly easily inject an unhygienic identifier into an appropriate context.

What's wrong here?

private class addButtonListener implements ActionListener {
    public void actionPerformed(ActionEvent e) {
        url = "jdbc:mysql://localhost:3306/workapp";
        username = "root";
        password = "password";

        System.out.println("Connecting database...");

        try {
            Connection conn = DriverManager.getConnection(url, username, password);
            Calendar calendar = Calendar.getInstance();
            java.sql.Date startDate = new java.sql.Date(calendar.getTime().getTime());
            String query = " insert into running (date, time, distance)"
                         + " values (?, ?, ?)";

            PreparedStatement preparedStmt = conn.prepareStatement(query);
            preparedStmt.setDate(1, startDate);
            preparedStmt.setInt (2, Integer.parseInt(timeT.getText()));
            preparedStmt.setInt (3,Integer.parseInt(distanceT.getText()));
            preparedStmt.execute();
        } catch (SQLException e1) {
            e1.printStackTrace();
        }
    }
}


Exception in thread "AWT-EventQueue-0" java.lang.NumberFormatException: For input string: ""
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:592)
at java.lang.Integer.parseInt(Integer.java:615)

Why do you use 68 spaces for indentation my dude?

>preparedStmt.setInt (2, Integer.parseInt(timeT.getText()));
>preparedStmt.setInt (3,Integer.parseInt(distanceT.getText()));

One of these.

One of the strings you're parsing with parseInt is empty; check that timeT and distanceT actually contain text before calling parseInt.

Linus says 4 is too little, I think Linus is a little bitch and that 8 is too little. So I use 68. Like a real programmer

>calendar.getTime().getTime()

But 68 is not a power of two multiple of 4, so it makes it less efficient for the compiler to process.