Just wondering from a pure software engineering perspective

Suppose you're implementing a linked list.

And you want to add a list_length function.

Which one is better:
>having this function iterate over the entire data structure each time you use it, giving O(n) time complexity
>having a field that is incremented each time you add a node and decremented each time you delete one. This adds a bit of bloat, but you get O(1) time complexity.

Well?

Computers these days have a shitton of memory but still a pretty low number of CPU cores. And if you are working in a low-level language, you still have to allocate and eventually free the memory for the objects you store, so you need a counter anyway.

>2018
>still using linked lists
lmao

There's always a poster who just blasts autism everywhere

B

Depends on how many times you are going to use the length of the list.

But if you are using the length of a list many times, you are probably better off using an array/vector. Lists should only ever be used as a control structure, when you know the compiler is smart enough to optimize out any actual list allocation.

If the average list is short enough that computing the length is cheap, A.

You should always be looking at the data you have and not reference it from somewhere else

Always prefer wasting memory over processing. 90% of optimizations are caches.

I'm no software engineer, but I'd say the latter: with my limited experience in Java, you can declare a private field in your linked list class and use it to keep the count.

B.

B is just eight bytes per instance, four or even two if you can be pretty sure this list isn't going to be absolutely stuffed to the gills. If you have enough linked lists that this actually starts to become a problem, the contents of those lists will (usually) have long since exhausted your RAM anyway.

A, on the other hand, is a cache-miss adventure: loading n items from totally random locations in RAM, pointlessly causing tons of main-memory accesses (and cache evictions) every time you need a size value.

If you're going to talk about cache miss adventures, might as well tell him not to use lists at all.

Yes, it's also the reason why we generally don't use linked lists, but if you've decided for some reason that you're going to use a linked list you might as well make it easier on yourself.

sure

>cache miss on linked lists
premature optimization

Oh fuck off.
>"Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."

>the speed of noncritical parts of their programs
Fundamental data structures are not non-critical.

B. Not having length on lists, arrays, or strings in constant time is ugly.
>this adds a bit of bloat
It's really just a single integer per whole list.

This, unless it's a web browser. If it's a web browser, just delete it.

Easy: a length counter, plus an extra field, something like "struct node *lru;".

Naive ones are.

And what if modifying the data structure is in the performance-critical path of the code, then?

>>Have a field that is incremented each time you add a node, (--) each time you delete a node. This adds a bit of bloat, but you get O(1) time complexity.
Duh, dumbass, and no, 32 fucking bits isn't going to kill you even on an embedded system.

On an embedded system you'd only need 8 bits for array size.

stick figure saying Neat is so cringey

I was referring to the (int) type, which varies per platform but defaults to a 32-bit or 64-bit type on the respective architecture. I have no clue what you're on about regarding array size, since I'm talking about data primitives... And if you're trying to say 8 bits (a byte), i.e. uchar, is all you need, then sure, you can of course use it. However, you'd better know how big your linked list could get before playing pro optimizer, saving a couple of bytes and getting blown the fuck out when your LL exceeds 256 entries, which is trivial. So, anything else to add?

>he thinks you can implement an efficient unordered mapping container without linked lists

This depends on the use case, duh.

The true redpill is that there is no such thing as a generic data structure.

>Suppose you're implementing a linked list.
First thing I do is get myself checked for brain damage.
>hurr durr they're gud tho
have fun chasing pointers and ruining locality then

Store the goddamn length in a variable. Bloat? One cycle for each add and delete? Wut.

>you need counter anyway
This is wrong in so many ways I don't know where to start.