Is there ever a reason to use linked lists?

>what is constant insertion time

>what is amortized analysis

Stroustrup gave a talk where he compared the performance of linked lists and std::vector: youtube.com/watch?v=YQs6IC-vgmo
His point is that people reach for linked lists when they know they'll be adding and removing elements frequently, but they forget that traversal and random access in a linked list are very costly. Linked lists still have their place, but only when you don't really need random access and you frequently insert or remove elements next to one you already hold a reference to.
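
A minimal sketch of the trade-off he describes (function names are made up for illustration): with std::list the insert itself is O(1), but finding the position is an O(n) pointer chase with poor locality; with std::vector the insert shifts elements, but the search runs over contiguous memory and usually wins overall.

#include <algorithm>
#include <list>
#include <vector>

// Insert v keeping xs sorted. The list insert is O(1), but the
// find_if traversal is an O(n) pointer chase with poor cache behavior.
void insert_sorted(std::list<int>& xs, int v) {
    auto it = std::find_if(xs.begin(), xs.end(),
                           [v](int x) { return x >= v; });
    xs.insert(it, v);
}

// Same operation on a vector: the insert is an O(n) shift, but the
// search is O(log n) over contiguous, cache-friendly memory.
void insert_sorted(std::vector<int>& xs, int v) {
    xs.insert(std::lower_bound(xs.begin(), xs.end(), v), v);
}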

Nope, a better implementation will use big nodes to get closer to constant access; other structures present a list interface but are built like trees underneath.

It's just a simple data structure based on nodes.

Rarely. Linked lists remain useful in general-purpose code when (1) you often need to insert an item immediately before or after some other item, by reference, or delete items by reference; (2) you never need to search the list; (3) you very rarely need to walk the entire list; and (4) you still care about the order of the list, so you couldn't just use something like a set instead.

Linked lists also serve an important purpose in real-time programming, where having things take a predictable amount of time is more important than high overall performance.

Both applications are pretty uncommon, though. The vast majority of cases where a linked list apparently makes sense would be more efficient as a vector.
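
A hedged sketch of case (1) from above (Registry and its member names are invented for illustration): if callers hold a std::list iterator as a stable handle, erasing or inserting next to it is O(1), and every other handle stays valid — a guarantee a vector can't make.

#include <list>

struct Registry {
    std::list<int> items;

    // Caller keeps the returned iterator as a stable handle.
    std::list<int>::iterator add(int v) {
        return items.insert(items.end(), v);
    }

    void remove(std::list<int>::iterator handle) {
        items.erase(handle);   // O(1), no search needed
    }
};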

If linked lists are so useless,

then how come they built a whole language around them (LISP), which has survived longer than pretty much any other language?

That was incredibly informative. I had never considered the effect of cache misses in the array v. list debate.

Also, seeing how C++ handles instances was neat. You can see how C With Classes grew out of C. Instead of a reference to the struct, have a reference to a pair of references: one pointing to a struct with the instance data and one pointing to the class definition.

Neat!

Lisp has a cult following though.

Then why did it get replaced by C everywhere?

I don't think I've ever used them.

I use a functional language, and I only ever use lists when I want to iterate over them, because their performance is so abysmal when you need random access.

I use linked lists when writing sorting programs.

Thread solved, go back to your hovels

Because they're simple. Lisp is an extremely minimal language (at its core).

It didn't.

C has a very different audience.
LISP was and is mostly used for AI, specifically because it's easy to generate LISP code.

It is good for constant insertion or removal time for infrequently used data structures.

*flips a bit of your linked list with a cosmic ray*

>not using HAMTs everywhere

AI is a big meme. At least we got Emacs out of it.

Dynamically allocated arrays and branch prediction kill linked lists for most purposes.

For those who don't use or haven't used linked lists, what do you usually use? (Normal choices, as opposed to edge cases.)

I just always use arrays, rarely dynamically allocated.

Vectors, and occasionally deques.

I used one for a real time embedded scheduler.
You want O(1) retrieval of the head, and short insertion times with minimal memory restructuring.
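
Roughly what that looks like, as a sketch (names and the deadline field are made up): an intrusive singly linked ready queue. Tasks are statically allocated and carry their own link, so no allocator ever runs, and popping the head is O(1).

struct Task {
    void (*run)(void);
    unsigned deadline;
    Task* next;
};

Task* ready_head = nullptr;

// Insert keeping the queue sorted by deadline: O(n) worst case,
// but n is tiny and bounded on a small embedded system.
void enqueue(Task* t) {
    Task** p = &ready_head;
    while (*p && (*p)->deadline <= t->deadline)
        p = &(*p)->next;
    t->next = *p;
    *p = t;
}

Task* pop_ready(void) {          // O(1) retrieval of the head
    Task* t = ready_head;
    if (t) ready_head = t->next;
    return t;
}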

A more clever but slower scheduler is better than a fast but stupid one

It depends on the application. When you only have a few tasks which are mostly deterministic, a very simple scheduler does the job as well as a very complicated scheduler.

You use it when the data section can fill up a cache line; then it makes sense.

What a stupid fucking dichotomy. He said it was embedded; his needs differ from your kode artisan ones.

No

Let's see. Going through my code, I see that I use LinkedLists in two places:
1) In a rectangle packing algorithm, which is basically a bunch of "iterate through"s and "removeIf"s.
2) An undo history, which is basically just a Stack that can sometimes behave as a Queue.

His argument is completely retarded, though, as it assumes that you'd do indexed access to e.g. get to the elements to remove. The whole point of using a linked list is that you have direct access to your links, making deletion constant-time. It shouldn't have to be said that if you want to do indexed access, you shouldn't be using linked lists.

>to make stacks
>to make queues
>to make deques
>to make pretty much any data structure
need I say more?

This is probably the most misinformative crap I've seen in a long time

How?

Isn't the implementation of lists in Common Lisp actually a bit more complicated (and performant) than the standard textbook example of a linked list? I forget what they're called.

>AI is a big meme
You are so dumb

The way Lisp did AI decades ago was indeed a meme.

>stacks, queues
>any other data structure

>constant insertion time
You're probably not going to get constant insertion time because you have to malloc a new node every time you want to insert a value. I believe there exist constant time malloc implementations, but you're only going to find those in some real-time systems.

I don't know why you'd ever care about constant insertion time over amortized constant insertion time. I can't imagine a legitimate reason to use linked lists on a system that needs real-time guarantees.

>all those white people

but muh diversity

Wouldn't you want a heap if you wanted a realtime scheduler that could manage priorities? You would end up with faster insertion times and you wouldn't have to malloc every node; you can turn an array into a heap. A side benefit of an array is that it will have better caching behavior than a bunch of malloc'd nodes.
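
A sketch of that alternative (Job and ByPriority are illustrative): std::priority_queue is exactly a binary heap laid out over a flat std::vector, so insertion and pop are O(log n) with no per-node allocation once capacity is reserved.

#include <queue>
#include <vector>

struct Job {
    unsigned priority;
    void (*run)(void);
};

struct ByPriority {
    bool operator()(const Job& a, const Job& b) const {
        return a.priority > b.priority;   // min-heap: lowest value runs first
    }
};

// Usage: push jobs, then top()/pop() always yields the most urgent one.
using Scheduler = std::priority_queue<Job, std::vector<Job>, ByPriority>;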

When the older Lisps were designed, RAM accesses took 1 or 2 CPU cycles, so cons cells made perfect sense to use, as they are quite handy for many tasks.

Since then, the ratio between RAM access time and CPU speed has just kept increasing. Now that RAM accesses cost several hundred CPU cycles, overemphasizing cons lists makes a lot less sense. Which is why newer Lisps like Clojure explicitly make it easier to work with better data structures, such as HAMTs and the closely related persistent vectors.

Julia avoids lists as a core data structure completely, and uses a dedicated data type to represent its AST instead of shoehorning it into cons cells.

The n in malloc depends on the size of the memory allocation, not the size of the array.
If I allocate 10,000 items of a given size into an array, it will be 10 times faster than allocating 100,000 items of the same size.

>I believe there exist constant time malloc implementations, but you're only going to find those in some real-time systems.

Found the C fag.

Constant-time allocators are so common in C++ that even std::list is parameterized to work with them.
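
For reference, a hedged sketch of what that parameterization looks like: any allocator with this minimal shape plugs into std::list. ArenaAlloc here is just an illustrative stub that forwards to operator new; a real constant-time version would carve slots out of a pre-sized arena instead.

#include <cstddef>
#include <list>
#include <new>

template <class T>
struct ArenaAlloc {
    using value_type = T;
    ArenaAlloc() = default;
    template <class U> ArenaAlloc(const ArenaAlloc<U>&) {}  // rebind support

    T* allocate(std::size_t n) {
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { ::operator delete(p); }
};

template <class T, class U>
bool operator==(const ArenaAlloc<T>&, const ArenaAlloc<U>&) { return true; }
template <class T, class U>
bool operator!=(const ArenaAlloc<T>&, const ArenaAlloc<U>&) { return false; }

// std::list rebinds the allocator to its internal node type.
using PooledList = std::list<int, ArenaAlloc<int>>;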

Right, lists in functional languages like Haskell are basically there for convenient control flow, not to store data in. A lazy list is basically a memoized iterator. With linear types, a compiler could potentially even eliminate the allocations if they aren't used.

Inserting and removing linked list nodes has nothing to do with malloc. Inserting a node just requires changing some pointers. The node itself is a small struct comprising a value and a pointer or two, so you can easily allocate it from a pool.
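
The pointer surgery being described, sketched for a doubly linked node (names are illustrative):

struct Node {
    Node* prev;
    Node* next;
    int value;
};

// Insert n immediately after pos: four pointer writes, no allocation,
// no traversal -- O(1) regardless of list length.
void insert_after(Node* pos, Node* n) {
    n->prev = pos;
    n->next = pos->next;
    if (pos->next) pos->next->prev = n;
    pos->next = n;
}

// O(1) removal, given the node itself rather than an index.
void unlink(Node* n) {
    if (n->prev) n->prev->next = n->next;
    if (n->next) n->next->prev = n->prev;
}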

No, it's a meme.
True artificial intelligence will never be a thing.

That would be entirely dependent on what malloc implementation you're using. A lot of them are nondeterministic.

New is an alias to malloc. The time complexity of std::list is amortized.

If you allocate nodes into a pool, you will eventually run out of memory and have to resize the pool with malloc.

>New is an alias to malloc
At least look up the class before spouting shit like this

That's not an argument. Every data structure will require heap allocations if you fail to predict the number of elements that will be added.

Only for ease of programming in C. It's not faster at runtime, but it's a lot faster to write.

Oddly, that's usually the opposite of what C programmers argue for, but most C codebases have linked lists scattered throughout. Not for speed; only because they take two lines to implement.

Crypto

Hash the contents along with the hash of the one before it. Congrats, now you have an overhyped POS data structure worth whatever idiots believe it is worth.
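
The structure being described, as a toy sketch: each block stores its predecessor's hash, so altering any block invalidates every later link. std::hash stands in for a real cryptographic hash here, and the mixing constant is arbitrary.

#include <cstddef>
#include <functional>
#include <string>

struct Block {
    std::string data;
    std::size_t prev_hash;   // hash of the previous block
    Block* prev;

    std::size_t hash() const {
        // Toy combiner, not cryptographically sound.
        return std::hash<std::string>{}(data) ^ (prev_hash * 1099511628211ull);
    }
};

// Append a block whose integrity depends on the whole chain before it.
Block* append(Block* tail, std::string data) {
    return new Block{std::move(data), tail ? tail->hash() : 0, tail};
}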

>linked lists have O(1) insertion time when you already have the pointer to the previous node, which you probably retrieved through an O(n) search (as long as you have all the nodes preallocated and will never need more than a fixed number of nodes)
you've sold me

Using malloc in embedded systems is never good. You compile with structs statically allocated for each task. If I were to use a heap, I would build it from nodes instead of an array, because it's less computationally intensive.

...

Why would anyone ever use anything but arrays and hash maps? Besides exotic structures like segment trees that real smart programmers use.

this is why vectors are faster (overall) 99% of the time

don't prematurely optimize

>what is cache locality
Contiguous resizable arrays are always better

having constant input of data

guess how arrays and hashmaps work internally

Not through linked lists I bet

Hashtables use arrays as their main backing store. Arrays are contiguous memory indexed using pointer arithmetic. Neither involves linked lists.

>What is bitcoin.

>implying there are pointers

The cache locality argument is bullshit because it assumes you're using a moronic general purpose allocator that carelessly scatters the nodes about. Gee, I wonder why a C++ programmer would assume that...

If you do things correctly and allocate the nodes from a fixed size pool, locality is fine.
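
Something like this (sizes and names are illustrative): a fixed pool with an intrusive free list. Nodes live in one contiguous block, so neighbors tend to share cache lines, and both alloc and free are O(1) and deterministic.

struct Node {
    int value;
    Node* next;
};

static Node pool[1024];            // one contiguous block
static Node* free_list = nullptr;

void pool_init(void) {
    for (int i = 0; i < 1023; ++i)
        pool[i].next = &pool[i + 1];
    pool[1023].next = nullptr;
    free_list = &pool[0];
}

Node* pool_alloc(void) {           // O(1), no syscalls, no locks
    Node* n = free_list;
    if (n) free_list = n->next;
    return n;                      // nullptr when the pool is exhausted
}

void pool_free(Node* n) {          // O(1)
    n->next = free_list;
    free_list = n;
}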

Many hash tables resolve collisions by adding elements with the same hash to a singly linked list. There are alternatives but none of the simple ones support arbitrary numbers of colliding entries.

Most hash tables move an item to a different entry in the table on collision, such as hash() + 1 instead of hash(), or a cleverer scheme along the same lines, without using any linked lists. The type you mention also exists, true, but I don't think it's at all common, as it's far slower.
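
A toy sketch of that scheme (linear probing, with a fixed table size and made-up names for brevity): on a collision, step to the next slot instead of hanging a chain of nodes off the bucket.

#include <cstddef>
#include <functional>
#include <optional>
#include <vector>

struct ProbingSet {
    std::vector<std::optional<int>> slots;
    ProbingSet() : slots(16) {}    // 16 empty slots; no resizing for brevity

    void insert(int key) {
        std::size_t i = std::hash<int>{}(key) % slots.size();
        while (slots[i].has_value() && *slots[i] != key)
            i = (i + 1) % slots.size();   // probe the next slot
        slots[i] = key;   // note: loops forever once the table fills up --
                          // exactly the failure mode raised further down
    }

    bool contains(int key) const {
        std::size_t i = std::hash<int>{}(key) % slots.size();
        while (slots[i].has_value()) {
            if (*slots[i] == key) return true;
            i = (i + 1) % slots.size();
        }
        return false;
    }
};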

youtube.com/watch?v=YQs6IC-vgmo

No performant hash table does this, it's just one of the easiest ways to implement a hash table.

>because you have to malloc a new node every time you want to insert a value
Nigga that's the dumbest thing I've ever read on Sup Forums

Open addressing fails if too many collisions occur, as eventually all available entries will be used. This isn't an intractable problem to solve, but it's not *simple*, which was my point.

>std::

Hahahahahahah stopped reading there. Nobody cares about your deprecated language grandpa

Trémaux's algorithm

If you want fast Lisp code, you replace lists with vectors, arrays and other more specialized data structures when you've got the logic working.

you've never wanted to make an index of indices?

what are you, a normie?

how else are you going to deal with hash table collisions?

>>because you have to malloc a new node every time you want to insert a value
>Nigga that's the dumbest thing I've ever read on Sup Forums
Nigga that's the dumbest thing I've ever read on Sup Forums

Kill yourself, kindly

How come most things you hear that are even remotely AI-related have Python in them?

because only autismos care about AI

Something surviving doesn't necessarily mean it's good. Consider black people.

Try implementing malloc without one.

Are you literally fucking retarded? Big-O notation has nothing to do with the actual time something takes. O(1) means that the time of the operation isn't affected by the size of the input. An operation could run in 10 seconds for every single input data set and it would still be O(1). And it's only O(1) to add at the beginning of the list, you inbred brainlet.

Queue

Do you know of a generic (std-lib) hash table implementation which does it differently? The Java implementation uses linked lists, and gcc's std::map is a red-black tree (though that's a tree map, not a hash table). CPython's dict actually uses open addressing.

Just curious

>compsci degree
>only value for 99% of jobs is knowing which standard library data structure you should choose
>don't even get that right
lol and you nerds make fun of my anthropology degree

when you need a lot of memory but dont have large blocks available

...

but you're working at starbucks

that makes sense but when does that even happen?

I only use LinkedList as a way to temporarily store values and load values from a database (it's more convenient to use than an array). Other than that, I don't have any other uses for it (but that's probably because I don't do low-level stuff that much).

I'm using one right now. And (you) can't stop me.

AI in the '50s and '60s was about symbolic manipulation and dynamic code generation. Hence, Lisp was popular.
AI in the '70s and '80s was about logic databases and how to learn axioms. Hence, Prolog.
AI now is just about scaling statistics algorithms, hence we have Python abusing boost.python to get access to C/C++ linear algebra libraries.

That's a dumb fucking question. A linked list has a very specific use case: when the entries must keep their order as you insert or remove items, but without having to comb through the whole memory page or re-sort the whole thing every time you do it. Otherwise, yeah, it's pretty shitty, and even linked-list CPU and compiler optimizations don't put it on a level with arrays. Most of the time I just use gapless arrays, because I don't care about order and O(1) removal complexity is nice.

Even then you should chunk it out. Have a bunch of 1024 KB chunks of consecutive data or something attached to a binary tree for virtually instant access. Or a linked list for the outer structure, if you're only doing iteration.

Unless random-index insertion/removal is needed. But if you have a 4 GB memory block that needs random-index insertion/removal, then it kind of sounds like you're going about the problem the wrong way. Maybe use an actual database?
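
A sketch of that chunking idea, often called an unrolled linked list (the chunk size is arbitrary): each link carries a small contiguous array instead of one element, so iteration pays one pointer chase per chunk rather than per element.

struct Chunk {
    int data[256];
    int count;       // elements actually used in this chunk
    Chunk* next;
};

long long sum(const Chunk* head) {
    long long total = 0;
    for (const Chunk* c = head; c; c = c->next)      // one pointer chase per 256 elements
        for (int i = 0; i < c->count; ++i)
            total += c->data[i];
    return total;
}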

Are you niggers seriously not using a memory pool for linked list items, mallocing every single one of them instead?

Unless your objects are very large, you're just trying to outsmart malloc at that point. Just let malloc do its thing; it's better at memory management than you are.

Unless all of your shit is the same size, or you create and delete things in batches without leaving anything behind, there's going to be very bad memory fragmentation very soon. There's a reason virtually all serious video games use memory pools, and it's not because malloc is smart.