Why is it that older C++ projects almost always ignored the C++ standard library and had their own 'in-house' solutions?

Because you can get better performance if you cut a library down to only what you need.

These days people just shovel in libraries and APIs to use one function and don't give a shit.

They are like Sonic. Gotta go fast.

C++ before TR1 was kind of a joke

Isn't the C++11 standard library faster these days than what most people can cook up?

It would be faster still if you cut it down to only what you were using.

Fact is, not everyone is skilled enough to take today's standard library, which has probably absorbed improvements from some of those in-house solutions, and eke even more performance out of it.

Because if you consider an "older" C++ project to be one written in the ancient times of 2016, then the C++ standard library had no way to deal with directories, so you either had to roll your own or use platform-specific libraries.

The point being: the C++ standard library was very barebones, and still is.
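
To make that concrete, here's roughly what directory listing looked like before and after C++17 added std::filesystem. A minimal sketch: the first version is POSIX-only (Windows needed FindFirstFile/FindNextFile instead); the second is portable.

    // Pre-C++17: platform-specific directory listing (POSIX dirent).
    #include <dirent.h>
    #include <cstdio>

    void list_dir_posix(const char* path) {
        DIR* dir = opendir(path);
        if (!dir) return;
        while (dirent* entry = readdir(dir))    // iterate entries one by one
            std::printf("%s\n", entry->d_name);
        closedir(dir);
    }

    // C++17 and later: portable std::filesystem.
    #include <filesystem>
    #include <iostream>

    void list_dir_std(const std::filesystem::path& path) {
        for (const auto& entry : std::filesystem::directory_iterator(path))
            std::cout << entry.path() << '\n';
    }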

This is actually the answer. The bit of resources they saved with their in-house solutions was worth it back then. Nowadays, most computers are powerful enough that it doesn't make a difference, so people just use the libraries.

>has complex equations
>didn't include complex numbers along with reals, rationals, etc.
Just why.

because inheritance was a mistake

groups.google.com/a/chromium.org/forum/m/#!topic/chromium-dev/EUqoIz2iFU4

Absolutely not. The STL is designed as a catch-all library. If you optimize for your use case, you'll blow almost anything in the STL (save for vectors) out of the water.
Finance, gamedev, embedded: soft-realtime applications all make their own data structures and algorithms. The STL is only good for smaller projects or prototyping.
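
To make "make their own data structures" concrete, here's a minimal sketch (names made up for illustration) of the kind of container gamedev and embedded code rolls itself: a fixed-capacity vector that lives entirely on the stack, so push_back never allocates. It assumes T is default-constructible; a real one would use raw storage and placement new to avoid that.

    #include <cassert>
    #include <cstddef>

    template <typename T, std::size_t N>
    class fixed_vector {
        T data_[N];              // storage is inline: no heap, ever
        std::size_t size_ = 0;
    public:
        void push_back(const T& v) { assert(size_ < N); data_[size_++] = v; }
        T&       operator[](std::size_t i)       { return data_[i]; }
        const T& operator[](std::size_t i) const { return data_[i]; }
        std::size_t size() const { return size_; }
        T* begin() { return data_; }
        T* end()   { return data_ + size_; }
    };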

>Nowadays, most computers are powerful enough that it doesn't make a difference
This sounds like something someone peddling their Electron crapware would say. I'm pretty sure the Atom devs said the same thing about their performance issues.

C++ libraries and Electron are worlds apart in terms of resource usage.

>It would be faster still if you cut it down to only what you were using.
Any semi-decent toolchain will only include the symbols a program needs and strip the rest. With static linking you can actually ship just the small set of functions you use.
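
For example, with GCC or Clang you can go further and have the linker discard unreferenced functions entirely, along these lines (file names are placeholders):

    # compile each function/data item into its own section...
    g++ -O2 -ffunction-sections -fdata-sections -c main.cpp util.cpp
    # ...then let the linker garbage-collect the unreferenced sections
    g++ main.o util.o -Wl,--gc-sections -o app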

You'd be hard-pressed to beat the STL implementations, and there's something to be said for not having to maintain extra code.

Because before C++11 it was dogshit.

Because of idiocy, mostly.
The bigger problem is when idiots refuse to update deps, and then four years later someone goes: we need to update SQLite, or even fucking Boost.

Or even better: let's just maintain a shitload of non-free internal patches to remove those damn deprecation warnings, or to re-add deprecated functions.

This isn't a problem with the standard library, this is a problem with shitty programmers.

>hard pressed
>implementation is usually done by juniors and interns
See EASTL for a public library that beats the STL by 20% on average, and EASTL is still a very generic, catch-all library. There are much easier ways to get orders-of-magnitude wins, like using a radix sort instead of any comparison sort where applicable (sketch below). The biggest improvements are algorithmic; only when you are absolutely sure the best algorithm is the same one the STL uses do you enter the territory where you'd be hard-pressed to beat it.
Of course, for a personal project or a small team, writing your own library might be overkill, and the STL is plenty fast these days for most software. But when every little bit of performance matters, ditching the STL is still a wise decision.
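
For reference, a byte-wise LSD radix sort for 32-bit unsigned keys looks something like this. Not necessarily the exact variant meant above, but it shows the idea: a few counting passes instead of O(n log n) comparisons.

    #include <cstdint>
    #include <vector>

    void radix_sort(std::vector<std::uint32_t>& v) {
        std::vector<std::uint32_t> buf(v.size());
        for (int shift = 0; shift < 32; shift += 8) {
            std::size_t count[257] = {};
            for (std::uint32_t x : v)           // histogram of this byte
                ++count[((x >> shift) & 0xFF) + 1];
            for (int i = 0; i < 256; ++i)       // prefix sums -> bucket offsets
                count[i + 1] += count[i];
            for (std::uint32_t x : v)           // stable scatter into buckets
                buf[count[(x >> shift) & 0xFF]++] = x;
            v.swap(buf);                        // 4 passes total; result ends in v
        }
    }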

Because the STL is slow. It's designed as a generic, catch-all library that works across multiple compilers, multiple versions of those compilers, all with different levels of language support and differing interpretations of the standard.

I recently wrote a few personal replacements for the slower parts of the STL (an internal flat map and tuple). I'm seeing a 4-5x compile-time reduction, with 20x less code to achieve it.
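
The flat map idea, roughly: keep a sorted vector of key/value pairs and binary-search it, instead of std::map's node-per-element tree. A hypothetical sketch, not the actual code referred to above; lookups are cache-friendly, and the trade-off is O(n) insertion.

    #include <algorithm>
    #include <utility>
    #include <vector>

    template <typename K, typename V>
    class flat_map {
        std::vector<std::pair<K, V>> items_;   // kept sorted by key
        static bool key_less(const std::pair<K, V>& p, const K& k) {
            return p.first < k;
        }
    public:
        V& operator[](const K& key) {
            auto it = std::lower_bound(items_.begin(), items_.end(), key, key_less);
            if (it == items_.end() || it->first != key)
                it = items_.insert(it, {key, V{}});   // insert keeps order
            return it->second;
        }
        const V* find(const K& key) const {
            auto it = std::lower_bound(items_.begin(), items_.end(), key, key_less);
            return (it != items_.end() && it->first == key) ? &it->second : nullptr;
        }
    };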

I think the most difficult problem is that once something becomes standard, it's hard (if not impossible) to change it. When newer language features come around, you can't redesign the standard to make full use of them.

Tuple is a great example here - you could easily add a 'size' member instead of relying on std::tuple_size_v, and the same goes for element lookup. Relying on std::get to retrieve values is rubbish too - with a well-thought-out metaprogramming library (e.g. Hana) you can retrieve values naturally using the index operator, like every other container.
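
Something like this sketch shows the access pattern being described, assuming C++17 (my_tuple and ct are made-up names; Boost.Hana's actual API differs):

    #include <cstddef>
    #include <tuple>
    #include <type_traits>

    // compile-time index: ct<1> is an integral_constant carrying 1
    template <std::size_t I>
    inline constexpr std::integral_constant<std::size_t, I> ct{};

    template <typename... Ts>
    struct my_tuple : std::tuple<Ts...> {
        using std::tuple<Ts...>::tuple;
        static constexpr std::size_t size = sizeof...(Ts);  // the 'size' member

        template <std::size_t I>
        decltype(auto) operator[](std::integral_constant<std::size_t, I>) {
            return std::get<I>(static_cast<std::tuple<Ts...>&>(*this));
        }
    };

    // usage:
    //   my_tuple<int, double> t{1, 2.5};
    //   double d = t[ct<1>];   // instead of std::get<1>(t)
    //   static_assert(my_tuple<int, double>::size == 2);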

Once you start pushing the limit of what the language can do, especially on the bleeding edge (C++17/2a), you realise how awful the STL can be.

As an addendum -
This is also why we see multiple features that do functionally the same thing: tuple replacing pair, range-based for replacing for_each, ranges replacing iterators, lambdas (IIFEs) replacing implementation functions, smart pointers replacing naked pointers; the list goes on.