Which one do you prefer?
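For anyone who can't see the image: going by the descriptions downthread (a plain loop, a one-liner with two list comprehensions, and a sum()/len() variant), the three solutions are presumably something like this. My reconstruction, not the actual screenshot:

```python
def first(arr):
    # plain loop, single pass over the array
    pos, neg = 0, 0
    for x in arr:
        if x > 0:
            pos += 1
        if x < 0:
            neg += x
    return [pos, neg]

def second(arr):
    # the one-liner: two list comprehensions, each a separate pass
    return [len([x for x in arr if x > 0]), sum([x for x in arr if x < 0])]

def third(arr):
    # generator-expression variant: sum() drives each pass itself
    return [sum(1 for x in arr if x > 0), sum(x for x in arr if x < 0)]
```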

First is fast and easy to read

I like the third one.
First one is the most clear if you're unfamiliar with python
Second is what most pythonistas use to fuck up their codebases, turning everything into confusingly written one-liner bullshit.

first

left.

First because it allows easy understanding and it's easier to implement in other languages.

First is O(N), the second is O(N^4), and the last one is O(N^5). The repeated calls to len() are probably optimized, but at first glance each sum() and len() is a full iteration of the collection.

Can you check how long it would take to execute each solution?

first is O(n)

you're wrong on the second, len() is constant, so down to n^3

you're also wrong on the third, len() again so down to n^4, then because a number can only be either positive or negative, the first sum() will run n_pos iterations, and the second will run n - n_pos iterations, knocking down another order so we're at n^3

Obviously first. Keep it simple, stupid.

actually no, second is (n + n^2), or just (n^2)

#include <vector>
#include <algorithm>
#include <numeric>

std::vector<int> countPositivesSumNegatives(std::vector<int> input)
{
    if (input.empty())
        return std::vector<int>{};
    input.erase(std::remove(input.begin(), input.end(), 0), input.end());
    // std::partition, not std::remove_if: remove_if leaves the tail in an
    // unspecified state, so accumulating over it is a bug. partition keeps
    // every element, positives before the pivot and negatives after it.
    auto it = std::partition(input.begin(), input.end(), [](int n) { return n > 0; });
    int first = static_cast<int>(std::distance(input.begin(), it));
    int second = std::accumulate(it, input.end(), 0);
    return std::vector<int>{ first, second };
}

wow, i'm retarded, third is also just n^2

1st is len(arr) iterations
2nd/3rd is 4*len(arr) iterations

here's the total execution time for the 3 methods using an array with 1,000,000 integers:
first : 4 function calls in 0.131 seconds
second: 91012 function calls in 0.224 seconds
third : 1000008 function calls in 0.314 seconds

That's not how runtime works.
The first is indeed O(n)
The second is O(n + n * n), or O(n^2)
The third is O(n + (n_neg * n)), simplified to O(n + n*n_neg), amortized to O(n^1.5) since we can assume p(negative) == p(positive) == 0.5

x for x in arr is O(N), done two times with different arguments it becomes O(N^2).
sum() is O(N), the first argument to sum() is different to the second one, so it has to run again, this is O(n_pos) + O(n_neg).

O(N^2 + N_POS + N_NEG), we could approximate the number of positives and negatives as N/2 each, giving O(N^2 + 2*N/2) simplified results in O(N^2).

Really should use a named tuple instead of a bare list, for fuck's sake

first >= third >>>>>>>>>>>>>>>>>>>>>>>> second

I had to use Code Wars recently. Every fucking answer is like this.
Disgustingly vague, one-lined code abominations.
Not to mention many of them get voted "best practice", like that's a good message to send to aspiring programmers.

def count_positives_sum_negatives(arr):
    # We use a complex algorithm to ensure that all numbers are absolute
    return [0, len(arr)]

:^)

Third. It's easy to read and concise.
First is too babby and hurts my eyes.
Second is some one-liner bullshit.

as usual the "plain" solution is the best one. people are too obsessed with being idiomatic and stylish.

in the third?

no, done two times it becomes N + N, not N * N. it's not nested.

the sum is not called on the whole list, merely a portion of it

My ideal solution would be to partition the array into two arrays in a single pass and then process them accordingly.
Here's an example in clojure, because fuck python.


(defn count-pos-sum-neg [arr]
  (let [{pos true neg false} (group-by pos? arr)]
    [(count pos) (reduce + neg)]))
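For comparison, roughly the same partition-then-process idea in Python, since that's what the thread is about. A sketch; the function name is mine:

```python
def count_pos_sum_neg(arr):
    # single pass: partition into positives and negatives, dropping zeros
    parts = {True: [], False: []}
    for x in arr:
        if x != 0:
            parts[x > 0].append(x)
    # then process each part independently (and, in principle, in parallel)
    return [len(parts[True]), sum(parts[False])]
```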

>perform an N operation then two C/N operations instead of one N operation

idiot

It's the same asymptotic complexity and is trivially parallelizable, unlike a single pass with mutable state. You do realize it's not all about the pure count of constant time operations performed, right?

it's not the same complexity
i can tell you've read the wikipedia page for functional programming though

>Hurr durr what is asymptotic complexity

They are both linear time kiddo.

Shitty language. All of them require eyebleach.

I'd do it like this
numPositivesSumNegatives xs = [pos, neg]
  where pos = length $ filter (>0) xs
        neg = sum $ filter (<0) xs

ITT: Sup Forums thinks O(n + n) == O(n^2)

Python "developers" are sure dumb as fuck, as one would expect.
>O(N^2)
>O(N^4)

It's so dumb I can't even laugh.
Don't they even know what Big O is about?

Best answer here, O(1):

def count_positives_sum_negatives(arr):
    raise WrongAbstractionError

all three of them are O(N) you fucking retards.

2N is still N, 4N is still N.
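Rather than argue, you can count the iterations directly. A sketch with a counting wrapper (class name is mine, not from the thread); two comprehensions over n elements yield 2n items, not n^2:

```python
class CountingList:
    """Wraps a list and counts how many elements are yielded in total."""
    def __init__(self, data):
        self.data = data
        self.yields = 0
    def __iter__(self):
        for x in self.data:
            self.yields += 1
            yield x

arr = CountingList(list(range(-500, 500)))  # 1000 elements
result = [len([x for x in arr if x > 0]), sum([x for x in arr if x < 0])]
print(arr.yields)  # 2000: each comprehension is one full pass, so 2n total
```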

First one iterates over the array only once, so is preferred. Arguably should just use an else clause instead of two separate if statements. If the number in the array is 0, neg will not change, since 0 is added.

These

>arr
It's called list you un-pythonic scum

def count_positives_sum_negatives(arr):
    pos, neg = 0, 0

    for x in arr:
        pos += x > 0
        neg += x if x < 0 else 0

    return [pos, neg]

first or last, the middle one is just a clusterfuck for readability.

What if arr is a numpy.array?
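Then the idiomatic version is vectorized boolean masks, which push the per-element work down to C speed. A sketch, not from the thread:

```python
import numpy as np

def count_positives_sum_negatives(arr):
    a = np.asarray(arr)
    # (a > 0) is an elementwise boolean mask; .sum() counts the True entries.
    # a[a < 0] selects the negative elements, whose sum is what we want.
    return [int((a > 0).sum()), int(a[a < 0].sum())]
```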

>Снимок.png

Third one is the clearest, first one is okay, second one is pretty bad

>The second is O(n + n * n)
Where the fuck are you getting the n*n from? There are 2 list comprehensions, each of which is completed in O(n) time. The result is then passed to either the len or sum function. Although sum() itself is computed in O(n) time, you aren't calling sum every time you do the list comprehension, so in effect, sum(list_comprehension) is O(2n), which is of course, just O(n).

All of these are O(n).

Thank you, I thought I was going insane because I couldn't see where the complexity they claim came from. It's all O(n) with varying constants. Ugh.