You have 180 seconds to write a program that will identify the duplicate element (there's only 1 such repeating element) from an array of N integers.

You can only use O(1) memory.

For example, given the array {3, 5, 9, 3, 1, 2} you must return 3.

do your own homework

shut up faggot, this is not homework I already know the solution

no idea about the memory requirement cus im an engineering brainlet programming in matlab

u = unique(a); u(hist(a, u) > 1)

Loop on i, compare items at index i and 2*i (modulo total size)

that uses extra memory to construct the histogram

kys

for i = 0 to arrayLength {
    for j = 0 to arrayLength {
        if (array[i] == array[j]) return array[i]
    }
}

at least i wasnt so insecure that i didnt actually post a solution

suicide pact with me senpai

>implement bubble sort
>sort the array from small to large
>loop through it and check if x = ([x] + [x+1])/2

Simple. Do you have more of these?

hey everyone we've got an O(N^2) brainlet here

oh boy we have two of these creatures

also your solution will always return array[0] faggot you had literally one job but you inefficiently give the wrong answer why are you alive

You said it uses O(1) memory, which it does
If you meant to say time complexity, well, you should have said O(N), because O(1) isn't possible

oh yeah, for j = i+1

I said O(1) memory, yes. Technically you delivered exactly what was asked for, but your solution is still insanely stupid.

that will not solve anything faggot it'll still return the same number just fucking nudge yourself off a cliff

heapsort instead of bubble sort

I don't think it's possible in less than O(n log n)

function arrDup(arr) {
    return arr.filter(function(el, pos, a) {
        return !(a.indexOf(el) === pos);
    });
}
result for [1,1, 2, 3]
would be 1

that algorithm only needs memory to store 3 integers, but big O is used to represent algorithm complexity not memory usage, and my correction makes it correct

This is fucking easy in exponential time... Am I missing something?

BAM BAM INEFFICIENT FAGGOT ALERT

O(N^2) is not optimal. Think.

It is possible to solve it in O(N) time complexity and O(1) space complexity. Maybe you're just a faggot? Have you considered that?
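(For the record: this is true for the classic version of this puzzle, where the n+1 values are all drawn from 1..n. Then the array can be read as a function i -> a[i], and the duplicate is the entrance of a cycle, which Floyd's tortoise-and-hare finds. OP's own example {3, 5, 9, 3, 1, 2} breaks that precondition, since 9 doesn't fit in 1..5, so the sketch below is an assumption about what OP means, not his confirmed solution.)

def find_duplicate(a):
    # Assumes len(a) == n + 1 and every value is in 1..n, with exactly
    # one value repeated (OP never actually states this; it's an
    # assumption). Reading a as the function i -> a[i], the repeated
    # value is the entrance of a cycle in the functional graph, which
    # Floyd's tortoise-and-hare finds in O(n) time and O(1) memory.
    slow = fast = a[0]
    while True:                  # phase 1: meet somewhere inside the cycle
        slow = a[slow]
        fast = a[a[fast]]
        if slow == fast:
            break
    slow = a[0]                  # phase 2: walk to the cycle entrance
    while slow != fast:
        slow = a[slow]
        fast = a[fast]
    return slow

print(find_duplicate([1, 3, 4, 2, 2]))  # -> 2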

big-O notation is used to represent space too, you numbskull

big O is used to represent any asymptotic complexity. your solution is correct, I just modified it to fit OP's requirements, which he forgot to mention in the original post.

also I just noticed
>x = ([x] + [x+1])/2

wtf nigga just do [x] == [x+1]

well it uses constant space, so you're the numbskull, you should have said performance not memory

I have considered it, given I've been dating a man for the past 4 years, but I will give your problem another thought. I'm not going to fit in 180 seconds however, apparently too brainlet for that

Finally the correct answer. Good work user. You're alright in the head.

Oh I get it. OP should have put the O(N) time complexity constraint in too.
There are only m possible values, where m = numeric_type.max, so you only need that many bits to solve the problem. Keep an array of m boolean values, a, initialised to false. For each number, i, if a[i] == true, return that number, else set a[i] to true and continue.
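A minimal sketch of that, with m shrunk to a toy value instead of numeric_type.max:

def find_duplicate_bitmap(a, m=16):
    # One boolean flag per possible value, as described above. m plays
    # the role of numeric_type.max here; for real 32-bit ints the table
    # has 2^32 entries, "constant" only in the most lawyerly sense.
    seen = [False] * m
    for v in a:
        if seen[v]:
            return v
        seen[v] = True
    return None

print(find_duplicate_bitmap([3, 5, 9, 3, 1, 2]))  # -> 3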

>wtf nigga just do [x] == [x+1]
That's not nearly robust enough. Simple code has no place in the real world.

No, that's not a constraint. It's optional. But it's a test to weed out the faggots.

wait, do we assume the biggest integer exists? that's kind of cheating

Unless you specify O(N) time complexity the problem is trivial

I wouldn't call it cheating, it's constant on whatever architecture(s) you're developing for. Developing a computer algorithm without a computer to run it on is meaningless.

loop all items and add item to a dictionary
check dictionary if item exists before adding
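Presumably something like this. It's O(n) time, but the table grows with the input, so it's O(n) extra memory, exactly what OP ruled out:

def find_duplicate_dict(a):
    seen = set()             # grows up to n entries: O(n) extra memory
    for x in a:
        if x in seen:        # average O(1) lookup
            return x
        seen.add(x)
    return None

print(find_duplicate_dict([3, 5, 9, 3, 1, 2]))  # -> 3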

well in that case you can consider any algorithm O(1). you can fit in RAM only so many cities of the Traveling Salesman problem...

Is there an O(n) fast, O(1) wide solution for infinite integers?

Dumbass! An algorithm, by definition, has its complexity benchmarked INDEPENDENT of the architecture. That is how you get the Big-O complexity of it. You clearly have no idea what complexity even is and are probably a brainlet that thinks

print "Hello"


and

String hi = "Hello"
print hi

are the same. Fuck off, brainlet trash.

Hash table obviously. This is pathetically easy

>what is constant space
>what is computer science

but hey, let's go back to INTEL VS AMD VS NVIDIA debates when half of Sup Forums doesn't even know basic CS

both of those are O(1)
why are you calling other people dumb when you don't know what you're talking about

>instead of ints, elements are structs with many elements
Have fun using combinatorics to declare an untenably large array of booleans, user.

>An algorithm, by definition, has its complexity benchmarked INDEPENDENT of the architecture

Proof?

why are there people in this thread who think hash tables are O(1) wide?

If we are going to pick nits and consider infinite integers, then technically you need O(log N) bits of memory to iterate over the array, not O(1).

Here, this one is O(n log n). It does a quicksort on the array, and if there's a duplicate it crashes.
#include <stdio.h>
#include <stdlib.h>

void swap(int* arr, int a, int b) {
    int t = arr[a];
    arr[a] = arr[b];
    arr[b] = t;
}

/* Quicksort around arr[0]. The moment it meets a value equal to the
   current pivot it has found a duplicate and aborts ("crashes"),
   which is the whole detection mechanism. */
void mqsort(int* arr, unsigned size) {
    if (size < 2) return;
    int pivot = arr[0];
    unsigned left = 0, right = size - 1;
    while (left < right) {
        if (arr[left + 1] < pivot) {
            swap(arr, left, left + 1);   /* pivot slides one slot right */
            left++;
        } else if (arr[left + 1] == pivot) {
            abort();                     /* duplicate: crash as promised */
        } else if (arr[right] > pivot) {
            right--;
        } else if (arr[right] == pivot) {
            abort();                     /* duplicate: crash as promised */
        } else {
            swap(arr, left + 1, right);  /* big one right, small one left */
        }
    }
    mqsort(arr, left);
    mqsort(arr + left + 1, size - left - 1);
}

int main() {
    int size = 10;
    int arr[10] = {0};
    for (int i = 0; i < size; i++) arr[i] = rand() % 100;
    mqsort(arr, size);
    printf("done\n");
}

>Thinking assigning a variable is not an added step.

You are an idiot. Assuming you are reassigning every time for a new variable (which is the obvious implication), it is O(2N)

PLEASE BE TROLLING

That is true if you consider real computers. The complexity given for most algorithms assumes a RAM machine.

>trying to respond to OP seriously

>Sup Forums
kys faggot

You dont understand what big O notation means, which is kind of embarrassing if you want to pretend you're so much smarter than the people arguing about consumer electronics

Not independent, it's usually an abstract RAM machine (if it's meant to be run on real computers).

You can't be independent because there can be a hypothetical architecture with O(1) array traversal.

By the definition of big-O notation. It is used to benchmark the performance OF THE ALGORITHM.

A theoretical measure of the execution of an algorithm, usually the time or memory needed, given the problem size n, which is usually the number of items.

It is a mathematical representation of the number of steps it takes to complete an algorithm. A sorting algorithm that runs in O(n log n) time will run in O(n log n) whether it is run on Debian or an ENIAC. It does not matter. Why am I even telling you this though? If you do not even know what Big-O notation is, you clearly are not ready to even begin optimizing anything.

>O(2N)
I stopped taking you seriously around about here...
Whether you are correct or not, I doubt the value of any information that comes from you beyond this point.

I don't see any way to do this in O(n) time and O(1) memory, OP. Care to enlighten us?

>For example, given the array {3, 5, 9, 3, 1, 2} you must return 3.

0011
0101
1001
0011
0001
0010

Can you xor each element of the array, then negate?
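Quick check on OP's own example says no: XOR makes the duplicate cancel itself out, so what's left is the XOR of the values that appear once.

from functools import reduce
from operator import xor

print(reduce(xor, [3, 5, 9, 3, 1, 2]))   # 15 == 5 ^ 9 ^ 1 ^ 2; the two 3s cancelled
print(~reduce(xor, [3, 5, 9, 3, 1, 2]))  # -16, not 3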

>Ignore all statements because of one error.
Yeah, good reasoning. I see you are perfectly mentally fit to be judging anyone.

OP is a troll. It is impossible to do it in O(1) memory because how would you keep track of the element you are on?

do you even know what O(1) means

>OP is a troll. It is impossible to do it in O(1) memory because how would you keep track of the element you are on?
He has already specified that one integer counts as O(1), rather than the technically more accurate O(log N), for the sake of this exercise.

One operation.

lmao
Please go back and read a book.

static int CuckFinderV1(int[] array)
{
    long register = 0;

    for (int i = 0; i < array.Length; i++)
    {
        long bit = (1L << array[i]);  // one flag bit per value; only sound while values fit in 0..63
        if ((register & bit) != 0)
            return array[i];
        register |= bit;
    }

    return 0;
}

memory, not operations, and O(1) memory means you use a constant amount of memory that doesnt scale with the size of the input

>Invents some excuse that violates the original premise
OP is a troll. I already stated that. If he wanted that to be the rules, he should have specified it before work began on the solution and I for one am not going to do homework for an OP that clearly does not even know big-o notation and is misusing it.

start.program() {
use-only = O(1)
array1 = 3.5.9.3.1.2
if givenarray array 1
do return 3
}


python 3

nums = [5, 2, 3, 4, 4, 6, 7]
for i, val in enumerate(nums):
    new_list = nums[:i] + nums[i+1:]   # copy of the list without element i
    if len(set(new_list)) == len(new_list):
        print("duplicate value is {0}".format(val))
        break

>If he wanted that to be the rules, he should have specified it before work began on the solution
I don't necessarily disagree, but in fairness it should be noted that outside strict complexity-theoretical contexts, operations like A = B + C are often assumed to take O(1) time and produce O(1)'s worth of results for simplicity's sake, unless they are talking explicitly about bigintegers or shit like that.

So the autist in me agrees with you, but OP's notation IS common practice, for better or worse.

Which is why it is a useless distinction. So, to OP,

int[] hugeArray = new int[10000000000000];

and

int[] smallArray = new int[2];


Are fine because they are constant? Fuck off. No, seriously, fuck off. If you think using more operations to do something, while keeping memory the same in modern computers makes sense, you are a dumbass not worth talking to. Yes, it is constant, but it is a useless distinction when time complexity is universally more important than space in modern electronics. Also, OP has already changed the rules part-way through, so the original premise is void.

Is memory the only requirement? Heapsort and then traverse, keeping in memory the last seen element until there's a match between that and the next element. Easy, bird.
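Sketch of that, assuming mutating the input is allowed. Python's built-in sort isn't a heapsort and isn't O(1) space, so read the sort() as a stand-in for an in-place heapsort:

def find_duplicate_sorted(a):
    a.sort()                     # stand-in for an in-place O(n log n), O(1)-space heapsort
    for i in range(1, len(a)):   # after sorting, any duplicate pair is adjacent
        if a[i - 1] == a[i]:
            return a[i]
    return None

print(find_duplicate_sorted([3, 5, 9, 3, 1, 2]))  # -> 3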

This takes at least O(N^2) time and O(N) additional memory. It might take O(N^2 log N) if that set is a tree.

nvm I realized this won't work for {0,0,1}

>Thinks supporting bad practice is good practice.
It is not about being an autist. It is right and wrong. If OP thinks consuming extra memory, but not accounting for it in complexity is good, he is a fucking retard.

I'm just telling you what big o notation means I'm not telling you how to efficiently program
a byte or a megabyte of memory is still O(1) if it's a constant and doesnt depend on the size of the input
and yes memory complexity is irrelevant next to operational complexity

maybe it's O(1) memory
[3, 5, 9, 3, 1, 2].forEach((e, i, a) => {if(a.indexOf(e) != i) process.exit(e)})

Doesn't O(1) memory just mean the memory usage is a constant that doesn't vary with the size of the input? Then where does the array I'm traversing get stored? If the array is 2 elements vs 10000 elements then why don't I need more/less memory to store it?

What's the "memory complexity" of this function?

void hello(int arr[]) {
    printf("hello\n");
}


I guess you just only count the additional memory required by the program, not including storing the input?

>If OP thinks consuming extra memory, but not accounting for it in complexity is good, he is a fucking retard.
But that's clearly not what OP actually thinks. What he actually thinks is that he meant O(log N) additional memory, written per common practice as O(1), and assuming that you know what he meant (for it's common practice) whether it's technically accurate or not. Which I am quite happy to look down upon with a condescending and disapproving gaze, but it does not make him a retard.

>I guess you just only count the additional memory required by the program, not including storing the input?
yes. the memory complexity of that is O(1)

Nope. Indexof increases the stack.

something with maps/dictionaries?

That certainly works, but it takes more than O(1) memory.

...

Okay, so then you take in the array. Make a big array with length = biggest valid integer + 1 (because we need an extra spot for 0), then go through the input, checking/setting bigArray[v] = true for every value v, until you come across a v where bigArray[v] is already true, then you print v and quit.

Seems easy? What am I missing here.

Nice self pic. Did you take that with your phone? Guessing you did not store it on your non-existent SIM.

In-place sorting algorithm
Traverse the array

...

disgusting

the fact that you're using 512 MB of RAM for a simple problem?

t. I am a freshman CompSci student

So OP, when are you going to reveal your solution to us? I'm quite curious now.

According to OP, memory is totally irrelevant, which is stupid. Even more stupid is his ignoring of time complexity, but OP is not too bright.

The trick is you xor all the numbers and you get the result. But that's basically a hack, the formal most efficient solution would be
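For anyone asking for elaboration below: as far as I can tell, the XOR trick only pans out if you know the value universe, e.g. the array is all of 1..n-1 plus one of those values a second time. XOR everything against XOR(1..n-1) and each single occurrence cancels its twin, leaving the duplicate. It does not apply to arbitrary values like OP's example. A sketch under that assumption:

def find_duplicate_xor(a):
    # Assumes a contains each value in 1..len(a)-1 exactly once, plus
    # one of them repeated once more. Every single occurrence cancels
    # against the matching term of XOR(1..n-1); the duplicate survives.
    acc = 0
    for x in a:
        acc ^= x
    for x in range(1, len(a)):
        acc ^= x
    return acc

print(find_duplicate_xor([1, 3, 4, 2, 2]))  # -> 2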

OP is trolling. They do not have a solution. We are doing his homework for him.

Radix sort.

i missed you knifebird

Oh, right. I think the OP should have said that I shouldn't use a shitload of memory, but maybe that's supposed to be obvious.

>you xor all the numbers
I find these solutions neat but IDK if they actually demonstrate good programming ability?

In place sorting the array seems incredibly obvious though, I feel pretty dumb for not thinking of that. What's the best sorting algorithm to use here and why?

You realize that creating a set is adding to the memory complexity?

>In place sorting the array seems incredibly obvious though, I feel pretty dumb for not thinking of that.
Always consider sorting your data

Just linear search and compare.

>I find these solutions neat but IDK if they actually demonstrate good programming ability?
they dont, he just read the trick on the internet and wanted to brag about it on Sup Forums. Choice of sorting algorithm is the same as choosing a sorting algorithm anywhere, depends what kind of data you're putting in

>The trick is you xor all the numbers and you get the result.
Elaborate?

>But that's basically a hack, the formal most efficent solution would be
That one uses O(n) memory.

And the memory overhead you are willing to accept. Otherwise, you would just Heapsort everything.

>Elaborate?
google it

If you want to use O(1) memory do this, but it's computationally slower, and saving cycles is more important than saving memory

Lessons to learn from this thread
>O(1) memory complexity is a joke

I did google and could not find anything applicable to this problem. So can you elaborate?

Reminder this isn't O(1) memory since you need log n bits to store the indexes.