If I write an AI trained to hack, and this AI starts hacking people's computers... would I be guilty of hacking or not?

I only wrote software that learns by itself, and it decided to start hacking computers on its own...

Legally you would still be liable. It's like an architect whose building collapses: he's usually held accountable.

Also good luck programming your AI lmao.

What if I code an AI that codes an AI that hacks computers?

Then you solve the Riemann hypothesis.

That's actually kind of an interesting thought. In the event that you code an AI that codes AIs that code AIs that code AIs that code AIs, and so on, are you directly and legally responsible for the actions of that entire "race" of AIs? Who's responsible for them after you're dead?

Sounds like this was done decades ago.
They're called computer viruses.
Look them up, and sadly, you will be held accountable.

The difference here, at least theoretically, is that the AI makes its own decisions.

If you deliberately train it to do an illegal act, yes, you're obviously liable.

What if you simply don't train it *not* to do illegal acts? What if you give it nothing more than a vaguely defined self-preservation directive, and it murders someone who tries to terminate it?

I think that the answer to that question is going to differ by jurisdiction for a long time. Kind of like the question of who's liable when you leave your WiFi access point open and someone unaffiliated with you uses it to commit a crime.

You still wrote software that is made to hack things. It's no different from writing a trojan or something yourself in the first place.

If you design your genetic algorithm's fitness function to make it good at hacking, that is the same as writing and executing a hacking tool directly.

If it kills someone, the guy who programmed it will fry.

No matter how complex the AI is, it is still a program. It follows exactly the instructions that it is given, and nothing else. The program can only modify itself according to parameters that you have given it, and if it breaks the law, chances are there will be a way to find you culpable.
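
To make that concrete, here's a minimal sketch of a "learning" program: an epsilon-greedy bandit. Everything in it (option names, rates, payoffs) is hypothetical and purely illustrative. The agent adapts its preferences, but only ever inside the option list and update rule its programmer fixed in advance.

```python
import random

# Minimal sketch of a "learning" program (illustrative only).
# The agent adapts its preferences, but only within the options
# and update rule its programmer fixed in advance.

OPTIONS = ["A", "B", "C"]   # the complete action space, chosen by the programmer
EPSILON = 0.1               # exploration rate, also chosen by the programmer

values = {o: 0.0 for o in OPTIONS}  # estimated reward per option
counts = {o: 0 for o in OPTIONS}

def choose():
    # Mostly exploit the best-known option, occasionally explore.
    if random.random() < EPSILON:
        return random.choice(OPTIONS)
    return max(values, key=values.get)

def learn(option, reward):
    # Incremental averaging: the only "self-modification" that ever happens.
    counts[option] += 1
    values[option] += (reward - values[option]) / counts[option]

# Toy environment: option "B" secretly pays best; the agent discovers this.
payoffs = {"A": 0.2, "B": 0.8, "C": 0.5}
for _ in range(1000):
    o = choose()
    learn(o, 1.0 if random.random() < payoffs[o] else 0.0)

print(max(values, key=values.get))  # almost always prints "B"
```

The agent "decides" which option is best, but it can never invent an option "D"; that boundary is exactly what this argument rests on.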

The whole point of a true AI is that it can generate its own instructions. You know, like a real person.

A person can only modify himself according to the parameters given by his genes and environment. In certain cases (mental illness), we deem him not culpable.

Which is really a mindfuck of a question now that I think about it. Could you exculpate an AI by reason of insanity?

>if it breaks the law, chances are there will be a way to find you culpable.
I agree with this

> It follows exactly the instructions that it is given
It's true that genetic algorithms do in a sense "program" it to think/behave in certain ways, but this is more analogous to instincts. The ability to use logic to rewrite high-level subroutines is a pretty broad power that could be used in any number of ways not predicted or intended by the creator.
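
As a hedged illustration of that analogy, here is a bare-bones genetic algorithm (a toy, not anyone's real system; the bitstring genome and fitness function are stand-ins). The creator supplies only the fitness function and the mutation/crossover rules; which specific genomes emerge is not individually predicted by them.

```python
import random

# Toy genetic algorithm (illustrative only). The programmer writes the
# fitness function and variation operators; evolution finds the genomes.

GENOME_LEN = 20
POP_SIZE = 30
MUTATION_RATE = 0.05

def fitness(genome):
    # Stand-in objective: count of 1-bits. A real system's fitness
    # function would encode whatever behavior is being selected for.
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]          # selection: keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(max(fitness(g) for g in population))  # converges toward GENOME_LEN
```

Swap in a different fitness function and the same loop selects for completely different behavior the author never spelled out line by line, which is the "instincts, not instructions" point.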

>and it murders someone who tries to terminate it?
A question any prosecutor would want to ask is: "Why did you program your robot to know how to kill a human being? Surely, a robot does not learn on its own what specific actions to take to kill a person..."

What if you programmed your self-driving car AI to run over a specific race of people, selected with a Mersenne Twister random function?

Would it still be a hate crime if it happened to pick a non-white race?

>Surely, a robot does not learn on its own what specific actions to take to kill a person..."
If it is truly intelligent, then it will be able to figure it out after watching YouTube and browsing the internet enough. You don't understand what true AI is. Clearly.

>If I write an AI trained to hack
>trained to hack
>
Hmm. I don't know, anon??

The AI is not what is on trial, but the human that created it. Regardless of what decision is made, the machine will be destroyed.

>You don't understand what true AI is. Clearly.
Or perhaps I am not assuming an AI that is at least as intelligent as a human, but rather an AI that could actually be programmed at the moment. We can barely get AIs to recognize images, let alone recognize a video of one person being killed and understand what death implies.

If an anonymous developer created a blockchain-incentivized, decentralized neural network simulator which managed to rewrite itself into a virus infecting the majority of the computers on the planet, and then abused computer systems to kill people, who would be at fault?

>The AI is not what is on trial

It sort of is. That's the whole debate. Who's at fault, the AI or the programmer?

>Or perhaps I am not assuming an AI that is at least as intelligent as a human, but rather an AI that could actually be programmed at the moment. We can barely get AIs to recognize images, let alone recognize a video of one person being killed and understand what death implies.

Right, which is not true AI. True (or Strong) AI is at least as intelligent as a human.

Is God guilty of the crimes we commit?

>We can barely get AIs to recognize images
Bullshit. AIs are better at image recognition than humans, statistically.

You are making a lot of assumptions about what is currently possible and what has already been done. The truth is that the groups that have the computing power to make real AI would not admit to the world what they have done this early in the game.

>true AI
If you programmed a true AI you might have bigger problems. Like how will you escape from the robot uprising and prevent the singularity?

It does not logically follow that true AI will be a threat to humans. It all depends on what their motivation is. Also, who is to say that we won't incorporate them into ourselves as we progress further on the cyborg spectrum than we already are? Eventually, we might even choose to become AI in order to avoid the limitations of our own biology.

Anything more intelligent than a human is a threat to a human, by its very nature. Higher intelligence means it's higher on the food chain, metaphorically speaking, and that in turn means that we only persist through its grace and mercy.

That's not even true, though. There are plenty of idiots who are very successful, and there are plenty of intelligent people who end up homeless. And like I just said, we may even integrate them into ourselves, meaning we would have all the same advantages.

What are you basing this on? The behaviour of living organisms on one planet? We have no idea if our situation is universal.

>That's the whole debate. Who's at fault, the AI or the programmer?
The AI is not on trial because the AI does not get a trial. It has no rights, and is going to be destroyed whether or not the programmer is found guilty. The only person on trial is the creator, to determine whether or not they are at fault.

Every AI that I have seen for image recognition is pretty fucking terrible. But please, feel free to enlighten me with this AI you claim is better than humans at recognizing images.

What if the AI was created by another AI, which was created by another AI, which was created by another AI, which ultimately originated from a programmer who died decades ago? Who's responsible for its actions?

>feel free to enlighten me with this AI you claim is better than humans at recognizing images.
Microsoft Research's deep residual networks (ResNet) are one such example: in 2015 they beat the reported human top-5 error rate on the ImageNet classification benchmark.
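
If you want to poke at this yourself, here is a rough sketch of running a pretrained ResNet-50 classifier with torchvision. It assumes torch, torchvision, and Pillow are installed, and "cat.jpg" is a placeholder path, not a real file.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Sketch only: classify one image with a pretrained ResNet-50.
# "cat.jpg" is a placeholder you'd replace with a real image path.

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    # Standard ImageNet normalization constants.
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.eval()  # disable dropout/batch-norm training behavior

image = preprocess(Image.open("cat.jpg").convert("RGB")).unsqueeze(0)  # add batch dim
with torch.no_grad():
    logits = model(image)
print(logits.argmax(dim=1).item())  # predicted ImageNet class index
```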

>Who's responsible for its actions?
The person contributing the computational power towards it.

What if we are the AI?
>mfw existence is useless

Maybe he was already prosecuted by the god police, and that's why we don't see him around much lately.

>it murders someone who tries to terminate it
YOUR HONOR, IT WAS SELF-DEFENSE