Post-AIpocalyptic

Within most of our lifetimes a country will use a fully autonomous (e.g., machine learning-based) weapons system. There are treaties to prevent this, but let's be real - the enormous advantage it would confer in warfare will be too much to resist. Assume this doesn't destroy the world - not because we can't reasonably expect it to, but because if it did we'd all be dead, so it's pointless to consider. Autonomous warfare will transform geopolitics at least as dramatically as the atomic bomb did, but how might that look? How will it affect our everyday lives?
If you don't believe in nukes or machine learning, feel free to sage. I believe in them and think the above scenario is all but certain to happen.

Other urls found in this thread:

bbc.com/news/technology-38569027
youtu.be/qv6UVOQ0F44
arxiv.org/pdf/1502.01852v1.pdf
drive.google.com/file/d/0B1T58bZ5vYa-QlR0QlJTa2dPWVk/view
us.battle.net/sc2/en/profile/3041644/1/AlphaGo/

>when AI is the only topic Sup Forums struggles to apply expert opinion to, and everyone believes they're entitled to their own
Do not make or post these threads unless you have a STEM degree.

Sage.

You don't know that to a certainty. A hundred years ago, people swore on their mothers' graves that everyone would be driving flying cars by now. Futurists are full of bullcrap.

plot twist: i do

Then you know NNs only work in large solution spaces, and that other algorithms require in-depth knowledge of the subject to design. Basically, only an idiot hasn't realised the singularity is impossible: there isn't a programmer alive who is self-aware enough to emulate themselves in code, i.e. to write a program that can write programs.

But you're not, you started this thread.

Machine learning could feasibly be used to recognize targets in a warzone. Apply that to weaponized quadcopter-like drones and you have a deadly force that is a thousand times more cost efficient than sending troops in.

It doesn't take a genius to think up any number of applications of deep learning to weapons and war.

The difference is that we already have the tech to make it happen - and likely already have it in classified engineering environments (though it may need some time to mature). It is not yet politically feasible to use such technology, though.

>could feasibly be used to recognize targets in a warzone.
With an 80-95% success rate at best... Oh dear, those children included in the other 5-20%.
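
To put rough numbers on that (a back-of-envelope sketch; the figures are made up for illustration):

people_scanned = 1000
accuracy = 0.95                        # the optimistic end of the 80-95% range
misclassified = people_scanned * (1 - accuracy)
print(f"~{misclassified:.0f} misclassifications per {people_scanned} people scanned")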

This is like the joke about the engineers who designed an aircraft and realised they'd be flying in their own design, so they all abandon ship knowing it's dangerous - except the programmer, who knows it won't even take off.

Your ideas about replacing everything with AI won't take off.

The singularity and self operating drones are very different things. It's a large step up from self driving cars, but the same idea.

Because big fucking bombs have a higher success rate right?

The dataset for image recognition (which would be used to identify armed opponents) is near-unlimited thanks to Google/CAPTCHA and countless government research efforts.

The drone flight and obstacle avoidance tech is nearly good enough to navigate complex urban environments.

The military has already tested drone swarms as a means to accomplish military objectives (see bbc.com/news/technology-38569027).

You are putting your head in the sand if you are incapable of seeing that drones will be a major part of war in the future, and likely a means of control after that.

Obviously not a STEM guy, so can someone explain why this
youtu.be/qv6UVOQ0F44
kind of program couldn't be used for programming or warfare?

Certain applications like counter-battery operations would be made much easier. Look how strong AlphaGo is compared to top professionals. This kind of learning can be applied to optimize fields of fire, battlespace shaping, and logistics.

Identifying enemies will still be up to humans.

I'm at work and don't have much data; can you explain it? In general, everything can be used for war; my current favourite is islamophilia
>when you actually believe a newly industrialised civilisation is outsmarting the current leading atomic-age civilisation
Haha, sure, yeah, they totally haven't been watching you since you set foot in the country, they're not waiting for you to meet someone they actually want to arrest.

Intelligence is nice, but I'd rather have an ops team in a dark room staring at computer monitors and yelling in my ear.

Humans are very effective tools, don't sell them short. They're still the most intelligent things currently existing.

What if the military AI goes full tay?

More applications:

Imagine a very realistic first-person shooter game. Now allow Deep Mind to make an AI which would optimize squad or platoon level tactics like CQB in buildings. Then expand the battlespace to include combined arms operations. The AI will probably think of very novel solutions. If the enemy is capable of using the same tools, then it gets really crazy. Just look at how AlphaGo handles humans and how it plays against itself. Every move it makes has a purpose that manifests itself dozens of moves later. In war, the results would be both beautiful and scary.
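
To make "plays against itself" concrete, here's a runnable toy in Python: a single value table plays both sides of a 10-stone Nim game (take 1-3 stones, taking the last stone wins) and learns strong play purely from self-play outcomes. This is just the bare idea; AlphaGo's actual pipeline adds deep networks and tree search.

import random
from collections import defaultdict

Q = defaultdict(float)   # value of (stones_left, stones_taken), shared by both sides
ALPHA, EPS = 0.1, 0.2

def pick(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPS:                        # explore occasionally
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(stones, m)])  # otherwise play greedily

for _ in range(20000):
    stones, history = 10, []
    while stones > 0:                # the same policy plays both sides
        m = pick(stones)
        history.append((stones, m))
        stones -= m
    reward = 1.0                     # whoever took the last stone won
    for state in reversed(history):  # credit moves, alternating winner/loser
        Q[state] += ALPHA * (reward - Q[state])
        reward = -reward

for s in range(1, 11):               # near-optimal play leaves a multiple of 4
    moves = [m for m in (1, 2, 3) if m <= s]
    print(s, "stones -> take", max(moves, key=lambda m: Q[(s, m)]))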

Things don't go full Tay. Tay did what we call overfitting, which means mimicking the training data. This is actually a good thing for artistic NNs, though some experts will disagree.

If you turn off training, things can't go full Tay.
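
For the non-programmers: "turning off training" is literally a couple of lines. A minimal PyTorch sketch (the tiny Linear model is just a stand-in for any trained network):

import torch

model = torch.nn.Linear(10, 2)    # stand-in for any trained network
model.eval()                      # switch off training-mode behavior (dropout etc.)
for p in model.parameters():
    p.requires_grad = False       # no gradients, so no weight updates, ever

with torch.no_grad():             # pure inference: the model can't learn from inputs
    out = model(torch.randn(1, 10))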

Just program it to dust everyone that fits a certain size metric with ketamine or suboxone and then roll in the kill teams, à la the Moscow theater crisis.

It was a joke, user.

>Humans are very effective tools, don't sell them short. They're still the most intelligent things currently existing.
Humans are great, but if a process is repetitive, iterative, or algorithmic, it's better left to a computer or machine. Humans will still be needed to ID targets and set the agenda. This isn't selling humans short. It's humans making the most of their resources.

>arxiv.org/pdf/1502.01852v1.pdf
Microsoft deep learning surpasses humans in image recognition

Now imagine instead of the difference between go-kart and racecar, it's the difference between armed combatant and civilian.
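
That arxiv link is He et al. 2015 ("Delving Deep into Rectifiers"), the paper that first beat the reported human baseline on ImageNet classification. One of its contributions is the PReLU activation - ReLU with a learned negative slope - which is small enough to sketch here:

import numpy as np

# PReLU: identity for positive inputs, a learned slope `a` for negative ones.
def prelu(x, a):
    return np.where(x > 0, x, a * x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(prelu(x, a=0.25))   # [-0.5 -0.125 0. 1.5]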

>drive.google.com/file/d/0B1T58bZ5vYa-QlR0QlJTa2dPWVk/view
Google AI uses image recognition to recognize tumors in microscopy images, beating out experts in the field.

It is also used in all large search engines and in Google/Amazon advertisements for targeted content. There are limitless military applications for this. The only thing stopping us from using this tech extensively is that it is scary as fuck to let an AI decide things like this.

Image recognition is something that can be trained; it's exactly the monotonous activity described above. Whether it's strategic to fire or not is a human decision and doesn't have a certain solution; that's another thing humans can do which computers never will: take responsibility.

To add to this, tech leaders are already assuming automated cars will 100% for sure be the way of the future because they are less error-prone than people. Why does this magically not apply to the military?

I bet someone will try to make a good case about using AI for target recognition because it is less error prone than people.

Bees, user. Weaponized bees.

>Google AI uses image recognition to recognize tumors in microscopy images, beating out experts in the field.
This.

If you've ever seen thermals/IR from drones or air support, you'll know it's quite easy for a computer to pick out human silhouettes. Human operators on the ground need to indicate the area to light the fuck up and a computer can hit all the appropriate silhouettes. The computer will be able to compensate for all factors that affect ballistics.
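
A crude sketch of why that's easy, using OpenCV on a synthetic frame (a real system would add tracking and classification; the fake "warm body" here is just a bright rectangle):

import cv2
import numpy as np

frame = np.zeros((240, 320), dtype=np.uint8)          # stand-in for an IR frame
cv2.rectangle(frame, (100, 80), (120, 160), 200, -1)  # a warm "person"

# Hot bodies are bright in thermal imagery: threshold, then find blobs.
_, hot = cv2.threshold(frame, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(hot, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if h > 1.5 * w:                       # crude person-like aspect ratio check
        print(f"silhouette candidate at ({x}, {y}), size {w}x{h}")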

>Post-apocalyptic
>"""""""""Post""""""'''''"'
That's the thing boy, there is nothing left after the apocalypse

>computers can't be strategic
>but somehow can still beat world class chess and checkers players
>can evaluate and analyze 1000x more variables in a fraction of the time at 100% accuracy and apply ROE flawlessly, never has morale problems and acts only as programmed with tested logic and tactics
>but somehow people will always be better at it

People have flaws and off-days. AI doesn't. If there are exactly 100 variables to consider in a given situation, the AI will always properly weight and analyze all 100 of them with unvarying precision. A person is very unlikely to take all variables into account or even know the right variables to consider.

There is no way an automated force with mature technology and algorithms could be beaten by humans.

AI systems will not be realized for another few centuries at the minimum. The human brain processes thoughts in 11 dimensions, and we can barely manage binary systems. AI is an extremely long way off from being tangibly created.

>Whether it's strategic to fire or not is a human decision and doesn't have a certain solution; that's another thing humans can do which computers never will: take responsibility
Exactly.

Don't forget training. AI can play FPS games with custom levels to optimize CQB for personnel on the ground. Let's say troops need to learn how to secure a particular building but for whatever reason don't presently occupy it. Make a level in the FPS game with the same parameters as the building and have it play out some solutions. This is what DEVGRU did to train for the Osama Bin Laden operation in Pakistan. They had to build a shoothouse similar to the compound and try out different permutations for the interior (ISR of the interior wasn't available). But an AI can work out these solutions much faster.

This. Policy and agenda get set by humans. Otherwise let the computer run the algorithms and come up with solutions. That'll free up time for humans to work on more important things. No one designs cars by hand anymore. It's all algorithmic.

Australia is probably just afraid to lose his job tbqh.

Chess is a game of finite possibilities that can be tested iteratively. Chess AIs aren't very sophisticated; they just keep picking the moves with the best expected outcomes.

Real life is not a game; it has infinite possibilities, and testing them would inevitably mean failure - in the context of this thread, the death of soldiers on our side.

Just because chess AI exists and you can't play chess doesn't necessarily mean a chess AI is smarter than you.
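
For the record, "picking the moves with the best outcomes" has a name: minimax search. The bare idea fits in a few lines (leaf numbers are position scores from the point of view of whoever is to move there; real engines add an evaluation function and pruning):

# Negamax: my best move is the one that leaves my opponent's best reply worst.
def negamax(node):
    if isinstance(node, (int, float)):   # leaf: an already-scored position
        return node
    return max(-negamax(child) for child in node)

# Toy tree: two candidate moves, each with two opponent replies.
tree = [[3, -2], [1, 4]]
print(negamax(tree))   # 1: we pick move two; the opponent holds us to the lesser leaf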

AI here isn't actually anything more than a strategy aid and simulation device; no one is impressed by that. Obviously very useful software but people here think they'll command platoons of bots if they sign up to the special forces.

>AI systems will not be realized for another few centuries at the minimum.
AlphaGo and the Deep Mind team have already made brilliant strides. Don't fall behind.

What game do you want the AI to master to prove its strategic prowess?
Starcraft?
A grand strategy game?

EMP.
Goodnight technological advantage; hello Mr Rock-in-a-Sock.
Machine-reaping WMDs will be used in that scenario, as the humanity angle no longer counts.
EMP shielding will weigh down and ruin the first generation of small autonomous weapons if it has to be applied retroactively.
Microwave and magnetic weapons, plus common-sense countermeasures like security foam, could be splurged at drones.

Cost vs effectiveness is the equation to be considered. Are smart exploding mini drones that choose the highest-ranking target better than the 100x as many toe poppers you could buy with the same money?
Depends on who you are and how much money you have. America will be paying millions for a smart target-choosing weapon, but jihadis will just send a suicide bomber (i.e., a smart target-choosing weapon system) that costs next to nothing.
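
That equation is easy to run for yourself (every number below is invented for illustration):

budget = 1_000_000
options = {
    "smart mini drone": {"unit_cost": 20_000, "p_effect": 0.9},
    "toe popper":       {"unit_cost": 200,    "p_effect": 0.02},
}
for name, o in options.items():
    units = budget // o["unit_cost"]
    print(f"{name}: {units} units, ~{units * o['p_effect']:.0f} expected hits")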

The creators do not even fully comprehend what is going on in these systems once they have been programmed. For all we know, the success with Go could've been purely mathematical; it's only in the absence of that knowledge that we let ourselves assume it possessed intuition or foresight.

Rock paper kiwi?

Turing dating test: if it can convince me it's a snobby white girl, I'll consider it over.

>when you regularly play with tinder bots and they regularly fail Turing tests
That being said, I probably Turing-tested a few real people who were just too lazy/unintelligent to seem like a real person with a real personality; women are just like that when things are too easy for them.

If computers are so smart, they would learn the outcome of any war via simulations.
Hence, by these simulations, economic loss could be calculated. Once two countries declare war on each other, a simple receipt will print out telling the loser how much they owe the winning country.

>no one is impressed by that.
That's a mistake and a potentially deadly one if we're talking about war. It's indicative of a lack of understanding of what these tools can and cannot do. People's expectations are shaped by (((Hollywood))) portrayals of AI or click-bait headlines. For basics, people can start with theory of computation (finite state machines to Turing machines).

More fun is to play with tools like Conway's Game of Life or Jeffrey Ventrella's GenePool. They're free simulators and you can learn a lot intuitively.
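
For anyone who wants to try, the whole of Conway's Game of Life fits in a few lines of Python:

import numpy as np

def step(grid):
    # Count each cell's eight neighbours by shifting the grid (wraps at edges).
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

glider = np.zeros((8, 8), dtype=int)
glider[0, 1] = glider[1, 2] = glider[2, 0] = glider[2, 1] = glider[2, 2] = 1
for _ in range(4):     # after 4 steps the glider has moved one cell diagonally
    glider = step(glider)
print(glider)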

>Obviously very useful software but people here think they'll command platoons of bots if they sign up to the special forces.
Any conventional forces would get rekt by Korean middle school StarCraft players controlling swarms of armed drones.

Not being able to extrapolate AI to military applications just makes you seem like someone stuck in the past.

Here are some of the options for a drone swarm scouting a war zone and coming under fire (a rough sketch in Python; every helper function here is hypothetical):

def on_attack(swarm, roe, hq):
    # Use the swarm's distributed sensors to triangulate the enemy instantly.
    enemy = swarm.triangulate_enemy()
    # Decide a response from the ROE, enemy location, and firepower info.
    if roe.allows_fire(enemy) and swarm.outguns(enemy):
        swarm.return_fire(enemy)          # 1. immediate direct retaliation
    elif swarm.can_flank(enemy):
        swarm.flank(enemy)                # 2. drone flanking maneuver
    elif hq.friendlies_nearby(enemy.position):
        hq.request_fire_clearance(enemy)  # 3. check with HQ, then retaliate or don't
    else:
        swarm.retreat()                   # 4. retreat and inform HQ of enemy position
        hq.report_enemy(enemy.position)

There you go, basic AI, though one used for real would have a million times more complexity.

They don't need to pass as human to make sound tactical decisions. Why would you ever think that?

We just need a brain simulated on an exascale supercomputer, at electric-circuit speeds. Something like that would have all the time in the world to be educated and to devise general AI. It's also reasonably possible in the next few decades with the continued increase in computing power.

Google has been looking at Starcraft or another RTS for more research, yeah. There are some articles floating around talking about how much more difficult it is when the machine actively has to plan and account for unknowns, as opposed to a board with open information.

>The creators do not even fully comprehend what is going on in these systems once they have been programmed.
Exactly. The computer came up with brilliant solutions that no human professional was able to think of. Its games are still a puzzle to pro players today. Humans still make the decisions and set the agenda. In the CQB training example, the computer will also come up with brilliant solutions. It doesn't need to know what it's doing or how it does it. That's our job as humans. The AI is our slave.

>For all we know, the success with Go could've been purely mathematical; it's only in the absence of that knowledge that we let ourselves assume it possessed intuition or foresight.
It wasn't done "mathematically". AlphaGo used neural networks combined with tree search. Look into their Nature publication.

Can't find any videos of its matches except by some Spanish dude, but its BNET account is here.

us.battle.net/sc2/en/profile/3041644/1/AlphaGo/

Given unlimited APM it could brute-force its way through, but considering they're limiting its APM to human levels, it'll be interesting to see how long it takes for it to reach Grandmaster level.

Neat.

IIRC it's also being forced to read raw pixel data to see where the units/buildings are, which further limits it.

Unlimited APM is huge, though, definitely. There are some videos of dudes training a machine on SC1 to perfectly split and micro units against enemy forces. It was pretty amusing watching it annihilate much larger enemy forces.

People can extrapolate if they try, but they don't want to lose their cushy retail/service job to a kiosk, so they envelop themselves in wilful ignorance and pretend AI "can't work" and humans are still necessary for EVERYTHING.

100+ years ago they'd complain that the horseshoe industry would go out of business because of cars.

robots are too easy to destroy

And if the goal of the war is not economic victory but total elimination of the enemy, then what? Or conversion of the enemy to a certain religion?

Not so easy to print a receipt for that. Plus, who's going to accept a receipt that a machine calculates?

The metagame is interesting too: Besides two or more strategic competitors with AI swarms of dangerous drones with "perfect" micro, there's the electronic warfare aspect. If my enemies have AI drone swarms, how can I disrupt them? How far can I escalate before my actions justify the enemy using a nuke? Or vice versa. That's where the humans come in.

the first step is the invention of quantum computing, something that might even happen in the next 3 to 5 years

it will be something that will reshape technology on a completely new level

ethical issues we have now will pale beside the new plethora of issues such as transhumanism etc

Ultimately it comes down to this: Who gets what, where, when, how, and why? If two groups of people have a fundamental disagreement about the answer they will be strategic competitors at best or at worst go to war. The war ends when one side gets to live out their answer to the question. Sometimes it may be necessary to completely eliminate the other side (like radical islamic terrorism).

At that point it could just be a literal production war of whichever military could shit out combat drones faster in order to overwhelm the other. Resources in, effectiveness out.

And yeah, presumably people would be fearful (rightfully) enough to keep humans in the decision making at least to some degree. Have a hard time imagining otherwise.

That's different because it has an end objective.
Sure, you could leave a Roomba in a room and let it bump around until it finds the door, but that doesn't mean it can solve any problem other than that specific task.

That video is only applicable if the computer can use what it's learned from Mario on, say, Halo.
The A.I. must be capable of abstract thought.
It's funny when you apply this test to humans, because sometimes you find someone has picked up a misconception and kept it long into adulthood.
These are intellectual blind spots; A.I. is especially bad at pruning irrelevant knowledge.

>At that point it could just be a literal production war of whichever military could shit out combat drones faster in order to overwhelm the other. Resources in, effectiveness out.
The fundamentals of war are the same: Resources. Who gets what, where, when, how, and why? Production of implements of war and trained personnel to use it.

EMP is all the more reason to KEEP humans in the mix. AI swarms will just be another step in the escalation ladder to full-blown strategic nuclear war. The more steps in the escalation ladder a leader possesses, the more likely he'll be able to keep his nation out of a MAD scenario. Russia's "hybrid wars" are successful because they're effective and still below NATO's Article 5 threshold. Same thing with China vis-à-vis Southeast Asia. Obama was too much of a cuck to do anything about it even though top generals were telling him the very same things we're posting in this thread.

>That video is only applicable if the computer can use what it's learned from Mario on, say, Halo.
Google Deep Mind already achieved this with a computer learning purely from the pixels it "sees". Look at their Jewtube channel of it playing various Atari games.

It will be a long time before those machines are cheap enough to actually be used in war.
It costs far less money to send in grunts than top-of-the-line tech.

Are you saying that the A.I. success rate from the previous game carried over to a different game of a similar genre?

For instance Mario and Kirby?

If the A.I. had to start from scratch to achieve its objective then it didn't do what it was designed to do.
Humans can take skills learned from one game and apply them to another; in other words, all first-person shooters are Doom clones.

>The fundamentals of war are the same: Resources. Who gets what, where, when, how, and why? Production of implements of war and trained personnel to use it.
Good point.

>mfw reading this thread
>mfw half the people in here don't know what they're talking about

DeepMind is the game changer.

Those grunts have to be trained, housed, fed, and compensated if they are wounded or killed. Funerals are expensive, too.

That's definitely the case right now but things could quickly change in a decade or three.

>Are you saying that the A.I. success rate from the previous game carried over to a different game of a similar genre?
Not entirely sure what you mean, user, but the kicker is this: the programmers didn't change the program from one game to the next. They gave the program one goal: get the highest score you can. The program learned to play Atari games by actually playing. It played randomly or retardedly in the beginning and achieved superhuman strategy and tactics later on.

>If the A.I. had to start from scratch to achieve its objective then it didn't do what it was designed to do.
The programmers designed it to learn from "seeing" the pixels directly instead of from the game's code, and they achieved that. There may be other programmers who are willing to try other goals. These guys work with neural networks, and their published, peer-reviewed papers are on the Deep Mind site.
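
The tabular skeleton of that "one goal: maximize score" loop is tiny. Toy example below (a 5-cell line world instead of Atari pixels, and a lookup table instead of DQN's convolutional net):

import random

# World: positions 0..4 on a line; reaching position 4 scores a point.
Q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

for episode in range(500):
    s = 0
    for _ in range(20):
        # Epsilon-greedy: mostly exploit the best-known move, sometimes explore.
        a = random.choice((-1, 1)) if random.random() < EPS \
            else max((-1, 1), key=lambda a: Q[(s, a)])
        s2 = min(4, max(0, s + a))
        r = 1.0 if s2 == 4 else 0.0
        # Q-learning update: nudge toward reward plus best estimated future value.
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in (-1, 1)) - Q[(s, a)])
        s = s2
        if s == 4:
            break

print([max((-1, 1), key=lambda a: Q[(s, a)]) for s in range(4)])   # all 1s: "go right"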

>Humans can take skills learned from one game and apply them to another, in other words all first person shooters are Doom clones.
I don't think AI is there yet. This is why we still need humans.

In the US I blame niggers for mucking up our education system. We'd be at Singapore tier math and science if it weren't for niggers/spics and (((Hollywood))) giving people false sense of what AI is.

I meant: does the A.I. use all the stuff it learned from its previous game, or does it start from scratch?
If it's to successfully emulate a human characteristic then it should be able to use all of its experience to learn a new game environment.

It will be a nuclear wasteland where these robots hunt surviving humans. The elites live in bunkers. There is nowhere to escape.

No, it started from scratch from game to game. Applying Mario to Final Fantasy is still a human thing.

I sat next to a smelly dark Singaporean today in class.

Everything is started from scratch so we can see how the A.I. learns. We could make it so the A.I. learns almost instantly from prior experience, but where's the fun in that?

Deep learning works in a way that you could essentially load up things that have been learnt in the past and apply them to the present. The main goal now is to see how the A.I. learns without any aid from humans or machine code.
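
"Load up things learnt in the past and apply them to the present" is standard practice under the name transfer learning. A minimal PyTorch/torchvision sketch (generic technique, nothing DeepMind-specific; assumes a recent torchvision):

import torch.nn as nn
from torchvision import models

# Take a network pretrained on ImageNet: this is the "past experience".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                       # freeze the old knowledge
# Bolt on a fresh head for the present task; only this part gets trained.
model.fc = nn.Linear(model.fc.in_features, 2)     # e.g., 2 new classes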

>If it's to successfully emulate a human characteristic
Ask yourself why you would want to emulate human characteristics in war. That's what humans are for. If our goal is to augment our personnel so we can win, we don't want to emulate human characteristics. We want AI that will do a repetitive, iterative, or algorithmic task as efficiently as possible. That way the humans can do more important things.

Emulating human characteristics is for service jobs, call centers, help desks, video game AI, or AI waifus. These things are important, but this thread is about war.

>I don't think AI is there yet. This is why we still need humans.
What I want to see is an A.I. that
1. plays one game and gets proficiently good at it,
2. plays a similar game with slightly different gameplay mechanics,
3. returns to the first game and back again,
4. and after having done this a couple of times, realizes either that they are separate games or that it should behave differently in the two.

Let's say one game has usable doors and the other doesn't.

The A.I. still recognizes the doors in both video games

Will it attempt to open both the real and fake doors in either video game?

Or will it not attempt to open any doors because it can't in one video game?

Or will it know that it can open doors only in this video game, and not make an attempt in the other because it knows it can't?

>Within most of our lifetimes a country will use a fully autonomous (e.g., machine learning-based) weapons system.
We already have them. Wicked good ones. The older ones are open source/public domain now.
>There are treaties to prevent this
No there aren't. At least we never signed one.

I understand, but robots still need things like self-preservation to mitigate costs.
If we had robots that ran into combat without fear of being blown up by a mortar shell, they would be less effective.
A robot should know to proceed only when it is safe within a reasonable tolerance.
Also, all robots in a combat situation would need to be linked, both to learn from fallen robots' mistakes and to have active intelligence about the location of targets.

WYSIWYG.

>robots
>running

Why would you make something that has legs when it could fly?

Flying takes a lot of energy, and sometimes you need a robot that can open doors and go up stairs.
Sure, it shouldn't be our first goal, but it should be a goal.
Tanks with A.I. would be great and fairly easy to get working.
But so much of our combat is done in an urban environment.