Google employees resign in protest over controversial Pentagon AI project, report says



About a dozen Google employees are resigning in protest over the tech giant’s involvement in Project Maven, a controversial military program that uses artificial intelligence, Gizmodo reports.

 

Project Maven, which harnesses AI to improve drone targeting, has been a source of concern for a number of Google employees. Last month, over 3,100 Google workers signed a letter addressed to the company’s CEO, Sundar Pichai, asking him to pull the tech giant out of the project.

 

Announced last year, Project Maven is designed to swiftly pull important data from vast quantities of imagery.

 

The resigning employees’ concerns range from ethical worries about the use of AI in drone warfare to qualms about Google’s political decisions and a potential erosion of user trust, according to Gizmodo.

 

The tech news website cites an internal Google document containing written accounts from many of the employees detailing their decisions to leave. Multiple sources have reportedly shared the document’s contents with Gizmodo.

 

The Mountain View, Calif.-based firm is said to be using machine learning to help the Department of Defense classify images captured by drones.

 

More....

http://www.foxnews.com/tech/2018/05/14/google-employees-resign-in-protest-over-controversial-pentagon-ai-project-report-says.html
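For a rough sense of what "using machine learning to classify drone imagery" looks like in practice, here is a generic single-image classification sketch using an off-the-shelf pretrained model. Nothing below is from Project Maven; the model, its label set, and the file name are all stand-ins.

```python
# Generic image classification with a pretrained model (illustrative only;
# Maven's actual models, labels, and pipeline are not public).
import torch
from PIL import Image
from torchvision import models

# Pretrained ResNet-50 and its matching preprocessing transforms.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()
preprocess = weights.transforms()

# "frame_0001.jpg" is a placeholder for one still frame of drone video.
frame = preprocess(Image.open("frame_0001.jpg")).unsqueeze(0)

with torch.no_grad():
    probs = model(frame).softmax(dim=1)

# Report the five most likely classes and their confidence scores.
top5 = probs.topk(5)
for score, idx in zip(top5.values[0], top5.indices[0]):
    print(f"{weights.meta['categories'][int(idx)]}: {score.item():.1%}")
```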

 

 

Many more people are ready and willing to take those jobs. As long as this is for offensive purposes, I have no problem with it. Apple recently started working on drones for the US as well, though for different reasons.


AIUI, Maven is intended to make the identification and classification of objects in drone imagery (cars, buildings, people) more accurate, which in turn should help prevent the targeting of innocents.

 

If you're against that, don't let the door hit your ass....


Good riddance. Working to help your own country's national defense should be seen as a privilege, not something to be criticized. I wonder if they protested Google helping China with AI as well, or whether they're just hypocrites.


And out of the 3,100-plus people who signed the petition, only about a dozen quit. I will give credit to those who stand by their convictions, but leaving a good job at Google... mistake. I am sure Google has a normal turnover rate and has seen a dozen people go at once before. Just another day for them.


19 hours ago, slamfire92 said:

Shame people seem to have a problem with national defense.

And it's a shame some people seem to be OK with AI deciding who to kill...

 

The decision of who to kill and when in a military scenario should always be a human decision.

 


8 minutes ago, FloatingFatMan said:

And it's a shame some people seem to be OK with AI deciding who to kill...

 

The decision of who to kill and when in a military scenario should always be a human decision.

 

I agree, but if they instructed the AI to kill one particular person and provided it with pictures/info to identify that person, isn't it carrying out an order given to it by a human, thus making it a human decision?

I guess they could program the AI to text the HMFIC for approval before making the kill shot, if that'd make you feel better :)
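Only half kidding, too; a bare-bones version of that approval gate is easy to sketch. Everything below is invented for illustration, nothing more:

```python
# Toy human-in-the-loop gate: nothing happens unless a human explicitly
# approves. All names, IDs, and messages here are made up.
def human_approves(target_id: str) -> bool:
    reply = input(f"Approve engagement of target {target_id}? [y/N] ")
    return reply.strip().lower() == "y"

def engage(target_id: str) -> None:
    if not human_approves(target_id):
        print(f"{target_id}: no approval received, standing down.")
        return
    print(f"{target_id}: engagement approved by human operator.")

engage("T-1234")  # placeholder target ID
```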


10 minutes ago, slamfire92 said:

I agree, but if they instructed the AI to kill one particular person and provided it with pictures/info to identify that person, isn't it carrying out an order given to it by a human, thus making it a human decision?

I guess they could program the AI to text the HMFIC for approval before making the kill shot, if that'd make you feel better :)

You're assuming the AI has targeted the right person. How many times does Siri screw up again? :p

Also, I firmly believe that no drone should be running on auto when it strikes its target. It should be human-operated, even if remotely. Has Hollywood taught us nothing?

 

Edit: A human has a vested interest in making sure he has the right target, as an error would lead to punishment. A machine doesn't care if it's right or not and just kills. How are you going to punish a machine if its target data was wrong?

 

Edit2:  The problem here is that most people don't understand what's known in AI circles as the "stop button" problem.  Here's an excellent link that explains it, after which you should understand why AI should never be given the ability to kill.
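Until you watch it, here's a toy version of the core incentive, with numbers I've made up myself. An agent that simply maximizes expected reward does better, by its own math, if its off switch stops working:

```python
# Toy "stop button" numbers (made up for illustration). The agent earns
# reward R for finishing its task; a human may press the stop button with
# probability p, aborting the task for a reward of 0.
R = 1.0   # reward for completing the task
p = 0.3   # chance the human presses the stop button

ev_compliant = (1 - p) * R  # leaves the button working: 0.7
ev_disabled = R             # quietly disables the button first: 1.0

# A pure reward-maximizer "prefers" the world where the button is dead.
print(f"expected reward if compliant: {ev_compliant}")
print(f"expected reward if button disabled: {ev_disabled}")
assert ev_disabled > ev_compliant
```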

 

 


26 minutes ago, FloatingFatMan said:

And it's a shame some people seem to be OK with AI deciding who to kill...

 

The decision of who to kill and when in a military scenario should always be a human decision.

 

War isn't going away. By being better able to identify your targets, you ensure surgical strikes are more accurate, so we have fewer blunders like those we saw under the Obama administration. Humans are prone to error, after all, but if we can have both machines and humans making an assessment, I would imagine that improves your odds of succeeding in your mission. (Correct me if this is incorrect, though.)
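Back-of-the-envelope version of what I mean, with invented error rates: if a human analyst and an independent model each misidentify a target 10% of the time, requiring both to agree should cut misidentifications roughly tenfold.

```python
# Invented numbers: independent error rates for a human analyst and a model.
human_error = 0.10
model_error = 0.10

# If acting requires both to (wrongly) flag the same target, and their
# errors are independent, the combined misidentification rate multiplies.
combined_error = human_error * model_error
print(f"human alone: {human_error:.0%}, human + model: {combined_error:.1%}")
# human alone: 10%, human + model: 1.0%
```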

 

You made an argument about people being OK with AI deciding who to kill, but nobody here actually said that.


6 minutes ago, dead.cell said:

War isn't going away. By being better able to identify your targets, you ensure surgical strikes are more accurate, so we have fewer blunders like those we saw under the Obama administration. Humans are prone to error, after all, but if we can have both machines and humans making an assessment, I would imagine that improves your odds of succeeding in your mission. (Correct me if this is incorrect, though.)

You made an argument about people being OK with AI deciding who to kill, but nobody here actually said that.

That's where this technology is going, and that is its eventual purpose. Only by speaking out NOW can we stop it.

 

I have no objection to AI being used to enhance other intelligence gathering, but it should never be the one making the final decision or carrying it out.


13 minutes ago, dead.cell said:

War isn't going away. By being better able to identify your targets, you ensure surgical strikes are more accurate, so we have fewer blunders like those we saw under the Obama administration. Humans are prone to error, after all, but if we can have both machines and humans making an assessment, I would imagine that improves your odds of succeeding in your mission. (Correct me if this is incorrect, though.)

You made an argument about people being OK with AI deciding who to kill, but nobody here actually said that.

My thought is that this tech is probably being developed, or will be, with or without the US or US companies. We either get with the times or get left behind. And from what I read, the AI is being used to ID targets and is not given the right to ID, target, and execute without human intervention. This is not a fully automated drone AI death machine.

 

https://www.defense.gov/News/Article/Article/1254719/project-maven-to-deploy-computer-algorithms-to-war-zone-by-years-end/

Quote

“People and computers will work symbiotically to increase the ability of weapon systems to detect objects,” Cukor added. “Eventually we hope that one analyst will be able to do twice as much work, potentially three times as much, as they're doing now. That's our goal.”

 

On 5/14/2018 at 10:31 PM, DocM said:

AIUI, Maven is intended to make the identification and classification of objects in drone imagery (cars, buildings, people) more accurate, which in turn should help prevent the targeting of innocents.

 

If you're against that, don't let the door hit your ass....

?


6 minutes ago, techbeck said:

“People and computers will work symbiotically to increase the ability of weapon systems to detect objects,” Cukor added. “Eventually we hope that one analyst will be able to do twice as much work, potentially three times as much, as they're doing now. That's our goal.”

For now... I have two words for you: feature creep.

 


3 minutes ago, FloatingFatMan said:

For now... I have two words for you: feature creep.

 

Possibly, but not right now.  I am also not going to make assumptions.

 

Fact is, this tech will help ID the proper targets and reduce casualties among innocents. The human component currently makes a lot of mistakes here and can only get better with some tech supporting it. If it becomes fully automated later on, then that's a different discussion.


16 minutes ago, FloatingFatMan said:

That's where this technology is going, and that is its eventual purpose. Only by speaking out NOW can we stop it.

 

I have no objection to AI being used to enhance other intelligence gathering, but it should never be the one making the final decision or carrying it out.

You keep framing your argument around this because it's the discussion you want to have. I'm not about to entertain it, though, because it's all based on assumptions and hypotheticals. That's conspiracy talk.

 

13 minutes ago, techbeck said:

My thought is that this tech is probably being developed, or will be, with or without the US or US companies. We either get with the times or get left behind. And from what I read, the AI is being used to ID targets and is not given the right to ID, target, and execute without human intervention. This is not a fully automated drone AI death machine.

 

https://www.defense.gov/News/Article/Article/1254719/project-maven-to-deploy-computer-algorithms-to-war-zone-by-years-end/

 

My worry is the wrong people getting behind this tech, people who don't know what they're doing. Google has a lot of experience in this regard, and I'd rather have solid minds behind the ability to identify people than just about anyone else.

 

Not a Google fan either, but I believe we can objectively agree on where they sit in the realm of machine learning. Besides, spying historically seems to be Google's strong point. :p


3 minutes ago, techbeck said:

Possibly, but not right now.  I am also not going to make assumptions.

 

Fact is, this tech will help ID the proper targets and reduce casualties among innocents. The human component currently makes a lot of mistakes here and can only get better with some tech supporting it. If it becomes fully automated later on, then that's a different discussion.

Having worked on military contracts in the past, I know exactly how those buggers think, despite what they tell you their aims are. Rest assured, "Terminator"-style drones are where they're going with this crap. Bad idea.


5 minutes ago, FloatingFatMan said:

Having worked on military contracts in the past, I know exactly how those buggers think, despite what they tell you their aims are. Rest assured, "Terminator"-style drones are where they're going with this crap. Bad idea.

Assumptions are all this is. I currently work with gov contracts, and this is not how things always go. And if the US doesn't develop this, someone else will; I'm betting others are already working on it.

 

 


1 hour ago, FloatingFatMan said:

And it's a shame some people seem to be OK with AI deciding who to kill...

 

The decision of who to kill and when in a military scenario should always be a human decision.

 

 

Agreed, but you seem to be assuming it's the AI that will make the decision in Maven-equipped systems, which is not the case.

 

Current US policy is that the decision to fire requires a "human in the loop," but if China, Russia, or others adopt an autonomous hunter-killer policy, all bets are off.

 

Another issue is that humans won't be fast enough to make decisions when fighters, bombers etc. are maneuvering and fighting at Mach 5+.  Hypersonic vehicles, civilian and military, are very close.


14 minutes ago, DocM said:

 

Agreed, but you seem to be assuming it's the AI that will make the decision in Maven-equipped systems, which is not the case.

Current US policy is that the decision to fire requires a "human in the loop," but if China, Russia, or others adopt an autonomous hunter-killer policy, all bets are off.

 

Another issue is that humans won't be fast enough to make decisions when fighters, bombers etc. are maneuvering and fighting at Mach 5+.  Hypersonic vehicles, civilian and military, are very close.

Like I said, as long as humans are in the loop, I'm OK with it. Once they're taken out of the picture, only bad things will happen.

16 minutes ago, techbeck said:

Assumptions are all this is. I currently work with gov contracts, and this is not how things always go. And if the US doesn't develop this, someone else will; I'm betting others are already working on it.

 

 

Assumption? Yes, but an informed assumption based on experience. Are your gov contracts actual military ones, involving actual hardware used in combat? Mine were.

 

