Google Decides to Withdraw from Military Artificial Intelligence Project

Publish Date: 2018-06-04

Employees at Google have learned that the internet titan will withdraw from a contract to help the United States military use Artificial Intelligence (AI) to analyze drone video footage. The move followed an outcry from internal staff, according to reports.

The association with the United States Department of Defense appears to have caused unrest within the California-based company. An internal petition calling on Google to stay out of "the business of war" gathered numerous signatures, and some employees reportedly resigned in protest of the alliance with the military.

The tech news website Gizmodo and The New York Times, citing various sources, reported that an executive from Google's cloud team told employees on Friday that the company would not seek to renew the contentious deal after it expires next year.

The deal was reportedly worth less than $10 million to Google but was believed to have the potential to lead to further technology partnerships with the military. Google did not respond to requests for comment.

Google has remained tight-lipped about Project Maven, which reportedly uses engineering talent and machine learning to distinguish objects and people in drone videos for the Defense Department.

The employee petition states: "We believe that Google should not be in the business of war."

"Therefore, we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology." The internet rights group, the Electronic Frontier Foundation and the International Committee for Robot Arms Control (ICRAC) were a few amongst those who have assessed in with support.

In an open letter, ICRAC warned that as military commanders come to view the object recognition algorithms as reliable, it will be tempting to weaken or remove human review and oversight of these systems.

Google has gone on the record stating that its work to improve machines' ability to recognize objects is not intended for offensive uses. The ICRAC letter continued: "We are then just a short step away from authorizing autonomous drones to kill automatically, without human supervision or meaningful human control."

The EFF and others emphasized the need for ethical and moral frameworks governing the use of artificial intelligence in weaponry.

In a blog post on the topic, the EFF stated: "The use of AI in weapons systems is a crucially important topic and one that deserves an international public discussion and likely some international agreements to ensure global safety."