Google Pledges Not to Use AI for Weapons or Surveillance - NBC4 Washington

Google Pledges Not to Use AI for Weapons or Surveillance

The search giant had been formulating a patchwork of policies around these ethical questions for years but finally put them in writing

    Jeff Chiu/AP, FIle
    In this May 8, 2018, file photo, Google CEO Sundar Pichai speaks at the Google I/O conference in Mountain View, Calif.

    What to Know

    • Google CEO Sundar Pichai said in a blog post that the company is committed to building "socially beneficial" AI.

    • Google also intends to avoid creating or reinforcing bias, Pichai said.

    • The CEO didn't specify how Google or its parent Alphabet would be accountable for conforming to the principles.

    Google pledged Thursday that it will not use artificial intelligence in applications related to weapons, in surveillance that violates international norms, or in ways that go against human rights. It planted its ethical flag on the use of AI just days after confirming it would not renew a contract with the U.S. military to use its AI technology to analyze drone footage.

    The principles, spelled out by Google CEO Sundar Pichai in a blog post, commit the company to building AI applications that are "socially beneficial," that avoid creating or reinforcing bias and that are accountable to people.

    The search giant had been formulating a patchwork of policies around these ethical questions for years but finally put them in writing. Aside from making the principles public, Pichai didn't specify how Google or its parent Alphabet would be accountable for conforming to them. He also said Google would continue working with governments and the military on noncombat applications involving such things as veterans' health care and search and rescue.

    "This approach is consistent with the values laid out in our original founders' letter back in 2004," Pichai wrote, citing the document in which Larry Page and Sergey Brin set out their vision for the company to "organize the world's information and make it universally accessible and useful."


    Pichai said the principles help the company take a long-term perspective "even if it means making short-term trade-offs."

    The document, which also enshrines "relevant explanations" of how AI systems work, lays the groundwork for the rollout of Duplex, a human-sounding digital concierge that was shown off booking appointments with human receptionists at a Google developers conference in May.

    Some ethicists were concerned that call recipients could be duped into thinking the robot was human. Google has said Duplex will identify itself so that wouldn't happen.

    Other companies leading the race to develop AI are also grappling with ethical issues — including Apple, Amazon, Facebook, IBM and Microsoft, which have formed a group with Google called the Partnership on AI.

    Making sure the public is involved in the conversations is important, said Terah Lyons, director of the partnership.

    At an MIT technology conference on Tuesday, Microsoft President Brad Smith even welcomed government regulation, saying something "as fundamentally impactful" as AI shouldn't be left to developers or the private sector on its own.


    Google's Project Maven with the U.S. Defense Department came under fire from company employees concerned about the direction it was taking the company.

    A company executive told employees this week the program would not be renewed after it expires at the end of 2019. Google expects to have talks with the Pentagon over how it can fulfill its contract obligations without violating the principles outlined Thursday.

    Peter Asaro, vice chairman of the International Committee for Robot Arms Control, said this week that Google's backing off from the project was good news because it slows down a potential AI arms race over autonomous weapons systems. What's more, he said, maintaining public trust is fundamental to Google's business model, which relies on gathering vast amounts of user data.

    "They're a company that's very much aware of their image in the public conscious," he said. "They want people to trust them and trust them with their data."