Philosophy, Ethics, and Safety

The Ethics of AI

AI, like any technology, is morally neutral… but very powerful

  • Microsoft’s AI for Humanitarian Action: natural disasters, addressing the needs of children, protecting refugees, and promoting human rights

  • Google’s AI for Social Good: supports rainforest protection, human rights jurisprudence, pollution monitoring, measurement of fossil fuel emissions, crisis counseling, news fact checking, suicide prevention, recycling, etc.

  • University of Chicago’s Data Science for Social Good: criminal justice, economic development, education, public health, energy, and environmental concerns

Already, optimizing business tasks with AI can improve productivity, which increases wealth and provides more employment

Automation can replace boring or dangerous tasks that humans must do, which frees humans up for more interesting tasks.

AI assistance has the potential to help those with impaired vision, hearing, or mobility

However, there can be negative side effects…

Much the same as with nuclear power, the internal combustion engine, plastics, telephones…

Automation creates wealth… but currently that wealth flows disproportionately to the top

The UK Engineering and Physical Sciences Research Council developed a set of “Principles of Robotics”; other organizations have published similar lists. What ethical principles do you find most important?

Common Principles

  • Ensure Safety
  • Establish Accountability
  • Ensure Fairness
  • Uphold Human Rights and Values
  • Respect Privacy
  • Reflect Diversity/Inclusion
  • Promote Collaboration
  • Avoid Concentration of Power
  • Provide Transparency
  • Acknowledge Legal/Policy Implications
  • Limit Harmful Uses of AI
  • Contemplate Implications for Employment

Hot Fields

Lethal Autonomous Weapons

UN: “one that locates, selects, and engages human targets without human supervision.”

  • Land mines (since the 17th century) (banned under the Ottawa Treaty [guess who hasn’t signed?])

  • Guided missiles (since 1940s) (must be fired by human)

  • Radar-controlled guns (since 1970s)

Since 2014, there has been ongoing discussion of autonomous weapons, with 30 countries (from China to the Holy See) declaring support for a ban, while others (Israel, Russia, South Korea, the US) currently oppose one.

Many find it either morally reprehensible or technically irresponsible to give a machine the decision of whether to kill a human.

However, it can also be argued that well-designed machines would outperform human soldiers: there is no possibility of fatigue, frustration, hysteria, fear, anger, revenge, etc…

The difference between the visual tracking, navigation, and flight planning of a pizza-delivering quadcopter and a bomb-delivering one is slim.

Cookie Drone

Surveillance, Security, Privacy

Joseph Weizenbaum (1976) warned that automated speech recognition technology could lead to widespread wiretapping…

There are ~350M surveillance cameras in China and ~70M in the US. The data is certainly available; are you?

As more people than ever shop online, we become more vulnerable to cybercrime, which AI can enhance.

Defenses are also available (e.g., anomalous activity detection).
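As a concrete (and entirely illustrative) example of that kind of defense, here is a minimal sketch of flagging anomalous account activity with scikit-learn’s IsolationForest; the features and numbers are assumptions, not a production design:

```python
# Minimal sketch: flagging anomalous account activity with an
# Isolation Forest (scikit-learn). All features/numbers are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" activity: [logins/day, avg purchase ($), countries seen]
normal = rng.normal(loc=[5, 40, 1], scale=[2, 15, 0.3], size=(500, 3))

# A few suspicious sessions: many logins, huge purchases, many countries
suspicious = np.array([[40.0, 900.0, 6.0], [25.0, 500.0, 4.0]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() labels inliers +1 and anomalies -1; these extreme points
# should come back as -1
print(model.predict(suspicious))
```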

In 2000, Latanya Sweeney showed that given date of birth, gender, and ZIP code (but not name, SSN, or address), 87% of the US population could be uniquely re-identified.
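Sweeney’s result is easy to demonstrate: count how many records share each (DOB, gender, ZIP) combination; anyone in a group of size 1 is unique, and can be re-identified by joining against another dataset (e.g., voter rolls). A pandas sketch on made-up data (the column names are assumptions):

```python
# Sketch: measuring re-identification risk from quasi-identifiers.
# The data and column names are illustrative.
import pandas as pd

df = pd.DataFrame({
    "dob":    ["1970-01-01", "1970-01-01", "1985-06-15", "1990-03-02"],
    "gender": ["F", "F", "M", "F"],
    "zip":    ["60637", "60637", "60615", "60637"],
})

quasi = ["dob", "gender", "zip"]

# Size of each (dob, gender, zip) group, aligned back to the rows
group_sizes = df.groupby(quasi)["dob"].transform("size")

# Fraction of rows whose combination is unique (group of size 1)
unique_fraction = (group_sizes == 1).mean()
print(f"uniquely re-identifiable: {unique_fraction:.0%}")
# Sweeney found ~87% of the US population unique on these three fields.
```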

Netflix Prize: competitors were given de-identified movie ratings and asked to build an ML algorithm to recommend movies… however, researchers were able to re-identify the users, sometimes down to the name.

There is much ongoing work on techniques that still allow useful tools to be built without the danger of mass re-identification.

Fairness and Bias

How should a system decide… loan approval, police patrol routes, whether pretrial release or parole is granted?

We all have a moral obligation to be fair and to produce fair systems (AI or not). However… what is fairness?

  • Individual Fairness:
    • Any one person is treated like any other, regardless of class
  • Group Fairness:
    • Any two classes are treated similarly (requires summary statistics; see the sketch after this list)
  • Fairness through Unawareness:
    • Removing attributes we fear might be discriminated on
  • Equal Outcome:
    • “As long as the statistics are equal” across classes
  • Equal Opportunity:
    • If you’re qualified, you’re qualified, regardless of class
  • Equal Impact:
    • Those with a similar likelihood of success are classified the same
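Several of these criteria reduce to summary statistics over a classifier’s decisions. A hedged sketch computing the equal-outcome (demographic parity) and equal-opportunity gaps; the arrays `y_true`, `y_pred`, and `group` are made-up illustrative data:

```python
# Sketch: checking two group-fairness criteria for a binary classifier.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])   # ground-truth outcomes
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])   # model decisions
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

# Equal outcome (demographic parity): positive-decision rates match
def positive_rate(g):
    return y_pred[group == g].mean()

parity_gap = positive_rate("a") - positive_rate("b")

# Equal opportunity: true-positive rates match among the qualified
def true_positive_rate(g):
    qualified = (group == g) & (y_true == 1)
    return y_pred[qualified].mean()

opportunity_gap = true_positive_rate("a") - true_positive_rate("b")

print(f"demographic parity gap: {parity_gap:+.2f}")
print(f"equal opportunity gap:  {opportunity_gap:+.2f}")
```

A gap of zero satisfies the criterion exactly; note the two criteria can disagree on the same classifier, which is part of why “what is fairness?” has no single answer.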

Trust and Transparency

How do we trust something new?

…airplanes, digital gas pumps, elevators, electric stoves, fire alarms…

Is certification enough?

What about transparency? How much can we provide?

How accessible should audits be?

“Red Flag Law” (the UK’s Locomotive Acts once required a person carrying a red flag to walk ahead of every motor vehicle)

The Future of Work

“For if every instrument could accomplish its own work, obeying or anticipating the will of others… if, in like manner, the shuttle would weave and the plectrum touch the lyre without a hand to guide them, chief workmen would not want servants, nor masters slaves.” —Aristotle

It’s not a question of whether automation will reduce employment. The question is whether the “compensation effects” will make up for the reduction.

In the 1810s, weavers were replaced by automatic looms, giving rise to the Luddite movement

They weren’t anti-technology; they just wanted machines to be used by skilled workers producing high-quality goods at good pay, rather than by unskilled workers producing low-quality goods at low pay.

Similar effects happened in the 1930s, producing what Keynes called “technological unemployment”

However, automation also changes work: bank tellers now spend less time counting money and more time using advanced business skills.

Also important are the pace of change and how it interacts with technology-driven income inequality (“winner-take-all” markets)

Robot Rights

To be clear, I’m not kidding.

If AI has no consciousness, no qualia, then we are not obligated to provide rights.

But what if they do? If a conscious self-driving car is destroyed in a crash, is that vehicular manslaughter?

If robots have rights, can they be enslaved? Should they be able to vote? How many votes? What if I copy their software, or if they do?

If a self-driving car crashes itself, who is at fault? Is it property or a person?

AI Safety

We have long feared our own autonomous creations (Frankenstein; or, The Modern Prometheus, 1818)

Partially because “the Stranger” is scary, like ghosts, witches, space aliens, etc.

In general, it is unethical to produce dangerous AI agents; agents ought to benefit and not harm, to resist tampering and abuse, etc.

Safety engineering is a field that already exists and is very relevant.

Historically, software engineering has focused on verifying correctness, not safety

Beware unintended side effects

“Low Impact” is a potential approach: maximize utility minus the total side effect on the world
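One way to read “low impact” as an objective: score each action by its task utility minus a weighted penalty for its side effects. A toy sketch; the actions, numbers, and weight `MU` are all illustrative assumptions, not a standard formulation:

```python
# Toy sketch of a "low impact" objective: choose the action that
# maximizes utility(a) - mu * impact(a), where impact(a) estimates how
# much the action changes the world relative to doing nothing.

actions = {
    # action: (task utility, estimated side effect on the world)
    "fetch coffee carefully": (0.9, 0.1),
    "fetch coffee, knock over vase": (1.0, 5.0),
    "do nothing": (0.0, 0.0),
}

MU = 1.0  # how much we care about side effects

def low_impact_score(utility: float, impact: float, mu: float = MU) -> float:
    return utility - mu * impact

best = max(actions, key=lambda a: low_impact_score(*actions[a]))
print(best)  # -> "fetch coffee carefully"
```

With `MU = 1.0` the agent prefers the careful plan over the higher-utility but vase-destroying one; tuning `MU` trades off task performance against impact on the world.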

Beware the tragedy of the commons caused by externalities

Take great care with the King Midas problem: you may get exactly what you asked for rather than what you want

Quiz time!