Philosophy, Ethics, and Safety

The Limits of AI

Weak AI vs. Strong AI

  • Weak AI: “Machines acting as if they were intelligent”

  • Strong AI: “Machines are thinking, not just acting as if they were”

  • The term “strong AI” later came to refer to “human-level” AI

“Aerial flight is one of the great class of problems with which man can never cope” –Simon Newcomb (two months before the Wright Bros. flew at Kitty Hawk)

Is Turing right again?

Turing considers the “argument from informality”: the objection that human behavior is just “too complicated” to ever be encoded in a machine

“Artificial Intelligence pursued within the cult of computationalism stands not even a ghost of a chance of producing durable results” –Kenneth Sayre (1993)

They were mostly referring to Good Old-Fashioned AI (GOFAI)

The qualification problem: the impossibility of explicitly specifying every rule, every state, and every contingency in advance.

However… probabilistic systems have proven effective in open-ended domains, and deep learning can capture irregular and even unknown rules.
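As a toy sketch of this contrast (all predicates and weights below are invented for illustration, not from the source): a GOFAI-style rule base must hand-enumerate every exception and fails outright on any contingency its author forgot, while a simple probabilistic score still produces a graded judgment.

```python
# Toy sketch of the qualification problem. A rule-based classifier
# must list every contingency explicitly; a weighted-evidence score
# degrades gracefully instead. All rules and weights are invented.

def rule_based_is_bird(animal: dict) -> bool:
    # Every exception must be hand-coded, and the list never ends.
    if not animal.get("has_feathers"):
        return False
    if animal.get("is_penguin") or animal.get("is_ostrich"):
        return True  # flightless exceptions we remembered to encode...
    return animal.get("can_fly", False)  # ...but a wounded sparrow fails


def probabilistic_is_bird(animal: dict) -> float:
    # Naive weighted evidence: no single rule needs to be exception-free.
    weights = {"has_feathers": 0.6, "can_fly": 0.25, "lays_eggs": 0.15}
    return sum(w for key, w in weights.items() if animal.get(key))


wounded_sparrow = {"has_feathers": True, "can_fly": False, "lays_eggs": True}
print(rule_based_is_bird(wounded_sparrow))     # False: unlisted contingency
print(probabilistic_is_bird(wounded_sparrow))  # 0.75: evidence still strong
```

The rule base returns a hard “not a bird” the moment it meets a case outside its enumerated exceptions; the probabilistic score merely loses a little confidence.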

Hubert Dreyfus (What Computers Can’t Do [1972] and What Computers Still Can’t Do [1992])

\[ Dog(x) \implies Mammal(x) \]

Dreyfus argues that rules like this can never be as good as a human’s lived experience.

Andy Clark (1998): “Biological brains are first and foremost the control systems for biological bodies. Biological bodies move and act in rich real-world surroundings.” … “Good at Frisbee, bad at logic”

The embodied cognition approach claims that “cognition” and a “body” can’t really be considered separately: “cognition happens within a body”

Argument from Disability

“A machine could never do X” What do you think machines could never do?

Turing:

  • Be kind

  • Be resourceful

  • … beautiful

  • … friendly

  • have initiative

  • have a sense of humor

  • tell right from wrong

  • make mistakes

  • fall in love

  • enjoy strawberries and cream

  • make someone fall in love

  • learn from experience

  • use words properly

  • be the subject of its own thought

  • have as much diversity of behavior as a man

  • do something really new

Well… if you say so…

  • “Make mistakes”: check

  • Computers can be equipped with metareasoning, and can thus “be the subject of their own thought”

  • “… fall in love with it”: humans already fall in love with teddy bears and other toys

  • David Levy predicts that by 2050, humans will routinely fall in love with humanoid robots

  • Robots falling in love is common in science fiction (though sparsely studied academically). Funny story…

  • Computers certainly have discovered new things (astronomy, mathematics, chemistry, mineralogy, biology, computer science, and art!)

  • Sometimes AI is better, sometimes worse than humans.

  • They can never be exactly human

Mathematical Objection

Gödel and Turing both proved that certain mathematical questions are undecidable within the formal systems used to pose them.

In short:

It is possible to construct a “Gödel sentence” \(G(F)\) such that:

  • \(G(F)\) is a sentence of \(F\) , but cannot be proved within \(F\)

  • If \(F\) is consistent, then \(G(F)\) is true
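Stated a bit more formally, this is the standard shape of Gödel’s first incompleteness theorem:

\[ F \text{ is consistent and can express arithmetic} \;\Longrightarrow\; F \nvdash G(F) \;\text{and}\; F \nvdash \lnot G(F), \text{ yet } G(F) \text{ is true} \]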

Some academics find this a compelling reason to believe that AI/machines have an intrinsic limitation compared to humans.

i.e., machines cannot establish the truth of their own Gödel sentence, while humans can.

Cue the academic gangland:

Sir Roger Penrose advances this idea in “The Emperor’s New Mind”, arguing that the known laws of physics are insufficient to explain consciousness.

“…[he argues] humans are different because their brains operate by quantum gravity, a theory that makes multiple false predictions about brain physiology”

The authors cite three problems:

  • Humans have Gödel sentences too: “J. R. Lucas cannot consistently assert that this sentence is true” (it is true, yet Lucas cannot consistently assert it)

  • Gödel’s incompleteness theorem applies to formal mathematical systems, not to computers per se

    • Nothing and no one can prove something that is impossible to prove

    • The argument assumes “humans must be consistent”… are they?

  • Also, the incompleteness theorem was only proved for systems powerful enough to do mathematics, which includes Turing machines

    • Except Turing Machines != Computers

    • Turing machines are infinite; brains and computers are finite

    • “Humans can change their minds, well so can computers”

Measuring AI

The Turing Test is perhaps the most famous test for a thinking machine

  • Typed conversation between human and machine

  • For five minutes

  • With the program fooling the interrogator 30% of the time

This is much easier than you might think…

In 2014 “Eugene Goostman” fooled 33% of “untrained judges” in a Turing test

So far, no “trained judge” has been fooled

However… nobody really cares.

Can machines really think?

Do submarines swim?

Do planes fly?

Turing again: adopt the “polite convention” that everyone thinks (ignoring philosophical zombies)

The Chinese Room

John Searle rejects polite conversation and proposes a thought experiment

…they’re made out of meat (Bisson 1990)

Consciousness and Qualia

The line drawn through every debate on “Strong AI” is consciousness

Awareness of the world, yourself, and the subjective experience of living.

Qualia (“of what kind”?): do machines experience qualia?

What does 404 feel like?

Do your pets have consciousness?

Even crickets? But do they feel it?

Turns out, it’s difficult to define (and therefore prove and demonstrate)

Though there are those (e.g., the Templeton Foundation) who are running experiments to find out…

Turing again: “I do not wish to give the impression that I think there is no mystery about consciousness… But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.”

Humans can easily compare our own experiences with others… machines cannot easily do this.

Machines can share code… humans cannot easily do this…