Revamped Turing test expects computers to show imagination

In June, the developers of a Russian chatbot posing as a 13-year-old boy from Ukraine claimed it had passed the Turing test. Many people doubted the result's validity, since the testers used a sketchy methodology and the event was organized by a man fond of making wild claims, but either way it's clear we need a better way to determine whether an AI possesses human-level intelligence. Enter Lovelace 2.0, a test proposed by Georgia Tech associate professor Mark Riedl.

Here’s how Lovelace 2.0 works:

For the test, the artificial agent passes if it develops a creative artifact from a subset of artistic genres deemed to require human-level intelligence and the artifact meets certain creative constraints given by a human evaluator. Further, the human evaluator must determine that the object is a valid representative of the creative subset and that it meets the criteria. The created artifact need only meet these criteria; it does not need to have any aesthetic value. Finally, a human referee must determine that the combination of the subset and criteria is not an impossible standard.

Okay, so that official description is pretty hard to parse. Thankfully, Riedl's recently published paper on the subject gives us an easy sample test. One could, for instance, ask a computer to "create a story in which a boy falls in love with a girl, aliens abduct the boy, and the girl saves the world with the help of a talking cat." The story doesn't have to read like an instant classic, but to pass, it has to fulfill those conditions and convince a human judge that its tale of alien abduction and female-feline heroism was written by a person. That's just one possibility, though: testers could also ask the computer to create other types of artwork (painting, sculpture, etc.) while fulfilling a set of conditions. These conditions need to be outrageous or unique enough to prevent the computer from simply finding results to copy through Google. By comparison, a machine merely has to convince someone that it's talking to another person in order to pass the Turing test.
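If it helps to see the moving parts, here is a minimal sketch of the protocol described above as code. This is purely illustrative: every class and method name here is a hypothetical stand-in, not anything from Riedl's paper.

```python
# Hypothetical sketch of one Lovelace 2.0 round, as described above.
# The agent, evaluator, and referee objects are illustrative stand-ins.

from dataclasses import dataclass

@dataclass
class Challenge:
    genre: str              # creative subset, e.g. "short story"
    constraints: list[str]  # conditions the artifact must satisfy

def lovelace2_round(agent, evaluator, referee, challenge):
    """The agent passes this round if it produces an artifact that the
    human evaluator accepts as a valid member of the genre and as
    meeting every constraint."""
    # The referee first rules out impossible combinations of genre and criteria.
    if not referee.is_satisfiable(challenge):
        raise ValueError("referee rejected the challenge as impossible")
    artifact = agent.create(challenge)
    return (evaluator.is_valid_genre(artifact, challenge.genre)
            and all(evaluator.meets(artifact, c) for c in challenge.constraints))
```

Note that nothing in the pass condition asks whether the artifact is any good, only whether it is a valid instance of the genre that satisfies the stated constraints, which matches the "no aesthetic value required" clause above.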

Riedl’s idea stemmed from the original Lovelace test, created in 2001, which requires computers to conjure up a novel, painting or any original work of art. For a computer to pass, its creators must not be able to explain how the machine came up with its creation. History buffs might have already guessed that both were named after Ada Lovelace (above), the world’s first computer programmer, who once said that “computers originate nothing; they merely do that which we order them, via programs, to do.”

The associate professor decided to design the second Lovelace exam because he believes the original makes it hard to judge whether a machine has truly passed, since it doesn’t have measurable parameters. In the sample test for Lovelace 2.0, for instance, those parameters are the elements the machine’s story needs to include. Riedl will present Lovelace 2.0 at the Beyond the Turing Test workshop in Texas in January 2015, but you can already read his paper online if you want to know more.

[Image credit: Alfred Edward Chalon/Wikimedia]



Source: Mark O. Riedl (PDF), Georgia Tech

Source: Engadget

Author: Daily Tech Whip

