Tuesday, September 4, 2012

Reading #2:  Protecting Artificial Team-Mates:  More Seems Like Less

Introduction

When browsing the CHI papers list, I was intrigued to see a section called "Understanding gamers".  I am nothing if not a gamer (aside from programmer, musician, and fencer), so I chose the first paper in the list for this topic that seemed applicable to me.  That paper happened to be written by researchers at the National University of Singapore.

Tim Merritt was a PhD student whose research generally focuses on creating interfaces that enable better communication.  He now works as an assistant professor at the Aarhus School of Architecture in Denmark.

Kevin McGee is an associate professor in the Department of Communications and New Media.  His work generally revolves around end-user programming, artificial intelligence, and cognitive science.

Summary

This paper discusses how human players treat their teammates differently depending on whether those teammates are human or computer.  Previous studies indicated that cooperating with humans increased enjoyment because of the presence of social cues, while competing with humans increased immersion in the game due to a greater sense of social presence.  The paper cites a study that had a human player play a cooperative game with an AI.  The player reported frustration with the AI and unfairly blamed it for team mistakes.  Afterwards, the human was paired with the same AI but told that the teammate was a human this time.  The player reported feelings of better teamwork and felt that the teammate was taking more risks on behalf of the team.

[Image: Using public bathrooms is always a risk]

The study had a participant play a cooperative game with two AI teammates.  The player, however, was told that one of the teammates was a human and that the other was an AI.  The three players were supposed to work together to "capture" a gunner without getting shot.  The gunner was also an AI, and it was captured by being tagged in the back.  Each teammate could attract the gunner's attention by "yelling" at it, which increased the chance that the gunner would turn to face that character.  Each time the human player yelled, whichever teammate the gunner had been facing would record that the player saved them.  A sketch of this mechanic appears below.
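
To make the mechanic concrete, here is a minimal sketch in Python of how the yell-and-attention loop might work.  Everything specific here (the names, the turn probability, the save counter) is my own assumption; the paper does not publish its implementation.

    import random

    # Assumed chance that a yell makes the gunner turn toward the yeller.
    YELL_TURN_CHANCE = 0.5

    class Teammate:
        def __init__(self, name):
            self.name = name
            self.saves_credited = 0  # times a yell "saved" this teammate

    def resolve_yell(gunner_target, yeller, roster):
        # Whoever the gunner was facing records that the yeller saved them.
        if gunner_target is not yeller and gunner_target in roster:
            gunner_target.saves_credited += 1
        # The yell may pull the gunner's attention onto the yeller.
        if random.random() < YELL_TURN_CHANCE:
            return yeller        # gunner now faces the yeller
        return gunner_target     # gunner keeps its current target

Logging saves_credited per teammate is presumably how the objective yell counts discussed under Evaluation were gathered.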

Related Work

The topic of this paper relates directly to how humans interact with artificial intelligence, and there are plenty of papers that cover that subject:

Human-centered design in teammates for aviation:  https://www.aaai.org/Papers/FLAIRS/2003/Flairs03-011.pdf
Paper discussing how to make robots able to work with humans:  https://willowgarage.com/sites/default/files/TakayamaRSSworkshop.pdf
Paper written by the same authors expanding on the subject:  http://scholarbank.nus.sg/handle/10635/33276
Paper discussing how realistic AI opponents increase the enjoyability of a game:  http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.173.6725&rep=rep1&type=pdf
Paper written by one of the authors of this paper about AI that works more effectively as a teammate:  http://dl.acm.org/citation.cfm?id=1822365
Another paper by Merritt discussing AI teammates as a scapegoat:  http://dl.acm.org/citation.cfm?id=1958945
Discusses how people can feel a "social presence" based on whether they consider an avatar to be a person:  http://www.mitpressjournals.org/doi/abs/10.1162/105474603322761289
The original HCI paper, in which Alan Turing discusses whether a person can consider a machine to be human:  http://www.loebner.net/Prizef/TuringArticle.html
About whether computers can be given personalities to improve humans' perception of them:  http://dl.acm.org/citation.cfm?id=223538
About the uncanny valley: http://www.coli.uni-saarland.de/courses/agentinteraction/contents/papers/MacDorman06short.pdf

Each of these papers is directly related to this one because they all involve how a human perceives a computer personality and how that perception affects the person's behavior.

Evaluation

After the game, the researchers obtained qualitative, subjective information regarding each player's perceived interaction with the AI.  Each player was interviewed, and the players reported that they yelled more for the perceived human teammate than for the computer teammate.

[Image: Why, that's computer discrimination!]

Quantitative, objective data gathered during the game reveals the real twist: the reports are the OPPOSITE of the actual results.  Players yelled for the perceived computer teammate about 30% more than for the perceived human teammate.
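
For concreteness, the arithmetic behind "about 30% more" looks like the following, using made-up yell counts (the raw numbers are not given in this summary):

    # Hypothetical per-teammate yell counts from a game log.
    yells_for_perceived_human = 20
    yells_for_perceived_ai = 26

    increase = (yells_for_perceived_ai - yells_for_perceived_human) / yells_for_perceived_human
    print(f"{increase:.0%} more yells for the perceived AI teammate")  # -> 30%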

[Image: That's...reverse discrimination...]

One explanation comes from brain-imaging studies, which suggest that altruism toward humans feels more emotionally rewarding than altruism toward an AI, so acts of saving the perceived human were probably weighted more heavily in the players' memories.

Discussion

While intriguing, this study had little practical purpose aside from answering questions about how people subconsciously feel about computers.  The study is not particularly novel, but it was an interesting read nonetheless.
