Thursday, September 20, 2012

A Chapter by Chapter Opinion of "The Design of Everyday Things", By Donald Norman

As assigned to us


Chapter: 1

Without reading any other chapters, I say this: this first chapter is basically a lesson in how stupid people aren't; every mistake we make using technology seems to be justified by some stupid mistake on the part of the product's designer.  Apparently, the biggest travesty in modern design is the office telephone system.
However, there is useful knowledge to be gained from the chapter as well, such as the fact that a well-designed product has subtle cues for its use, called affordances, constraints, and mappings.  Since I'm just revealing my thoughts on the chapter, I'm not going to explain them (sorry).

Chapter: 2

As I read chapter 2 of this book (and part of the preface), I begin to realize that it's not just the first chapter that is dedicated to justifying people's ineptitude, but rather the whole book.  Once more, we look at the ways that designers mess things up, but with new reasons, examples, terms, and therefore, new quiz questions.  Norman suggests that people appear to take a perverse pride in technical incompetence, but I'm pretty quick to blame the designer first.

In summary, this chapter makes me feel like a jerk.

Chapter: 3

This chapter has the common theme of "you don't have to know very much to know a lot".  By this, I mean that much of our knowledge of most things is supplemented by cues in the world, or fabrications of our mind.  He talks about the travelling poets who were able to recite long plays from memory by knowing the important plot points and filling the rest in as they went along (which still sounds pretty damn hard to me).

Also, this is the chapter where he predicts the advent of smartphones.  You know what I'm talking about, because how could anyone read the lower paragraph on page 74 and not think of a smartphone?

Chapter: 4

This book is so dated, it's as funny as one of Charlie Chaplin's talkies.  I see references to VCRs, landline telephones, and slide projectors.  Anyway, this chapter seemed like a repeat of chapter one, but with extra door-hate and less phone-hate.  Just like chapter one, chapter four discusses affordances, constraints, and mappings, with an abundance of examples, as always.
He mentions the idea of microwaves that can scan codes from the packaging and cook the food accordingly.  Why is that not a thing already?

Chapter: 5

Oh look, a chapter on erors.  Such a concept is novel to me, but it came with plenty of anecdotes, which were more comical than those from the previous chapters.  He talks about the different kinds of errors that can manifest themselves in our day-to-day lives, and the probable causes of each.  This part was more enjoyable to me than the next.
And then it starts to talk about how we think and how we organize tasks.  Basically, it's trying to explain the human brain's OS.  But really, this section was intended to corroborate the statements about errors made earlier in the chapter.

Chapter: The Longest (Otherwise Known as 6)

A   chapter  about   
                            thechallenges  
              of      good     design   and 
                                                                                 usability.
I forget how long it takes products to finally arrive at a great design, such as the phone and the typewriter.  It took each device years of iterations out in the market to perfect.  However, I was more interested in the part about poorly designed design museums.  Isn't it ironic (don't you think)?  The fact that those museums weren't entirely about learning upset me somewhat, and damaged my faith in humanity.
I find it interesting that in the case study for faucets, he pretty much says automatic faucets would be the worst solution to the faucet problem (I don't care for them myself, either).  Yet, I see them in public bathrooms about as much as I see regular faucets.  
But even though I agree with Norman on the faucet issue, he then starts talking about featurism as a disease.  Dang, what?  I don't know how things were done back in the 80s, but if I can draw conclusions from Terminator and Back to the Future, then he was sorely mistaken about how things would turn out in 20 years.  Most products have a million features, but it's manageable because you can hide and ignore the features you don't care about.
And then we get to move on to computers, something that applies more directly to us than to anyone else.  He explicitly says that programmers should not be responsible for the computer's interaction with the user...

       Then why are we taking this class?

No, I think we need to learn how to minimize the list of problems that he describes on the next page.  Implementing ease of use so that it meshes well with the program body can best be done by the person(s) who implemented the body.

Chapter: Se7en

       This chapter is about designing so that the USER is at the center of things (give or take a few spaces).
Tips for how to do this have been mentioned throughout the book, so this chapter contains some repetition, but also several new steps to follow:  simplify tasks, make things visible, exploit constraints, design for error, and standardize when all else fails.  There's the book in one sentence; I just didn't realize it until 207 pages into it.
Once again, Norman talks about the future, and it makes me laugh.  He predicts the possibility of having unlimited information placed at the user's fingertips, but complains that it would be too difficult to find what you're looking for.

The Book as a Whole

It is not often that I read books of a serious nature cover to cover.  I am generally not interested in the subject matter enough to pursue reading it in my spare time.  Even a novel that I enjoy does not get read within the span of one week. 

However forced the reading may be, this book was fairly entertaining, yet educational.  I would definitely say that the way I view objects in the world has changed.  I will likely assume a more critical stance when using a new technology, searching for ways to improve it.  Actually, I sort of already do that, but now the book has given me some methods for going about it with some discipline.

I had plenty of laughs just reading any statement that referenced some point in time, because the perspective in the book is so outdated.  I mentioned the big ones in my chapter-by-chapter replay, but I did omit a few.  He predicts that the typewriter will soon replace pen and paper.  That ship has already sailed, and was replaced with a rocket ship.  Computers are the new typewriters are the new pen and paper.  Somewhere else, he discusses how impractical it is for us to be using the imperial system of measurement when the metric system is more efficient and logical (I KNOW, RIGHT?).  He guesses that we might adopt it within the next few decades.  A pretty optimistic guess, looking back, since we have made nearly no headway converting to the logical system.  Slightly later, he mentions microphone keyboards.  I was going to laugh at such a concept, but then I remembered that my phone contains such a feature, which I use frequently.  So some of his predictions are actually pretty good, and I won't legitimately fault him for any incorrect prediction.

In a more substantive vein, I felt that the book was longer than it needed to be.  Probably 25% of the content was examples.  I don't want to complain about the examples, though, because anyone reading the book purely for entertainment would appreciate them, since they were the most fun part to read.  I was just under a little too much time pressure to read the book for purely leisurely purposes.  Aside from the examples, I felt that there was a lot of repetition in the book.  Many concepts covered in chapters one and two were covered again in subsequent chapters, and reading through concepts I already understood was a laborious undertaking.  I would find myself blankly staring at the lines, then realize that I'd already read past the explanation of the concept I understood, and have to backtrack to the beginning of the new content.

On the whole, I think it was a very good and enlightening book.  I keep wishing that we had read one of his newer books, but I’ll have to take the word of past students that they are actually no good, or at least, not as good as The Design of Everyday Things.

Ten Terrible or Awesomely Designed Products

This feels like a Cracked.com article already

I have compiled a list of products/objects that I felt were either well designed or poorly designed, using the principles from the book as my gauge.  They are in no particular order, except that good and bad designs alternate.

1.  Childproof Medicine Bottle Lids (Good)

This is a great example of designing something to be intentionally difficult in order to prevent the wrong people from using it.  It is counterintuitive to push down on a lid to remove it; most children who have used lids before will probably try to pull up on the lid while twisting.

2.  Sunbeam Toaster Oven (Bad)
"I'm a crappy toaster oven; I burn the outside of your taquitos, but leave them frozen on the inside"

I received this toaster oven for free for helping out at a church garage sale some years ago, because I used my parents' toaster oven as much as the microwave.  The top dial is temperature.  The middle dial is a timer.  The bottom dial is an arbitrary auto-timer that lets you decide how dark your toast should be on a 1-7 scale.  It is activated by moving the temperature dial down to the "toast" setting, which then bypasses the timer as well.  The toast setting is started with the on/off switch at the bottom; if the temperature dial is set to a number instead, the oven stays on as long as the timer is greater than zero.  It took me months to become proficient at ignoring the toast setting altogether.

3.  Line 6 Guitar Amplifier
if you were to look closely, you would see that the "suck" dial is turned down, and the "rock" dial is turned up

Aside from being reasonably priced, this amplifier meets a lot of the criteria in the book.  It has good mappings: whichever of the 5 buttons at the top is pressed lights up to indicate the active mode (although you can also hear the difference).  More importantly, the line-in and line-out sockets are on opposite sides of the amplifier, so you generally will not mistake them for one another, although no harm is really done if you plug your guitar into the line out - it just won't make any noise.

4.  Certain showers

Pictured above is my shower, with its narrow body and overly simple plug mechanism.  But those things don't bother me, because I don't take baths.  Rather, if you look closely, you'll notice that the H is on the left and the C is on the right.  Counterintuitively, the shower is made hotter by rotating the control to the right - in the direction of the C.
I couldn't find the picture of a shower I used in Miami, but it was probably worse.  It was designed so that the curtain curved inward with the bath basin, which was placed inside a sort of "stage" in the bathroom.  The basin had a lip that prevented water from flowing back into the tub after it got onto the stage, and it looked very nice.  However, water coming from the shower head frequently ended up on the stage, and the longer the head stayed on, the more water would build up.  Eventually it would overflow the stage, which wasn't meant for holding water, and end up on the bathroom floor, which then flowed into the hallway of the hotel room.  That shouldn't be a thing in a 4-star hotel...

5.  Power Smith Power Drill

I think that many drills have the same properties as this one, but this is mine.  It is battery operated, and the battery slot is pretty clearly located; I'm pretty sure you can see it here, and you would correctly guess what you have to do to get it out of the slot.  In addition, there is a little flashlight at the end of the drill that lights up when you start to pull the trigger, but before the drill comes on (it also stays on while spinning).  This helps map the trigger function to something happening on the device.

6.  "Modern" Stairs
if this is the stairway to Heaven, God help us all.  But he ain't coming down those stairs to help you, they're terrible.

These stairs hurt my ankles to look at.  Each step you take attempts to throw off your balance, and send you back to the ground.  Talk about a harsh penalty for a slip.  But they probably won an award...

7.  Most Batteries

There isn't much to a battery.  The positive and negative ends are fairly well differentiated, and most people understand which end goes where in their electronics.  Perhaps there is some room for improvement, such as making the ends different shapes so that they can only fit in one orientation.  The other benefit of the design is that it is very difficult to shock yourself with one of these.  You cannot easily complete the circuit using your thumb and forefinger, since the resistance in your hand is too great.  The only feasible way is to put the battery in your mouth.
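For a rough sense of why your fingers are safe but your tongue isn't, here's a back-of-the-envelope Ohm's law check; the resistance values and the ~1 mA perception threshold are ballpark figures I'm assuming, not measurements.

```python
# Back-of-the-envelope Ohm's law check on why handling a battery doesn't shock you,
# but licking one (a 9 V, at least) gives a tingle.  All figures are rough assumptions.
perception_threshold = 0.001          # amps; ~1 mA is roughly where current becomes noticeable

cases = [
    ("AA cell across dry fingers", 1.5, 100_000),   # ~100 kOhm assumed for dry skin
    ("9 V battery on the tongue",  9.0,   7_000),   # wet tissue is far more conductive
]

for label, volts, ohms in cases:
    amps = volts / ohms                              # Ohm's law: I = V / R
    verdict = "noticeable" if amps > perception_threshold else "imperceptible"
    print(f"{label}: {amps * 1000:.2f} mA -> {verdict}")
```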

8.  Misleading doorbell panel


This doorbell has a sign underneath it labeled "BELL", with an arrow pointing to the left.  Those using it reported looking to the left of the panel for a doorbell before realizing that the arrow had nothing to do with the sign.
What was the arrow for then?  It pointed at the door.


9.  AutoCAD
I don't know why the guy has like 12411 buttons active on his screen though

AutoCAD is a program that is used to create drawings of 2D or 3D objects.  I rarely mess with the 3D side of it, but I do use it professionally, and I can say that it is a very good program, considering the number of features it has.  Each time you try to take an action, it will prompt you in the command window at the bottom to supply the correct arguments, one at a time, or none.
Maybe you notice that there is an excess of buttons on the screen, but those buttons are fully customizable, so that you can have as few buttons as you like.  In addition, each command can be typed if you prefer, and if you want to do something, but aren't sure if it's a feature, just try typing it out (that's how I found the area of an irregular spline).  

10.  SolidWorks

Like AutoCAD, SolidWorks is another drafting program for computers.  However, SolidWorks is more tailored to 3D applications.  To be fair, I won't complain about making anything 3D in this program; the 2D aspects are troubling enough.  To draw a shape, you must first sketch out roughly what you want it to look like, and then go back and specify the actual dimensions it should have.  This always throws off my flow, as I prefer to set the length of an object and then see a line of that length get drawn.  I spent a semester using this for a project before I started using AutoCAD consistently, and, while I was always able to complete my design, I was always frustrated after finishing.

Thursday, September 13, 2012


An Evaluation of the Chinese Room
A psychologist’s attempt to explain computers

In this article, John R. Searle of Berkeley's Department of Philosophy attempts to discourage the idea that a computer could ever “think”.  That is, he is arguing against the notion of strong AI, which I believe would do more for psychology than 100 years of human study (incidentally, it would probably require 100 years of human study to create a strong AI).

The test used to disprove strong AI is this:  You are locked in a room with a large batch of Chinese writing, which you cannot understand at all.  You are then given another batch of symbols with instructions on how the two batches correlate.  A third batch is then given to you, along with instructions to give back symbols from the first two batches based on symbols from the third batch.  Now say that you get so good at doing this that you can reply to any combination of symbols with the proper Chinese characters, so that no Chinese speaker in another room would be able to tell, just by asking you questions, that you don't speak a word of Chinese (a version of the Turing test).  But the fact remains that you do not understand Chinese at all.
Searle tries to prove his point by oversimplifying it to the point that anyone could see he was correct in his example.  He is practically describing an encryptor that converts Chinese characters into other Chinese characters, instead of English characters into other English characters (which would make no sense to us).  No one ever argued that an encryptor understands English, but that's essentially the position he's arguing against.
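To make the thought experiment concrete, here's a toy version of the room as I read it: a pure symbol-shuffling lookup table that produces plausible replies without any understanding whatsoever.  The "rulebook" entries below are invented placeholders, not anything from Searle's paper.

```python
# A toy "Chinese room": the program shuffles symbols it does not understand.
# The rulebook is an invented placeholder; real conversation would need an
# absurdly large set of rules, which is exactly Searle's setup.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",        # "Do you speak Chinese?" -> "Of course."
}

def room(incoming_symbols):
    """Return whatever the rulebook dictates; no meaning is involved anywhere."""
    return RULEBOOK.get(incoming_symbols, "请再说一遍。")   # "Please say that again."

print(room("你好吗？"))          # looks like a sensible answer...
print(room("你会说中文吗？"))    # ...and even claims to speak Chinese, yet nothing here understands a word
```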

He addresses the “Systems” reply, first by calling it embarrassing, but more irritating is the claim he makes at the end of this reply.  He quotes a man from 1979 who says machines as simple as a thermostat can have beliefs.  Now, I do not believe at all that a thermostat has a belief in the literal sense, but Searle uses the absurdity of the statement in his argument.  He actually says “One gets the impression that people in AI who write this sort of thing think they can get away with it because they don’t really take it seriously, and they don't think anyone else will either. I propose for a moment at least, to take it seriously.”  Why??  It’s just poor debate practice.

My favorite argument is the brain simulator reply: “what if we write a program to simulate the synapses and neural firing of a human brain?”  This sounds legit to me, but Searle breaks it down into a man operating valves and pipes in such a way as to mimic neural firings, based on instructions he's been given to output Chinese answers.  The pipes and the man still have no understanding of Chinese.  Now I just want to say that synapse firing in the brain is all chemistry and physics, which are the instructions of the universe.  But…that means Searle proves another valuable point:

Humans Can’t Understand


I’m pretty much out of space, but I still want to list the biggest issue with arguing about AI at all.  He didn’t define his terms.  At no point does he say what it even means to understand something, and he never lists what qualifies as a belief.
In addition, 60% of the way through the paper, he basically recants everything he said for the first 10 pages.  He acknowledges that an exact artificial replica of a human would be able to think, and that a program could think, because minds are programs, but a program within a computer could not think.  He disagrees with himself, so I really can’t be swayed by his argument.

Tuesday, September 11, 2012

Reading #6:  Not Doing but Thinking: the Role of Challenge in the Gaming Experience

do I write about anything besides games?


Introduction:

At last, I present the last of the required blog posts...for assignment one.  It was written by four researchers at University College London and the University of York.  They are:

Anna L Cox, UCL Interaction Centre, University College London.  http://www.uclic.ucl.ac.uk/people/a.cox/people.htm
Paul Cairns, Dept of CS, University of York.  http://www-users.cs.york.ac.uk/~pcairns/

These two people appear to be undergrad students, because they have no personal webpages:
Pari Shah, Psychology & Language Sciences, University College London.
Michael Carroll, Dept of CS, University of York.

Summary

Gaming is arguably one of the most successful applications of computing.  Immersion is one of the key aspects of video game design.  It is directly responsible for making the user enjoy the game by allowing them to become cognitively unaware of their surroundings.  This paper revolved around how to maximize that immersion, so as to improve the gaming experience (GX).

An example of a tower defense game, used in the study

Related Work

Each of these papers highlights a study that explores how best to use its chosen application, much like this one does.

Optimal Experience of Web Activities.  This discusses the best way for the user to experience the internet via web browser:  http://www.sciencedirect.com/science/article/pii/S0747563299000382

Video Games and the Future of Learning.  I actually worked for a game developer who created games intended to help people learn.  This explores the possibility of teaching students via game, so that we can better hold their interest:  http://gise.rice.edu/documents/FutureOfLearning.pdf

From Content to Context:  Video games as designed experience.  This paper combines the idea of teaching with video games with increasing the teaching effectiveness by increasing immersion:  http://edr.sagepub.com/content/35/8/19.short

Understanding Online Gaming Addiction and Treatment Issues for Adolescents.  This paper deals with the issue of too much immersion in a game, whenever it starts to pull reality out from under players: http://www.tandfonline.com/doi/abs/10.1080/01926180902942191

Social Software: Fun and Games, or Business Tools? Discusses the potential of web software in the new age of "web 2.0":  http://jis.sagepub.com/content/34/4/591.short
 
A Grounded Investigation of Game Immersion:  Studies three varying degrees of game development, much like this paper does:  http://dl.acm.org/citation.cfm?id=986048

Behavior, Realism, and Immersion in Games.  This paper sets out to clearly define the term immersion, and what affects it: http://dl.acm.org/citation.cfm?id=1056894

Measuring and defining the experience of immersion in games: Just like the previous paper, this one also attempts to better define immersion, since gamers and reviewers seem to have different definitions:  http://www.sciencedirect.com/science/article/pii/S1071581908000499

I wish I were a warrior: The role of wishful identification in the effects of violent video games on aggression in adolescent boys. This study tested the levels of aggression in young poorly educated boys.  The results said that the more aggressive boys related to video game characters, especially when they were well immersed in the game:  http://psycnet.apa.org/journals/dev/43/4/1038/

Evaluation

There were three experiments performed on subjects.  

The first tested how physical effort affects immersion.  This was done using a tower defense game.  The researchers used an objective quantitative measure for this, which simply counted how many "creeps" were destroyed before the player was completely overwhelmed.  There was also a qualitative survey at the end of the test, asking how the players felt during each game.

The second test measured the effect of time constraint.  This was done by having players play a Bejeweled game, half playing in a timed mode, the other half playing with unlimited time.  This time, the quantitative value was how many matches the player could make before their timer ran out, or if they were playing an untimed mode, how many sets before there were no possible moves left.  Once again, they were required to take a qualitative survey after the games.

The third and final test measured the cognitive difficulty of the game.  This was accomplished by having the subjects play games of Tetris at varying difficulties.  The objective quantitative value was how many rows the player could complete before the screen overflowed.  The subjective quantitative value was a personal rating of how good at the game the player felt they were, plus a numerical rating of how difficult they thought the game was.

Overall, the researchers found that the key to immersion is to have it so that the game's difficulty level best matches the player's skill.

Discussion

I know firsthand the power of immersion over a player.  As far as game design goes, this is a very important study to do.  However, due to its importance, these studies have been done many times before, so this one is not particularly novel.

Thursday, September 6, 2012

Reading #5:  A twin strike of short gaming articles

Part 1:  Experimental Investigation of Human Adaptation to Change in Agent’s Strategy through a Competitive Two-Player Game

Introduction

This paper was written by researchers in Japan.  After reading the article, I found out that the primary researchers' webpages are in Japanese, so I can't really make anything of them...

Seiji Yamada, National Institute of Informatics:  http://www.nii.ac.jp/en/faculty/digital_content/yamada_seiji/
Akira Ito, Gifu University, has an English-language page, so it is legible.  However, he seems to be an undergrad, so there isn't really anything to write:  http://www.otago.ac.nz/profiles/otago000603.html

Summary

There is no denying that all humans adapt when competing with another individual.  This paper was simply a study to determine how human adaptation was affected when competing with humans vs. competing with computers.  

Related Work

I will do five pieces of related work here because a.) I'm doing two articles, and b.) the paper states that not very much research has been done in this field:

A paper on humans and game strategy: http://www.jstor.org/stable/10.2307/2098880
On changing strategy in a game:  http://www.sciencedirect.com/science/article/pii/S0899825685710305
Paper highlighting the importance of intelligent AI in games:  http://www.aaai.org/ojs/index.php/aimagazine/article/viewArticle/1558
About changing strategy in games where not all information is known (such as the game in this study):  http://www.jstor.org/stable/10.2307/3689430
Changing AI strategy in games that are more complex than the one in this study:  http://www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA385122

These articles largely deal with strategy in games much like this one.  This paper is different in that it compares strategies that people form vs humans or machines.

Evaluation

The researchers formed two hypotheses:
     1.  An adaptation phase exists when a human is confronted with a change in the opponent’s strategy.
     2.  Adaptation is faster when a human is competing with a robot than with a human.
These hypotheses were tested by having the subjects play a penny-matching game against a computer.  Half the players were told it was a computer, and the other half were told they were playing against a human.  In the penny game, each side chooses heads or tails on a penny.  If the pennies match, player A receives both pennies; if they differ, player B gets them both.  Ten games were played, with six rounds each.  In the 6th round of each game, the payout was 20x.  The computer used the exact same strategy against all players, with a permanent change in strategy in the 4th round.  The figure below shows the percentage of wins in the sixth round.


The researchers' hypotheses were confirmed, with one additional finding: in the 7th game, those who thought they were playing against a human appeared to expect another change in strategy, when there wasn't one.
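To make the setup concrete, here is a minimal sketch of the matching-pennies scoring as I understand it from the paper.  The role assignment (who plays the matcher) and the opponent's behavior are assumptions on my part; the study's actual computer opponent followed a fixed strategy that changed partway through, which is not reproduced here.

```python
import random

# Minimal sketch of the matching-pennies setup described above.
GAMES, ROUNDS = 10, 6

def round_payoff(subject_pick, opponent_pick, stake=1):
    """Net pennies for the subject, assuming the subject plays the 'matcher' role."""
    return stake if subject_pick == opponent_pick else -stake

total = 0
for game in range(GAMES):
    for rnd in range(1, ROUNDS + 1):
        stake = 20 if rnd == ROUNDS else 1        # the 6th round of each game pays out 20x
        subject = random.choice("HT")             # stand-in for the human's choice
        opponent = random.choice("HT")            # stand-in for the computer's (changing) strategy
        total += round_payoff(subject, opponent, stake)

print(f"Subject's net pennies over {GAMES} games: {total}")
```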

Discussion

This study helps AI designers to understand people's perception of computer opponents.
The paper acts like this is a novel study, but this seems significant enough to have been done before.  By significant, I mean important enough for game creators to look into.

Part Two: Through the Azerothian Looking Glass: 

Mapping In-Game Preferences to Real World Demographics

A paper about World of Warcraft


Introduction

This paper wasn't so much about CHI, I think, as it was just a census of World of Warcraft.  Regardless, useful information can be gained from such studies.  The authors are:

Nick Yee, Palo Alto Research Center, has recently moved to Ubisoft, and is now studying gamer behavior.  Appropriate, given the nature of this paper.  http://www.nickyee.com/
Nicolas Ducheneaut, Palo Alto Research Center, has also moved to Ubisoft, and works with Nick Yee doing the same research.  http://www.linkedin.com/in/ducheneaut
Han-Tai Shiao, University of Minnesota, Twin Cities, is a graduate student working for the department of electrical and computer engineering.  http://www.umn.edu/lookup?SET_INSTITUTION=UMNTC&UID=shiao003
Les Nelson, Palo Alto Research Center, has worked on a number of inventions with Xerox.  His work has led to 2 products, 20 patents, and several publications in HCI.  http://www.parc.com/about/people/136/les-nelson.html

Summary

Massively multiplayer online role playing games (MMORPGs) are a huge source of data (among other things).  The purpose of this paper was to conduct a large survey/census, which would allow conclusions to be drawn about the playing habits of certain demographics.  Ultimately, it reveals what makes WoW such a popular game.



Related Work

These articles are similar to this one because they deal with people in MMOs, and in one case, with using MMO demographic information in games (EXACTLY like this article).

A study to see why people enjoy playing online games more than others:  http://online.liebertpub.com/doi/abs/10.1089/cpb.2006.9.772
A paper written by Nicolas Ducheneaut that seeks to explain the social behaviors of people who play MMORPGs. http://www.springerlink.com/content/n00107m734617388/
A report on the potential of using MMORPGs for demographic purposes:  http://www.mitpressjournals.org/doi/abs/10.1162/pres.15.3.309
Paper about the type of people who generally like video games.  This is a much more general study than the one conducted in this paper:  http://www.amsciepub.com/doi/abs/10.2466/pr0.1984.55.1.271

Evaluation

The study called for 1,000 volunteers to provide demographic information and to allow the researchers to collect character information for each person.  Over the course of six months, the researchers ran a script that checked whether each character was online.  This information, combined with the given demographic data and publicly available character data, allowed the researchers to draw many conclusions.  They found that age, gender, work schedule, and marital status all had impacts on people's playing habits, and that different ages and genders preferred different activities.  All of the collected information was objective and quantitative, but the conclusions inferred from it are qualitative.
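The presence-logging part of that is simple enough to sketch.  The character_is_online() function below is a hypothetical stand-in, since the post doesn't say which public data source the researchers' script actually queried.

```python
import csv
import random
import time

# Sketch of the kind of presence-polling script described above.  The lookup is a
# hypothetical stand-in for whatever public character data the researchers queried.
def character_is_online(character_name):
    return random.random() < 0.3        # placeholder; a real script would query live data

def log_presence(characters, outfile="presence_log.csv", interval_seconds=15 * 60, polls=4):
    """Append one row per character per poll: (unix timestamp, character, online?)."""
    with open(outfile, "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(polls):                      # the real study ran for six months
            now = int(time.time())
            for name in characters:
                writer.writerow([now, name, character_is_online(name)])
            f.flush()
            time.sleep(interval_seconds)

log_presence(["Leeroy", "Jaina"], interval_seconds=1, polls=2)   # tiny demo run with made-up names
```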

Discussion

The only plausible use I could foresee for this information is in creating a better game that appeals to a wider demographic.  This isn't particularly useful information for anything else, as far as I can tell.
Honestly, I'm not sure that this paper belonged at a CHI conference, whether it was interesting or not.

Reading #4:  Measuring Users’ Experience of Agency in their own Actions

A study of how it feels to be a button


Introduction

This is my first blog post about a paper where research was performed just for the sake of knowledge.  Every other time, there was some sort of product that the researchers were designing and testing.  In this case, the study sought to explain how responsible humans felt for their actions in various scenarios, and then how a computer could help without removing that feeling of responsibility.  The researchers were:

David Coyle, Cambridge University, is a Marie Curie post-doctoral research fellow whose research tends to be a blend of computer science and neuroscience.  http://www.neuroscience.cam.ac.uk/directory/profile.php?david.coyle
James Moore, University of London, works in the department of psychology as a lecturer.  His research involves the study of agency, and disorders that remove that feeling of agency.  http://www.gold.ac.uk/psychology/staff/academicstaff/moore/
Per Ola Kristensson, University of St Andrews, is a lecturer in HCI in the department of computer science.  His research intersects HCI and AI.  http://sachi.cs.st-andrews.ac.uk/people/faculty/per-ola-kristensson/
Paul C. Fletcher, BCNI, is a researcher who primarily deals with understanding psychosis.  I see no indication that he would have even been interested in this study...http://www.neuroscience.cam.ac.uk/directory/profile.php?pcf22
Alan F. Blackwell, University of Cambridge, is a researcher who works in the Computer Laboratory at Cambridge, and whose website is fairly difficult to navigate.  http://www.cl.cam.ac.uk/~afb21/

Summary

Agency is the feeling of responsibility a person has for their actions.  Agency is important because it controls how a person feels about the existence of their own free will.  The study sought to test how that sense of agency is affected in two cases:
1.  When the user treats their skin as the user interface, instead of a touch screen.
2.  When a computer program helps the user to interact with objects with a mouse on a normal computer screen

Related Work

Paper about how changes in the workstation a person is used to hampers their sense of agency in their work:  http://www.jstor.org/stable/10.2307/248811
A study on how humans feel about and interact with machines:  http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.109.8074&rep=rep1&type=pdf
Kind of the basis of this research; can a program be made to effectively reduce the workload for a user:  http://portal.surrey.ac.uk/pls/portal/docs/PAGE/COMPUTING/PEOPLE/N.ANTONOPOULOS/TEACHING/CSM13%20SOFTWARE%20AGENTS/CSM13%20COURSEWORK%202006/PAPER2.PDF
The next step after this paper; how can we effectively utilize this knowledge to make an interface agent work well (book):  http://books.google.com/books?hl=en&lr=&id=TNl1Eb4LgmsC&oi=fnd&pg=PA111&dq=Autonomous+Interface+Agents&ots=YtfyGPGa4s&sig=MrHIuPbVxYIk-gh76_b_V_mIB3o#v=onepage&q=Autonomous%20Interface%20Agents&f=false
Discusses how a sense of "presence" affects immersion and agency within a program: http://www.mitpressjournals.org/doi/abs/10.1162/105474601300343603
Explains the concept of agency:  http://www.sciencedirect.com/science/article/pii/S1053810003000527
More on agency and its importance:  http://www.sciencedirect.com/science/article/pii/S0010027708002382
How agency is influenced by external sources:  http://www.sciencedirect.com/science/article/pii/S1053810009001020
About the concept of will:  http://www.sciencedirect.com/science/article/pii/S1364661303000020
How computers have affected the agency of humans:  http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.201.5998&rep=rep1&type=pdf

Evaluation

The first test in this paper had half the users tap on their skin and then estimate how long it took for a light to turn on after tapping the "skin button" on their wrist.  This estimated action-to-effect interval is the "binding time".  The other half pressed a physical button and made a similar estimate.  The actual time was measured electronically, so there was an objective value to compare against.  The purpose of this subjective quantitative data is to gauge a user's sense of agency: the shorter the estimated interval relative to the actual one, the stronger the sense of agency (this compression is the "intentional binding" effect).  Generally, users felt greater agency when tapping on their wrists.

The second test had users click a button on a computer screen and again estimate the binding time.  The four groups had varying amounts of computer help (the button was given "gravity"), from none at all to ridiculous amounts.  The results showed that users felt greater agency with a small amount of computer assistance, but that sense of agency dropped sharply as the assistance increased.
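The post doesn't spell out the assistance algorithm, but "gravity" presumably means something like nudging the cursor toward the target.  Here is a minimal sketch of that idea, with the assist level as a made-up parameter from 0 (no help) to 1 (ridiculous help); this is my own illustration, not the algorithm from the paper.

```python
# A minimal sketch of cursor "gravity": each frame the cursor is nudged toward the
# target by some fraction of the remaining distance.
def assisted_cursor(cursor, target, user_delta, assist=0.2):
    """
    cursor, target, user_delta: (x, y) tuples.
    assist: 0.0 = no help, 1.0 = the cursor is dragged all the way to the target
    regardless of what the user does.
    """
    # Apply the user's own movement first.
    x = cursor[0] + user_delta[0]
    y = cursor[1] + user_delta[1]
    # Then pull the cursor toward the target by a fraction of the remaining distance.
    x += assist * (target[0] - x)
    y += assist * (target[1] - y)
    return (x, y)

# Example: the user moves right, and gravity pulls the cursor the rest of the way up and to the right.
print(assisted_cursor(cursor=(0, 0), target=(100, 100), user_delta=(10, 0), assist=0.5))
# -> (55.0, 50.0)
```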

Discussion

The stated effects of this study are to provide designers with new ways of refining interaction techniques and interfaces so that users experience an increased sense of control over their actions.
In short, the study sought to have user interfaces help users interact in such a way that they do not realize they are being helped, so that they still feel completely in control.  This seems familiar...
Oh, right

Tuesday, September 4, 2012

Reading #3:  Proton: Multitouch Gestures as Regular Expressions

The world needs more multi touch interfaces...


Introduction

This paper discusses the interesting concept of breaking touch-based interactions into regular expressions so that they can be easily checked for conflicting gestures.  It was written by researchers at the University of California, Berkeley in conjunction with researchers at Pixar Animation Studios.  They are:

Kenrick Kin is a PhD candidate at Berkeley who works primarily with multi-touch interfaces.  His webpage is found at http://www.cs.berkeley.edu/~kenrick/
Björn Hartmann is a co-director of the Berkeley Institute of Design and the Swarm Lab.  His research involves multi-user computing, and the systems research that is necessary to make it possible.  http://www.cs.berkeley.edu/~bjoern/
Tony DeRose is a senior scientist and lead of the Pixar Research Group.  Aside from this project, his recent research involves making math and science more inspiring for middle and high school students.  http://graphics.pixar.com/people/derose/index.html
Maneesh Agrawala is Kenrick's advisor for this project and an associate professor in electrical engineering and computer science at Berkeley.  His focus is on how cognitive design principles can be used to improve the effectiveness of visual displays.  http://vis.berkeley.edu/~maneesh/

Summary

Generally, when developers create multi-touch applications, they design and implement custom gestures from scratch.  Like mouse-based frameworks, multi-touch frameworks expose low-level events and have applications register callbacks for them.  According to this paper, multi-touch recognition code ends up spread across the source of these applications, and new gestures can also conflict with previously defined ones.  The Proton framework attempts to consolidate the gesture recognition code into one place and break the defined gestures into regular expressions, ensuring that similar gestures do not conflict.

Gesture comparison works much like a finite-state machine: with each motion (regex symbol) that is performed, the number of possible gestures (expressions) diminishes.
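To get a feel for what that looks like, here is a toy sketch of prefix-matching touch-event streams against gesture regexes.  The event symbols and both gesture definitions are made up for illustration, and I'm leaning on the third-party "regex" package (pip install regex) for its partial-match support; Proton's real matcher is its own FSM implementation, not this.

```python
# A toy illustration of matching touch-event streams against gesture regular
# expressions -- not Proton's actual implementation.
import regex

# Touch events are encoded as symbols: D = touch down, M = move, U = up, with a
# digit identifying the touch point.  Both gesture definitions are invented here.
GESTURES = {
    "one-finger drag": r"D1 (M1 )+U1 ",
    "two-finger pinch": r"D1 (M1 )*D2 (M1 |M2 )+U1 (M2 )*U2 ",
}

def candidates(stream):
    """Return the gestures whose regex matches, or could still match, the stream so far."""
    alive = []
    for name, pattern in GESTURES.items():
        if regex.fullmatch(pattern, stream, partial=True) is not None:
            alive.append(name)
    return alive

# Feed events in one at a time and watch the candidate set shrink, just like
# states being ruled out in a finite-state machine.
stream = ""
for event in ["D1 ", "M1 ", "M1 ", "D2 ", "M2 ", "U1 ", "U2 "]:
    stream += event
    print(f"{stream!r:38} -> {candidates(stream)}")
```

Once the second finger comes down, the drag gesture drops out of the candidate set, and by the final touch-up only the pinch remains.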

Related Work

I'm guessing that a large amount of research has been performed on this subject, given that my paper cites 44 references.  However, the researchers DID miss a few, so here they are:

GeForMTjs:  A javascript library that represents gestures in an abstract way: http://www.springerlink.com/content/c416331055107117/fulltext.pdf
An extension of the Proton Framework, the Proton++ Framework:  http://kneecap.cs.berkeley.edu/papers/protonPlusPlus/protonPlusPlus-UIST2012.pdf

These researchers essentially left no paper unreferenced.

Evaluation

At the moment, the gesture matcher can only handle one gesture being performed at a time.  Handling more would require an FSM that grows exponentially with each gesture defined.
The paper did not report any user feedback, despite the fact that the authors wrote three example applications for testing.  The team plans to perform a more extensive study later.
The evaluation that they did perform was all objective.  The quantitative data reports that the algorithm required 22 ms to match against a set of 36 gestures on a 2.2 GHz dual-core Intel processor.  The qualitative data indicates that the program could potentially be sped up by optimizing it, and by computing the FSM before the gesture is made.

Discussion

This work definitely has practical use in the world, and as far as I know, it is a novel concept.  Even though I never thoroughly learned about using regular expressions, this paper was comprehensible to me.  I could foresee multi touch application developers making use of this framework, after it has been optimized.

Reading #2:  Protecting Artificial Team-Mates:  More Seems Like Less

Introduction

When browsing the CHI papers list, I was intrigued to see a section called "Understanding gamers".  I am nothing if not a gamer (aside from programmer, musician, and fencer), so I chose the first paper in that section that seemed applicable to me.  It was written by researchers at the National University of Singapore.

Tim Merritt was a PhD student, whose research generally consists of trying to create interfaces that enable better communication.  He now works at Aarhus School of Architecture in Denmark as an assistant professor.

Kevin McGee is an associate professor in the department of communications and new media.  His work generally revolves around end-user programming, artificial intelligence, and cognitive science.

Summary

This paper discusses how human players treat their teammates differently depending on whether they are human or computer.  Previous studies indicated that cooperating with humans increased enjoyment because of the presence of social cues, while competing with humans caused increased immersion in the game due to a greater sense of social presence.  The paper cites a study that had a human player play a cooperative game with an AI.  The player reported frustration with the AI, and unfairly blamed it for team mistakes.  Afterwards, the human was paired with the same AI, but told that they were playing with a human teammate this time.  The player reported feelings of better teamwork, and felt that the teammate was taking more risks on behalf of the team.

Using public bathrooms is always a risk
The study had a person work with two AI teammates in a cooperative game.  The player, however, was told that one of the teammates was a human, and that the other was an AI.  The three players were supposed to work together to "capture" a gunner without getting shot.  The gunner was also an AI, and it was captured by being tagged in the back.  Each teammate could attract the attention of the gunner by "yelling" at it, which increased the chance that it would turn to face that character.  Each time the human player yelled, whichever teammate the gunner was currently facing would record that the player had saved them.

Related Work

The related research deals with how humans interact with artificial intelligence, and there are plenty of papers that cover that topic:

Human- centered design in teammates for aviation:  https://www.aaai.org/Papers/FLAIRS/2003/Flairs03-011.pdf
Paper discussing how to make robots able to work with humans:  https://willowgarage.com/sites/default/files/TakayamaRSSworkshop.pdf
Paper written by the same authors expanding on the subject:  http://scholarbank.nus.sg/handle/10635/33276
Paper discussing how realistic AI opponents increase enjoyability of a game:  http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.173.6725&rep=rep1&type=pdf
Paper written by one of the authors of this paper about AI that more effectively works as a teammate:  http://dl.acm.org/citation.cfm?id=1822365
Another paper by Merritt discussing AI teammates as a scapegoat:  http://dl.acm.org/citation.cfm?id=1958945
Discusses how people can feel a "social presence" based on whether or not they consider an avatar to be a person or not:  http://www.mitpressjournals.org/doi/abs/10.1162/105474603322761289
The original HCI paper, in which Alan Turing discusses whether a person can consider a machine to be human:  http://www.loebner.net/Prizef/TuringArticle.html
About whether or not computers can be given personalities to help humans' perception of them:  http://delivery.acm.org/10.1145/230000/223538/p228-nass.pdf?ip=128.194.131.80&acc=ACTIVE%20SERVICE&CFID=112256229&CFTOKEN=80613711&__acm__=1346965032_28c0d2e177550cce4c53cd353531fbdf
About the uncanny valley: http://www.coli.uni-saarland.de/courses/agentinteraction/contents/papers/MacDorman06short.pdf

Each of these papers is directly related to this one because they all involve how a human perceives a computer personality, and how that perception affects the person's behavior.

Evaluation

After the game, the researchers obtained qualitative subjective information regarding the player's perceived interaction with the AI.  Each player was interviewed and reported that they yelled more for the human player than for the computer player.

Why, that's computer discrimination!
Quantitative objective data gathered during the game reveals the real twist:  this is the OPPOSITE of the actual results; players yelled for the perceived computer teammate about 30% more than for the perceived human teammate.

That's...reverse discrimination...
Explanations for this have to do with brain-imaging studies, which suggest that altruism toward humans feels more emotionally fulfilling than altruism toward an AI, so those moments were probably weighted more heavily in the players' memories.

Discussion

While interesting, this study had little practical purpose, aside from answering questions about how people subconsciously felt about computers.  The study is not particularly novel, but it was an interesting read nonetheless.

Sunday, September 2, 2012

Paper Reading #1 (as in the one I have to present):  ZeroN:  Mid-Air Tangible Interaction Enabled by Computer Controlled Magnetic Levitation


Intro

The paper that I consented to be assigned is called "ZeroN:  Mid-Air Tangible Interaction Enabled by Computer Controlled Magnetic Levitation".  But you probably knew that from the blog heading.  It was written by researchers in the MIT Media Laboratory and the MIT Center for Bits and Atoms.  The researchers are as follows:

Jinha Lee, MIT Media Laboratory, is a Ph.D. student and research assistant in the Tangible Media Group.  His other works are a see-through 3D desktop and "Beyond," a collapsible input device.

Rehmi Post, MIT  Center for Bits and Atoms, is a visiting scientist in the Physics and Media group, directed by Neil Gershenfeld.  Some of his other work includes multi-touch displays and wearable interfaces.

Hiroshi Ishii, MIT Media Laboratory, is Associate Director at the Media Lab, and founded the Tangible Media Group.  His other work includes TeamWorkStation and ClearBoard.

Summary

A tangible interface attempts to render virtual objects in the physical world.  The problem with existing tangible interfaces is that they are often restricted to two-dimensional space, due to gravity, or that their lateral range of motion is quite limited.  The goal of the ZeroN project was to create a new 3D tangible interface that represents objects in seemingly anti-gravitational motion.
This was achieved by using a single powerful solenoid electromagnet suspended several inches above the table.  ZeroN is the name of the magnetic ball that this solenoid levitates beneath it.  The solenoid holds ZeroN in place by pulsing rapid magnetic fields, which pull ZeroN up while active and let it fall while inactive.  The whole system is moved laterally by x- and z-axis motors, based on user input, to ensure that the solenoid remains over ZeroN.  The location of ZeroN is tracked by two PlayStation Eye cameras.  To ensure that ZeroN doesn't drift off on its own, the user's hands are detected by an Xbox Kinect sensor.
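Since the post describes the levitation as pulsed on/off magnetic pulls, here is a crude simulated sketch of that bang-bang idea; every constant is invented, and the real ZeroN controller (with its actual sensing and tuning) is certainly more sophisticated than this.

```python
# A crude simulation of the on/off ("bang-bang") levitation idea described above.
# This is NOT the actual ZeroN controller; all numbers are made up.
GRAVITY = -9.8        # m/s^2, pulls the ball down
COIL_PULL = 25.0      # m/s^2 of upward acceleration while the coil is energized (assumed)
TARGET = 0.05         # desired hover height above the table, in metres
DT = 0.001            # 1 ms control loop

def simulate(steps=5000):
    height, velocity = 0.03, 0.0                  # start the ball a bit too low
    for step in range(steps):
        coil_on = height < TARGET                 # pull up when too low, let it fall when too high
        accel = GRAVITY + (COIL_PULL if coil_on else 0.0)
        velocity += accel * DT
        height += velocity * DT
        if step % 500 == 0:
            print(f"t={step * DT:.2f}s  height={height * 100:.2f} cm  coil={'ON' if coil_on else 'off'}")

simulate()
```

A pure on/off controller like this just oscillates around the target height; a real system would need damping and much finer sensing.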


There is added functionality for the system that allows images to be projected onto ZeroN, such as a planet's surface or a camera.  There is also playback capability, where the user must hold ZeroN for 2.5 seconds, and then move it along the path they want ZeroN to follow.  Upon release, ZeroN will retreat to its original position, and then follow the path that the user created.
ZeroN introduces a novel interaction language: (a) users place ZeroN in the air; (b) the computer actuates ZeroN and users intervene with its movement; (c) a digital item is attached to ZeroN; (d) ZeroN is translated and rotated in the air; (e) a long hold is used to record and play back.

Related Work (not referenced in the paper)

The concept of tangible user interfaces must not be thoroughly explored, because nearly all of the papers listed below reference at least one paper that has also been referenced in the article I am writing about.
These four projects had a practical application, and that was usually to help industries engineer their product, or the process to create a product:
Augmented Urban Planning Workbench:  http://web.mit.edu/ebj/www/ISMAR02paperFinal.pdf
InfrActables:  http://dl.acm.org/citation.cfm?id=1130237.1130444
Tangible Factory Planning: http://www.danielguse.de/tangibletable.php
Tangible User Interface For Supporting Disaster Education:  http://dl.acm.org/citation.cfm?id=1280877

These other projects had a much more novel focus, be it drawing, or music production.  Actually, a good number of tangible user interfaces were aimed at dynamic music production.
MightyTrace:  http://dl.acm.org/citation.cfm?id=1357091&dl=ACM&coll=DL&CFID=151312215&CFTOKEN=26582949
instant city:  http://www.instantcity.ch/d/projekt/einleitung.htm

Evaluation

The researchers for this project conducted their evaluation both by objectively quantifying the limitations of the system and by having users test it and give subjective feedback.
Their quantified evaluation described the whole system, such as limitations on how far ZeroN can hang below the solenoid, how fast ZeroN can move before it is dropped, and the maximum angle it can be displaced before the magnetic flux is too weak and the ball is dropped.  These limitations are not due to power restrictions for the magnet, but rather heat restrictions.  Too much current through the solenoid makes it start to melt.  In addition, the system only supports one ball at the moment.  These figures were simply gained from objective measurements.
The researchers obtained qualitative information for several different applications that used ZeroN, such as an architectural modeling program, a program to illustrate Kepler's law, and a trippy magnetic ping-pong game.  The most pressing issue was that there was a noticeable latency between the user setting ZeroN's position and the magnet refreshing to keep it there.  This caused confusion.

One-sided Discussion

From what I've read, this is the first time anyone has tried to create a magnetically suspended user interface.  The potential is there, but at the moment, the system needs to be expanded to include multiple ZeroN balls, and the variability in ZeroN's position could be reduced to give it a smoother feel.