MobileHCI and designing for uncertainty
I attended MobileHCI over the weekend and through this afternoon. It was great to catch up with a bunch of folks that I haven’t seen in a while, and the banquet at the Exploratorium was a blast (I love playing with their exhibits). But I found the content a bit underwhelming. While there were papers I liked, overall I had two issues with much of the work.
One, the research community seems to be confusing statistical significance with significance. Yes, it’s splendid that your new interaction technique is better than existing techniques with p < 0.05. But I hate to break it to the community: the fat finger / occlusion / offscreen information / etc. “problem” isn’t exactly a showstopper for mobile device users, and even if it were, a 2-second improvement in task performance won’t Change The World. Microlearning on mobile devices? I can get behind that as tackling a significant problem. Interacting on the back of a phone to avoid your finger getting in the way of what you’re doing? Not so much. But it’s easier to measure impact when you’re tackling small problems, so there was a lot of “small ball”.
Two, there was a lot of work proposing to replace supposedly problematic deterministic techniques with probabilistic techniques (e.g., recognizing position, gestures, orientation, location, etc.). In theory that may sound great: we’ll go beyond old, deterministic 2D techniques like tapping the screen and instead unlock the power of the sensors contained in mobile devices. And when they work the proposed new hotness techniques are better than the old and busted existing techniques. But it’s that caveat that’s the problem: because the new hotness techniques are based on noisy sensors or probabilistic recognition algorithms, they don’t always work. Multiple papers admitted that their proposed approach was either no better than existing approaches because of noisy data or actually fared worse. Other papers substituted deterministic sensors for non-deterministic sensors to try to avoid the problem altogether, but if your solution requires instrumenting the space rather than the phone then it’s not really a mobile solution.
I fully believe that we’re in the midst of a transition from deterministic interaction techniques and algorithms to more probabilistic, non-deterministic approaches. But rather than pretending that non-determinism doesn’t exist, we need to tackle it head on and think about how well our designs will work when they’re wrong multiple times a day. As a simple example, machine learning researchers are often happy with 85% accuracy rates, but from the user’s perspective that means a system is wrong 15% of the time — roughly 1 time out of 7. If it’s a system users employ often, that adds up to a lot of errors. We need more thinking about how to design user experiences that embrace uncertainty and still deliver value to users.
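To make the point concrete, here’s a back-of-the-envelope sketch of how a “good” accuracy rate feels from the user’s side. The accuracy and usage figures are illustrative assumptions on my part (85% accuracy, 50 invocations a day for a frequently used technique), not numbers from any of the papers:

```python
def expected_errors(accuracy: float, uses_per_day: int) -> float:
    """Expected number of misrecognitions per day."""
    return (1.0 - accuracy) * uses_per_day

def p_at_least_one_error(accuracy: float, uses_per_day: int) -> float:
    """Probability of at least one misrecognition in a day,
    assuming each recognition attempt is independent."""
    return 1.0 - accuracy ** uses_per_day

acc = 0.85   # the accuracy ML researchers are "often happy with"
uses = 50    # assumed: a frequently used technique, ~50 invocations/day

print(f"Expected errors per day: {expected_errors(acc, uses):.1f}")
print(f"Chance of >= 1 error per day: {p_at_least_one_error(acc, uses):.4f}")
```

At those assumed numbers, users hit several errors every single day, and a completely error-free day is vanishingly unlikely — which is exactly why the user experience has to be designed around the errors rather than around the happy path.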