
Our definition of augmentation is too narrow

I’m pretty down on the current, through-the-lens version of augmented reality. The need to set up a new spatial map each time limits sustained interactions, and the most prominent use cases aren’t actually that common. Interior decorating! Measuring things! Leaving random notes floating around the environment!

But part of the issue is that people are settling for a very narrow definition of augmented reality. We already have AR technologies that work great. Google Maps is a great example of an app that augments your current reality. It’ll provide you with information about what’s around you, and if you want to go somewhere else it’ll help you get there. That’s a really useful augmentation. But it’s no longer novel, so we take it for granted and don’t regard it as that neat.

Through-the-lens augmentation is what people typically think of when they hear AR. That requires (in various incarnations) identifying objects in front of you, understanding the spatial arrangement of those objects (plus possibly the broader space), and determining how to render content so it appears to blend in with that space. Neat when it works, but there are a host of technical challenges that haven’t really been solved. Spatial understanding sort of works in certain situations for relatively simple environments. Object recognition sort of works for some objects, particularly visually distinctive ones, ideally with some text on them. Think books and movie posters. Otherwise it’s kind of lousy. Identifying that something is a chair isn’t necessarily that useful. What kind of chair is it? An Eames chair? An Aeron chair? Before you can augment an object, you need to know more than just its general type.

I’m actually all for augmenting users’ realities. I just think we should be aiming to build more general augmentations rather than getting too caught up in flashy through-the-lens demos. Useful augmentations, not just neat augmentations.

So how’s that AR transformation going?

Before the public release of ARKit with iOS 11, there was lots of breathless speculation about how AR on our phones was going to fundamentally transform how we interacted with information. Never mind that most of the public demos centered around interior decorating, measuring things, and games. Surely, people insisted, there were other amazing use cases that would catch on or, perhaps, there was a large, unmet need for better interior decorating support.

Fast forward a few months after ARKit’s release, and how often do you hear people mention ARKit now? If you’re an iOS user, when was the last time you used an AR app? The week iOS 11 was released?

I regard much of the buzz around AR as a fundamental failure to distinguish between neat and useful. Yeah, AR is neat. No, there just isn’t that much need for interior decorating support. And in fact, unless you’re actively engaged in motorcycle design or architecture (two common AR-for-work examples), most of the things you do are probably 2D tasks, not 3D tasks, and so are unlikely to benefit from accurate spatial perception and blending real and virtual 3D content. But hey, AR is neat.

I don’t miss finals

My daughter started high school this year, and she had her first series of finals this month. It took me back to high school and college and reminded me how much I don’t miss final exams. Oh, I was always good at them. But looking back, they were (and are) a pretty lousy way of measuring mastery of a subject. I aced finals in classes where I’d now struggle to remember any of the material. It was a great day in grad school when I realized I’d never have to take another final again.

How not to implement activity encouragement

I’ve mentioned elsewhere that I split my time between the iOS and Android ecosystems to keep current on both. That includes watches: I wear an Apple Watch (Nike+ Series 2) when carrying my iPhone, and a Gear S3 when carrying my Note. When running I prefer to wear my Apple Watch: it’s lighter, and I don’t mind sweating over it or its band (although I prefer the Gear S3 when I want my activity tracked automatically rather than manually).

In the latest version of watchOS, Apple added activity encouragements. At the start of the day, it’ll encourage you to keep closing your rings, or, if you’ve fallen behind your goals for a day or two, raise your game a bit. Normally I don’t mind, although the encouragements have zero impact on my actual activity (most of my exercise comes from running and biking to/from work).

What does drive me crazy, however, is when the watch tells me to raise my game because I had no activity yesterday, when in fact I simply wasn’t wearing it. There’s a big difference between activity data showing no activity and a total lack of activity data. And yet the watchOS developers were too lazy (too rushed? too indifferent?) to bother distinguishing between the two. Regardless of the reason, it turns a fairly harmless feature (encouraging you to keep fit) into something that just makes the watch look dumb. The watch really has no idea what I did on a day with no activity data; I could have run a marathon for all it knows. Encouraging me to be more active when it has no idea what I did just makes it (and Apple) look dumb.

So do yourself a favor: if you’re building an app that encourages users to engage in some activity, make sure you differentiate between an absence of that activity and an absence of data about that activity. There’s a crucial difference.
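To make the distinction concrete, here’s a minimal sketch of one way to model it. The types and messages are hypothetical (not any real HealthKit or watchOS API): the point is simply that “no recorded data” gets its own state instead of collapsing into zero, and encouragement is only generated when data actually exists.

```swift
// Hypothetical model of a day's activity: either a measured value
// (possibly zero) or no data at all. Illustrative names only, not a
// real HealthKit/watchOS API.
enum DailyActivity {
    case measured(activeMinutes: Int)  // device was worn and recorded data
    case noData                        // device wasn't worn; we know nothing

    // Encouragement is only meaningful when we actually have data.
    var encouragement: String? {
        switch self {
        case .measured(let minutes) where minutes == 0:
            return "No activity yesterday? Time to raise your game!"
        case .measured(let minutes) where minutes < 30:
            return "You fell a bit short yesterday. Keep closing those rings!"
        case .measured:
            return "Great work yesterday. Keep it up!"
        case .noData:
            return nil  // No data: the user may have run a marathon. Stay quiet.
        }
    }
}

// Storing yesterday as a plain Int would silently default a missing day
// to 0 and nag users who simply weren't wearing the watch. With an
// explicit noData case, the nag never fires:
let yesterday: DailyActivity = .noData
if let message = yesterday.encouragement {
    print(message)  // not reached when there's no data
}
```

The same idea works on any platform: use an optional or an explicit “unknown” state rather than letting missing data masquerade as zero.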

I occasionally miss cold weather

I’ve lived in the Bay Area for over 10 years now. Overall the weather here is great. I can bike to work every day between roughly May and October and not worry about getting rained on. Even in the winter, the “rainy season”, it only really rains occasionally and I can still bike to work most days. And I can keep running year round, nearly always in shorts, although in winter I often have to break out the long sleeves for evening runs.

But I do occasionally miss cold weather. I partly blame all the clothing catalogs that show up in the mail near the holidays. They’re full of sweaters and fleeces that look seriously warm and cozy. Except they’re designed for people who live in places where 30°F is the high. Here we do occasionally get close to freezing at night, but we’re usually right back up in the sixties during the day (it’s 67°F as I write this).

I occasionally miss snow as well. When it’s falling and/or freshly fallen, snow is beautiful. Of course, after it’s thawed a bit and refrozen, then thawed and refrozen again, etc.: not quite so much. And when it picks up sand and dirt from the roads, it loses much of its aesthetic appeal. Besides, we can always drive to Tahoe or other places in the Sierras if we really want snow.

But I do occasionally miss cold weather.

Taking notes with the Tab S3 vs. iPad + Apple Pencil

Before leaving Samsung, I used my employee discount one last time (in combination with a Thanksgiving sale) to get a Tab S3 to replace my old Tab S. I intended to experiment with using it to take notes and sketch out ideas at work to see how it felt compared to using an Apple Pencil with an iPad.

I’ve been using the Tab S3 for a bit over a week at work now, and so far I really like it. The Apple Pencil feels a bit better in your hand (it’s slightly heftier, and I prefer the rounded Pencil to the squarish S3 stylus), but the S3 stylus feels better when writing. It’s hard to describe exactly, but the stylus tip is slightly softer, so you get just a bit more friction than with the Pencil. It feels more like actually writing, while the Pencil feels more like sliding hard plastic on glass.

I also like that the stylus is passively sensed, so you don’t have to worry about charging it (and let’s face it, the Pencil’s charging solution just looks goofy). And despite being passively sensed, I haven’t noticed any difference in input latency with the stylus versus the Pencil.

From a software standpoint, I’m not thrilled with either first-party solution. I like that Samsung Notes lets you use the button on the stylus to quickly erase, but I’m not thrilled with its approach to organizing notes (basically you get folders, where each note-taking session is a file). Apple’s Notes is similarly limited organizationally, and since the Pencil has no button you get no quick erase toggle.

I experimented with Google’s Keep, but it really doesn’t support stylus input that well. Notes can have drawings, but each drawing is a separate full-page document, so it gets really awkward if you’re taking lots of notes.

For now I’ve settled on using OneNote. You get notebooks, sections, and pages, and each page can grow to be as big as you want. The only thing I don’t like about it is that there seems to be no way to assign the stylus button to erase; you have to manually toggle between inking and erasing. So far the improved organization beats the slightly more difficult erasing.

So far I’ve completely switched my note-taking to the S3; we’ll see if that trend continues or if I eventually get tired of having to remember to keep it sufficiently charged.

Thankful for my time at Samsung

Friday the 17th was my last day at Samsung Research, and in the spirit of giving thanks I thought I’d mention a few things I’m thankful for from my time there.

First, I’m thankful for the opportunity for more direct hands-on work. I joined Samsung from IBM Research because I wanted to get closer to the product side: despite the name “Samsung Research”, most of the organization is focused on advanced product development rather than the publication-focused academic research that typically springs to mind for a research organization. During my time there, first with the UX Innovations Lab and then with the Think Tank Team, I got the chance to design and build multiple new user experiences, products, and services (although sadly I can’t talk about many of them).

Second, I’m thankful for the great people I got to work with. In academic research you typically work with just other researchers and the occasional developer, but at Samsung I got to work with people from all sorts of backgrounds: designers (visual, interaction, industrial), developers, engineers (electrical and mechanical), an architect, a physicist, and more. It was a lot of fun having all of those different perspectives and skills to bring to bear on projects.

Third, I’m thankful for the opportunity to learn new skills. I got a lot of experience building prototypes in Android, and I even had the chance to work more on web services (both the front-end interface and the back-end server). I had the opportunity to take Berkeley’s Engineering Leadership Professional Program (although to be honest I liked IBM’s Micro MBA course better). I also improved as a project lead and manager, particularly in how to lead a multidisciplinary team.

Fourth, I’m thankful for the opportunity to experience a different culture. While we always joked that IBM’s Almaden Research Center felt like an isolated outpost of IBM, that’s nothing compared to working in a small US subsidiary of a large Korean company. It was interesting to see the different approaches to and attitudes about work.

I wish my former colleagues all the best, and I look forward to seeing TTT’s hand in future Samsung offerings. As for me, I joined Google this past week, where I’ll start finding new things to be thankful for.