Our definition of augmentation is too narrow
I’m pretty down on the current, through-the-lens version of augmented reality. The need to set up a new spatial map each time limits sustained interactions, and most prominent use cases aren’t actually that common. Interior decorating! Measuring things! Leaving random notes floating around the environment!
But part of the issue is that people are settling for a very narrow definition of augmented reality. We already have AR technologies that work great. Google Maps, for one, is a great example of an app that augments your current reality. It’ll provide you with information about what’s around you, and if you want to go somewhere else it’ll help you get there. That’s a really useful augmentation. But it’s no longer novel, so we take it for granted and don’t regard it as that neat.
Through-the-lens augmentation is what people typically think of when they hear AR. That requires (in various incarnations) identifying objects in front of you, understanding the spatial arrangement of those objects (plus possibly the broader space), and determining how to render content to appear to blend in with that space. Neat when it works, but there are a host of technical challenges that haven’t really been solved. Spatial understanding sort of works in certain situations for relatively simple environments. Object recognition sort of works for some objects, particularly visually distinctive ones, ideally with some text on them. Think books and movie posters. Otherwise it’s kind of lousy. Identifying that something is a chair isn’t necessarily that useful. What kind of chair is it? An Eames chair? An Aeron chair? Before you can augment an object, you need to know more than just its general type.
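To make the recognition gap concrete, here’s a toy sketch of that pipeline in Python. Everything in it is made up for illustration (the stub detector, its canned results, the label fields) — it’s not any real AR SDK, just a way to show why a bare coarse label leaves you with nothing useful to render:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DetectedObject:
    coarse_label: str          # e.g. "chair" — what recognition usually gives you
    fine_label: Optional[str]  # e.g. "Eames Lounge Chair" — what augmentation needs
    position: tuple            # where it sits in the reconstructed space

def detect_objects(frame) -> list[DetectedObject]:
    # Stand-in for an object recognizer: it does best on visually
    # distinctive items with text on them (books, posters), and often
    # can't get past the general type for everything else.
    return [
        DetectedObject("book", "The Design of Everyday Things", (0.2, 0.1, 1.5)),
        DetectedObject("chair", None, (1.0, 0.0, 2.0)),
    ]

def augmentations_for(objects: list[DetectedObject]) -> list[str]:
    # Only objects identified beyond their general type can be
    # meaningfully augmented; "chair" alone gives us nothing to say.
    return [
        f"Show info card for {obj.fine_label}"
        for obj in objects
        if obj.fine_label is not None
    ]

frame = object()  # placeholder for a camera frame
print(augmentations_for(detect_objects(frame)))
```

The chair gets detected but never augmented — the recognizer knows *that* it’s a chair, not *which* chair, so the pipeline has nothing worth overlaying.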
I’m actually all for augmenting users’ realities. I just think we should be aiming to build more general augmentations rather than getting too caught up in flashy through-the-lens demos. Useful augmentations, not just neat augmentations.