How not to implement activity encouragement

I’ve mentioned elsewhere that I split my time between the iOS and Android ecosystems to keep current on both. That includes watches: I wear an Apple Watch (Nike+ Series 2) when carrying my iPhone, and a Gear S3 when carrying my Note. When running I prefer to wear my Apple Watch: it’s lighter, and I don’t mind sweating over it or its band (although I prefer the Gear S3 when I want my activity tracked automatically rather than manually).

In the latest version of watchOS, Apple added activity encouragements. At the start of the day, it’ll encourage you to keep closing your rings, or, if you’ve fallen behind your goals for a day or two, raise your game a bit. Normally I don’t mind, although the encouragements have zero impact on my actual activity (most of my exercise comes from running and biking to/from work).

What does drive me crazy, however, is when the watch tells me to raise my game because I had no activity yesterday, when in fact I simply wasn’t wearing the watch. There’s a big difference between activity data showing no activity and a total lack of activity data. And yet the watchOS developers were too lazy (too rushed? too indifferent?) to bother distinguishing between the two. Whatever the reason, it turns a fairly harmless feature (encouraging you to keep fit) into something that makes the watch look dumb. The watch has no idea what I did on a day with no activity data; I could have run a marathon for all it knows. Encouraging me to be more active when it has no idea what I actually did just makes it (and Apple) look foolish.

So do yourself a favor: if you’re building an app that encourages users to engage in some activity, make sure you differentiate between an absence of that activity and an absence of data about that activity. There’s a crucial difference.
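The distinction is easy to sketch in code. Here’s a minimal illustration (in Python; the function name, message strings, and 30-minute threshold are all mine, not any real health API):

```python
from typing import Optional

def encouragement(activity_minutes: Optional[int], goal_minutes: int = 30) -> Optional[str]:
    """Return an encouragement message, or None if we should stay quiet.

    activity_minutes is None when there's no data at all (the watch wasn't
    worn), as opposed to 0, which means the watch was worn and recorded
    no activity. The two cases deserve different treatment.
    """
    if activity_minutes is None:
        # No data: we have no idea what the user actually did, so say nothing.
        return None
    if activity_minutes < goal_minutes:
        return "Yesterday was a bit slow. Let's close those rings today!"
    return "Great job yesterday. Keep it up!"
```

The key is modeling “no data” as a distinct value (`None`) rather than letting it collapse into zero somewhere in the pipeline.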

I occasionally miss cold weather

I’ve lived in the Bay Area for over 10 years now. Overall the weather here is great. I can bike to work every day between roughly May and October and not worry about getting rained on. Even in the winter, the “rainy season”, it only really rains occasionally and I can still bike to work most days. And I can keep running year round, nearly always in shorts, although in winter I often have to break out the long sleeves for evening runs.

But I do occasionally miss cold weather. I partly blame all the clothing catalogs that show up in the mail near the holidays. They’re full of sweaters and fleeces that look seriously warm and cozy. Except they’re designed for people who live in places where 30 is the high. Here we do occasionally get close to freezing at night, but we’re usually right back up in the sixties during the day (it’s 67 as I write this).

I occasionally miss snow as well. When it’s falling and/or freshly fallen, snow is beautiful. Of course, after it’s thawed a bit and refrozen, then thawed and refrozen again: not quite so much. And once it picks up sand and dirt from the roads, it loses much of its aesthetic appeal. Besides, we can always drive to Tahoe or elsewhere in the Sierras if we really want snow.

But I do occasionally miss cold weather.

Taking notes with the Tab S3 vs. iPad + Apple Pencil

Before leaving Samsung, I used my employee discount one last time (in combination with a Thanksgiving sale) to get a Tab S3 to replace my old Tab S. I intended to experiment with using it to take notes and sketch out ideas at work to see how it felt compared to using an Apple Pencil with an iPad.

I’ve been using the Tab S3 for a bit over a week at work now, and so far I really like it. The Apple Pencil does feel a bit better in your hand (it’s slightly heftier, and I prefer the rounded Pencil to the squarish S3 stylus), but the S3 stylus feels better when writing. It’s hard to describe exactly, but the stylus tip is slightly softer, so you get just a bit more friction than you get with the Pencil. It feels more like actually writing, while the Pencil feels more like sliding hard plastic on glass.

I also like that the stylus is passively sensed, so you don’t have to worry about charging it (and let’s face it, the Pencil’s charging solution just looks goofy). And despite being passively sensed, I haven’t noticed any difference in input latency with the stylus versus the Pencil.

From a software standpoint, I’m not thrilled with either first-party solution. I like that Samsung Notes lets you use the button on the stylus to quickly erase, but I’m not thrilled with its approach to organizing notes (basically you get folders, with each note-taking session stored as a file). Apple’s Notes is similarly limited organizationally, and since the Pencil has no button you get no quick erase toggle.

I experimented with Google’s Keep, but it really doesn’t support stylus input that well. Notes can have drawings, but each drawing is a separate full page document, so it gets really awkward if you’re taking lots of notes.

For now I’ve settled on using OneNote. You get notebooks, sections, and pages, and each page can grow as big as you want. The only thing I don’t like about it is that there seems to be no way to assign the stylus button to erase; you have to manually toggle between inking and erasing. So far the improved organization beats the slightly more difficult erasing.

So far I’ve completely switched my note-taking to the S3; we’ll see if that trend continues or if I eventually get tired of having to make sure I remember to keep it sufficiently charged.

Thankful for my time at Samsung

Friday the 17th was my last day at Samsung Research, and in the spirit of giving thanks I thought I’d mention a few things I’m thankful for from my time there.

First, I’m thankful for the opportunity for more direct hands-on work. I joined Samsung from IBM Research because I wanted to get closer to the product side: despite calling itself “Samsung Research”, most of the organization is focused on advanced product development rather than the publication-focused academic research that typically springs to mind for a research organization. During my time there, first with the UX Innovations Lab and then with the Think Tank Team, I got the chance to design and build multiple new user experiences, products, and services (although sadly I can’t talk about many of them).

Second, I’m thankful for the great people I got to work with. In academic research you typically work with just other researchers and the occasional developer, but at Samsung I got to work with people from all sorts of backgrounds: designers (visual, interaction, industrial), developers, engineers (electrical and mechanical), an architect, a physicist, and more. It was a lot of fun having all of those different perspectives and skills to bring to bear on projects.

Third, I’m thankful for the opportunity to learn new skills. I got a lot of experience building prototypes in Android, and I even had the chance to work more on web services (both the front-end interface and the back-end server). I had the opportunity to take Berkeley’s Engineering Leadership Professional Program (although to be honest I liked IBM’s Micro MBA course better). I improved as a project lead and manager as well, particularly how to lead a multidisciplinary team.

Fourth, I’m thankful for the opportunity to experience a different culture. While we always joked that IBM’s Almaden Research Center felt like an isolated outpost of IBM, that’s nothing compared to working in a small US subsidiary of a large Korean company. It was interesting to see the different approaches to and attitudes about work.

I wish my former colleagues all the best, and I look forward to seeing TTT’s hand in future Samsung offerings. As for me, I joined Google this past week, where I’ll start finding new things to be thankful for.

Building an RSS reader

I admit it: I still use RSS readers. On iOS I’m a big fan of Reeder, but on Android I’m still using Press. Sadly Press has been abandoned for years now. It’s still functional, but it’s increasingly dated. I occasionally look at alternatives, but I haven’t found one I’ve really liked. Yes, I know Feedly is popular, but personally its design doesn’t appeal to me. So I’ve started to write my own RSS reader in my free time, using FeedWrangler as the backend service (an obvious choice, since it’s the service I use for Reeder).

We’ll see how it goes; my free time isn’t quite what it used to be. But it’s been fun so far, and I’m using it as a learning experience to try out technologies I haven’t had time to experiment with at work. Currently I’m using it to try out Android’s new Architecture Components; they should make managing the feed data and coordinating data and UI updates a lot easier.
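The core idea behind that data/UI coordination is an observable value the UI subscribes to, so the data layer never touches the UI directly. Here’s a minimal sketch of that pattern (illustrative Python, not the actual androidx LiveData API; class and method names are mine):

```python
class LiveValue:
    """A tiny stand-in for the observable-value idea behind Android's
    LiveData: observers register once and get notified on every update."""

    def __init__(self, value=None):
        self._value = value
        self._observers = []

    def observe(self, callback):
        # Register a callback and immediately deliver the current value,
        # mirroring how LiveData hands the latest value to new observers.
        self._observers.append(callback)
        if self._value is not None:
            callback(self._value)

    def set_value(self, value):
        # Update the value and notify every registered observer.
        self._value = value
        for callback in self._observers:
            callback(value)


# Usage: the UI layer observes the feed list; the data layer just sets values.
feed_items = LiveValue()
rendered = []
feed_items.observe(lambda items: rendered.append(list(items)))
feed_items.set_value(["Post 1", "Post 2"])
```

The real Architecture Components add lifecycle awareness on top of this (observers are only notified while their activity or fragment is active), which is the part that makes the coordination “a lot easier” on Android.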

How not to motivate your voice assistant

Speaking of this year’s Samsung Developer Conference, Samsung’s head of software and services InJong Rhee once again tried to motivate Bixby by touting it as a replacement for touch interfaces. This is not a new line; when Samsung launched Bixby it did so by claiming that voice was a significant improvement over hard-to-use touch interfaces.

I have two issues with this claim:

  1. Claiming that touch interfaces must be hard to use because people only use 15% of the functionality of their phone daily doesn’t pass the giggle test. People only use 15% of their phone’s functionality daily because that’s all they need. Give people Facebook, messages, email, and a browser and they’re good most days. That doesn’t mean all the other apps and capabilities are hard to use; it means people only need them in more specialized circumstances. I only rarely use tethering, but that doesn’t mean it’s useless or difficult to use; it just means I don’t need it that often (and when I do need it, I’m really glad I have it available). There’s lots of research showing the advantages of direct manipulation interfaces. Ignoring it makes you seem like an idiot.
  2. Proposing to replace touch interfaces, which tend to be pretty good at revealing their functionality, with a voice interface that gives you no clue what it can do is even worse. If you really believe that people don’t use the full functionality of their phones because they can’t figure out how to do so, why is giving them a voice interface that doesn’t reveal its capabilities an improvement? News flash: it’s not. Voice interfaces are worse at communicating their capabilities, not better.

And it gets worse. Voice interfaces can be more efficient than touch interfaces, but they need to be designed differently. You don’t design a voice interface to be equivalent to a touch interface (with the notable exception of designing for accessibility); if the interfaces support equivalent interactions, a well-designed touch interface will be faster. Instead, you design a voice interface to provide high-level shortcuts. Think about it: would you rather tell your phone “send a message to my wife that I’m on my way”, or “open messages, start a new message to my wife, enter I’m on my way, send the message”? The former is a high-level shortcut; the latter is the touch equivalent. But Samsung seems to think that voice assistants need to be touch-equivalent (or “complete”, as the company terms it).

I’m hoping Samsung makes improvements with Bixby 2.0, but first they need to establish a better motivating premise.

How dare our echo chambers give us what we want?

There’s been a lot of handwringing recently about how social networks might have influenced the outcome of the 2016 election. And don’t get me wrong: the companies involved should have to be clearer about who’s sponsoring posts and advertisements (particularly when the sponsors are foreign powers). But much of the outrage strikes me as scapegoating. Facebook is not an objective news source, people; it’s a platform for people to share things. Don’t blame Facebook for not vetting whether a particular post is accurate, blame yourselves. If you want real news, subscribe to a quality publication like the New York Times or the Washington Post. You know, a news organization. If you can’t be bothered to get your news from a reputable news source, then don’t be surprised if you don’t get reputable news.