
I occasionally miss cold weather

I’ve lived in the Bay Area for over 10 years now. Overall the weather here is great. I can bike to work every day between roughly May and October and not worry about getting rained on. Even in the winter, the “rainy season”, it only really rains occasionally and I can still bike to work most days. And I can keep running year round, nearly always in shorts, although in winter I often have to break out the long sleeves for evening runs.

But I do occasionally miss cold weather. I partly blame all the clothing catalogs that show up in the mail near the holidays. They’re full of sweaters and fleeces that look seriously warm and cozy. Except they’re designed for people who live in places where 30 is the high. Here we do occasionally get close to freezing at night, but we’re usually right back up in the sixties during the day (it’s 67 as I write this).

I occasionally miss snow as well. When it’s falling or freshly fallen, snow is beautiful. Of course, after it’s thawed a bit and then refrozen, and then thawed a bit and refrozen again, and so on: not quite so much. And when it picks up sand and dirt from the roads, it loses much of its aesthetic appeal. And from here we can always drive to Tahoe or other places in the Sierras if we really want snow.

But I do occasionally miss cold weather.

Taking notes with the Tab S3 vs. iPad + Apple Pencil

Before leaving Samsung, I used my employee discount one last time (in combination with a Thanksgiving sale) to get a Tab S3 to replace my old Tab S. I intended to experiment with using it to take notes and sketch out ideas at work to see how it felt compared to using an Apple Pencil with an iPad.

I’ve been using the Tab S3 for a bit over a week at work now, and so far I really like it. The Apple Pencil does feel a bit better in your hand (it’s slightly heftier, and I prefer the rounded Pencil to the squarish S3 stylus), but the S3 stylus feels better when writing. It’s hard to describe exactly, but the stylus tip is slightly softer, so you get just a bit more friction than you get with the Pencil. It feels more like actually writing, while the Pencil feels more like sliding hard plastic on glass.

I also like that the stylus is passively sensed, so you don’t have to worry about charging it (and let’s face it, the Pencil’s charging solution just looks goofy). And despite the passive sensing, I haven’t noticed any difference in input latency between the stylus and the Pencil.

From a software standpoint, I’m not thrilled with either first-party solution. I like that Samsung Notes lets you use the button on the stylus to quickly erase, but I’m not thrilled with its approach to organizing notes (basically you get folders, with each note-taking session stored as a file). Apple’s Notes is similarly limited organizationally, and since the Pencil has no button you get no quick erase toggle.

I experimented with Google’s Keep, but it really doesn’t support stylus input that well. Notes can have drawings, but each drawing is a separate full page document, so it gets really awkward if you’re taking lots of notes.

For now I’ve settled on using OneNote. You get notebooks, sections, and pages, and each page can grow to be as big as you want. The only thing I don’t like about it is that there seems to be no way to assign the stylus button to erase; you have to manually toggle between inking and erasing. So far the improved organization beats the slightly more difficult erasing.

So far I’ve completely switched my note-taking to the S3; we’ll see if that trend continues or if I eventually get tired of having to make sure I remember to keep it sufficiently charged.

Thankful for my time at Samsung

Friday the 17th was my last day at Samsung Research, and in the spirit of giving thanks I thought I’d mention a few things I’m thankful for from my time there.

First, I’m thankful for the opportunity for more direct hands-on work. I joined Samsung from IBM Research because I wanted to get closer to the product side: despite calling itself “Samsung Research”, most of the organization is focused on advanced product development rather than the publication-focused academic research that typically springs to mind for a research organization. During my time there, first with the UX Innovations Lab and then with the Think Tank Team, I got the chance to design and build multiple new user experiences, products, and services (although sadly I can’t talk about many of them).

Second, I’m thankful for the great people I got to work with. In academic research you typically work with just other researchers and the occasional developer, but at Samsung I got to work with people from all sorts of backgrounds: designers (visual, interaction, industrial), developers, engineers (electrical and mechanical), an architect, a physicist, and more. It was a lot of fun having all of those different perspectives and skills to bring to bear on projects.

Third, I’m thankful for the opportunity to learn new skills. I got a lot of experience building prototypes in Android, and I even had the chance to work more on web services (both the front-end interface and the back-end server). I had the opportunity to take Berkeley’s Engineering Leadership Professional Program (although to be honest I liked IBM’s Micro MBA course better). I also improved as a project lead and manager, particularly in leading a multidisciplinary team.

Fourth, I’m thankful for the opportunity to experience a different culture. While we always joked that IBM’s Almaden Research Center felt like an isolated outpost of IBM, that’s nothing compared to working in a small US subsidiary of a large Korean company. It was interesting to see the different approaches to and attitudes about work.

I wish my former colleagues all the best, and I look forward to seeing TTT’s hand in future Samsung offerings. As for me, I joined Google this past week, where I’ll start finding new things to be thankful for.

Building an RSS reader

I admit it: I still use RSS readers. On iOS I’m a big fan of Reeder, but on Android I’m still using Press. Sadly Press has been abandoned for years now. It’s still functional, but it’s increasingly dated. I occasionally look at alternatives, but I haven’t found one I’ve really liked. Yes, I know Feedly is popular, but personally its design doesn’t appeal to me. So I’ve started to write my own RSS reader in my free time, using FeedWrangler as the backend service (an obvious choice, since it’s the service I use for Reeder).

We’ll see how it goes; my free time isn’t quite what it used to be. But it’s been fun so far, and I’m using it as a learning experience to try out technologies I haven’t had time to experiment with at work. Currently I’m using it to try out Android’s new Architecture Components; they should make managing the feed data and coordinating data and UI updates a lot easier.
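As a rough illustration of what I have in mind (the names and structure here are hypothetical rather than the actual app, and I’m using the current androidx package names for the Lifecycle components), a ViewModel can own the feed data as LiveData so the UI just observes it and redraws whenever a sync lands:

```kotlin
import androidx.lifecycle.LiveData
import androidx.lifecycle.MutableLiveData
import androidx.lifecycle.ViewModel

// Hypothetical model for a single item pulled from the backend.
data class FeedItem(val id: Long, val title: String, val unread: Boolean)

class FeedViewModel : ViewModel() {
    // The mutable backing field stays private to the ViewModel.
    private val items = MutableLiveData<List<FeedItem>>(emptyList())

    // The Activity or Fragment observes this read-only view.
    val feedItems: LiveData<List<FeedItem>> = items

    // Called after a sync with the backend service finishes.
    fun onFeedsSynced(latest: List<FeedItem>) {
        items.postValue(latest)
    }
}
```

The Activity observes `feedItems` and updates its list adapter when the value changes, and because the ViewModel survives configuration changes the feed doesn’t have to be re-fetched on every rotation.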

How not to motivate your voice assistant

Speaking of this year’s Samsung Developer Conference, Samsung’s head of software and services InJong Rhee once again tried to motivate Bixby by touting it as a replacement for touch interfaces. This is not a new line; when Samsung launched Bixby it did so by claiming that voice was a significant improvement over hard-to-use touch interfaces.

I have two issues with this claim:

  1. Claiming that touch interfaces must be hard to use because people only use 15% of the functionality of their phone daily doesn’t pass the giggle test. People only use 15% of the functionality of their phone daily because that’s all they need. Give people Facebook, messages, email, and a browser and they’re good most days. That doesn’t mean all the other apps and capabilities are hard to use; it means that people only need them in more specialized circumstances. I only rarely use tethering, but that doesn’t mean it’s useless or difficult to use; it just means I don’t need it that often (and when I do need it, I’m really glad I have it available). There’s lots of research showing the advantage of direct manipulation interfaces. Ignoring it makes you seem like an idiot.
  2. Proposing to replace touch interfaces, which tend to be pretty good at revealing their functionality, with a voice interface that gives you no clue what it can do is even worse. If you really believe that people don’t use the full functionality of their phones because they can’t figure out how to do so, why is giving them a voice interface that doesn’t reveal its capabilities an improvement? News flash: it’s not. Voice interfaces are worse at communicating their capabilities, not better.

And it gets worse. Voice interfaces can be more efficient than touch interfaces, but they need to be designed differently. You don’t design a voice interface to be equivalent to a touch interface (with the notable exception of designing for accessibility); if the two interfaces support equivalent interactions, a well-designed touch interface will be faster. You design a voice interface to provide high-level shortcuts. Think about it: would you rather tell your phone “send a message to my wife that I’m on my way”, or “open messages, start a new message to my wife, enter I’m on my way, send the message”? The former is a high-level shortcut; the latter is the touch equivalent. But Samsung seems to think that voice assistants need to be touch-equivalent (or “complete”, as the company terms it).
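To make the “high-level shortcut” idea concrete, here’s a minimal sketch: the command parsing is entirely hypothetical, and only the intent at the end is the standard Android messaging pattern. One utterance collapses into one action instead of replaying the touch flow step by step:

```kotlin
import android.content.Context
import android.content.Intent
import android.net.Uri

// Hypothetical output of the assistant's language-understanding step for
// "send a message to my wife that I'm on my way".
data class SendMessageCommand(val recipientNumber: String, val body: String)

fun executeSendMessage(context: Context, command: SendMessageCommand) {
    // One high-level command becomes one action: the messaging app opens
    // with the recipient and text already filled in, instead of the user
    // walking through "open messages, start a new message, type, send".
    val intent = Intent(Intent.ACTION_SENDTO, Uri.parse("smsto:${command.recipientNumber}"))
        .putExtra("sms_body", command.body)
        .addFlags(Intent.FLAG_ACTIVITY_NEW_TASK) // needed when launched from a non-Activity context
    context.startActivity(intent)
}
```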

I’m hoping Samsung makes improvements with Bixby 2.0, but first they need to establish a better motivating premise.

How dare our echo chambers give us what we want?

There’s been a lot of handwringing recently about how social networks might have influenced the outcome of the 2016 election. And don’t get me wrong: the companies involved should have to be clearer about who’s sponsoring posts and advertisements (particularly when the sponsors are foreign powers). But much of the outrage strikes me as scapegoating. Facebook is not an objective news source, people; it’s a platform for people to share things. Don’t blame Facebook for not vetting whether a particular post is accurate; blame yourselves. If you want real news, subscribe to a quality publication like the New York Times or the Washington Post. You know, a news organization. If you can’t be bothered to get your news from a reputable news source, then don’t be surprised if you don’t get reputable news.

The Apple Pencil

I write a lot. I take notes in meetings. I write down plans for my team for the next few days and the next few weeks. And I sketch out a lot of research ideas. Yes, I could just type all of those things on a keyboard, but I personally find that I’m both more creative and more thoughtful when writing.

I’ve always been intrigued by the potential of digital notetaking. I tried out the initial versions of the tablet PC when Microsoft first introduced it, but it was always just a little too clunky to replace handwritten notes. I’ve been using a Note 5 for a couple of years, and while it’s really useful for jotting quick notes and reminders (the ability to pull out the stylus and immediately start writing is killer), it’s too small to substitute for writing in a notebook (my current notebook of choice is the Baron Fig Confidant in Flagship size). When Apple released the Apple Pencil I was intrigued, but I doubted I’d get enough value from it to warrant the $100 price tag.

But then I got a pair of emails from United, one notifying me that my miles were going to expire soon and a second notifying me that I could use my miles to purchase Apple products. Since the miles would vanish anyway, I used the opportunity to order a Pencil.

My first experiment with using the Pencil was on Caltrain heading up to San Francisco for the Samsung Developer Conference, jotting some ideas for a project I was planning. The experience was good, but not great; my iPad seemed to occasionally have difficulty with palm rejection, so that it would start scrolling my note in the middle of writing. But the input was responsive and did a good job capturing my handwriting. And the Pencil definitely felt good in my hand.

Taking notes at the conference revealed that the issue I’d experienced was not a general one, but was instead related to taking notes while in a moving vehicle. I’d been sitting on the lower level of the train, and while it was stable enough for writing, the motion was apparently just enough to screw with Apple’s algorithm. When writing while stationary, the palm rejection has been rock solid. Trying to write on the upper level of the train (on the return trip) was a total loss; the algorithm couldn’t figure out if I was trying to write or scroll. Of course, I should note that I have difficulty writing in a regular notebook on the upper level…

Latency with the Pencil has generally been very good, although occasionally tracking will briefly hiccup. And once in a while my iPad will just refuse to recognize the Pencil, but that’s fixable by briefly plugging the Pencil into the iPad (and yes, plugging the Pencil in to charge it is awkward, but it’s not a showstopper). I’m inclined to blame those issues on Bluetooth, but it’s worth noting that Wacom-based styluses don’t suffer from those problems.

In general I enjoy writing with the Pencil; it’s probably the best digital note-taking experience I’ve had to date. And once I have the Pencil in my hand, I’ll use it to tap around on my iPad in lieu of touching; it feels good enough in your hand that you don’t really want to put it down. But if I had to purchase it with cash I wouldn’t bother; I still prefer jotting notes in a notebook, and I’m not enough of an artist to spend much time drawing. So my search for a great digital note-taking experience continues.