
Memories of Sept. 11, 2001

I was in Seattle with Kate on Sept. 11, 2001. I was finishing writing my thesis, and so was splitting my time between Seattle and Pittsburgh. I have three particularly strong memories of that day. The first was waking up to NPR and fuzzily hearing something about a plane hitting a tower. It took several seconds to make sense of what NPR was talking about, and then Kate and I dashed for the living room to turn on the TV and learn more about what was going on.

The second memory was walking up Queen Anne hill that afternoon and looking out over Elliott Bay to see a warship parked midway between downtown and Bainbridge Island. There was so much uncertainty that day that someone doubtless decided that a visible military presence might reassure people, and that it was better to be safe than sorry.

The last memory was of how quiet it was in the city on that walk. There weren’t any planes in the sky, and almost everyone was staying home rather than going to or from work or running errands.

I have one more strong memory of that time, from when I flew back to Pittsburgh for the first time afterward. It was a redeye flight, and the airport was deserted. There were maybe 10-15 people on my flight. Everyone was very quiet; they sat by themselves, and they looked at the other passengers. If I had to choose one word for the mood, it would not be “nervous” or “concerned”; it would be “aware”.

Looking back 10 years later, I can’t help but regret lost opportunities. The lost opportunity to come together as a nation, when we now seem to find ourselves even further apart. The lost opportunity to focus on freedom rather than fear (as Bruce Schneier put it, “Refuse to be terrorized”). And the lost opportunity to spend some $3.3 trillion in ways that would better benefit the nation (our infrastructure, our schools, science, job training, etc.).

Stockholm

I was in Stockholm last week for Mobile HCI 2011. I have to confess that initially I was less than enthusiastic about visiting Sweden. Montreux in Switzerland for UIST 2006 was awesome. Paris (UIST 2002) is always fun. Salzburg in Austria for Mobile HCI 2005 was a blast: I’m pretty sure I spent more time walking around the city than attending the conference. But what was interesting about Sweden? Plus I was partially put off by my last conference trip to Europe: at Mobile World Congress in Barcelona they spent so much time warning you about thieves and pickpockets that it was impossible to relax and enjoy yourself. Hey MWC, if thievery is such a problem in Barcelona, stop holding the conference there! This is not rocket science, people.

But I’m pleased to admit that I was totally mistaken about Stockholm. It’s now one of my favorite European cities, and I’m looking forward to a chance to visit again.

So what was great about Stockholm? First, public transit is extremely convenient. The Arlanda Express whisks you from the airport terminal straight into downtown in 20 minutes, and the central station is conveniently located (in my case a short walk from my hotel). Plus the train itself is wood-paneled and feels a bit like it was decorated by IKEA. Awesome!

Second, I love cities that make effective use of water. And since Stockholm is situated between a lake and the Baltic Sea across a series of islands, there’s plenty of water (and bridges, and boats) to enjoy.

View of Gamla Stan

Near the Parliament building

Third, the city is both extremely walkable and well-designed for bicycling. There are bike lanes everywhere, and the bike lanes even have their own little traffic lights! Of course, I do have to confess I have to wonder how walkable and bike-able the city is come November…

Fourth, the city has done a great job preserving the historic old city, Gamla Stan. Since the US doesn’t have a lot of really old buildings, one of the things I always enjoy about visiting Europe is seeing some of the historic sections of cities. I spent parts of several days walking down the narrow cobblestone streets of Gamla Stan (the path I walked to the conference center took me through Gamla Stan on the way there and back, and I varied my route each day), and I could have happily spent several more.

Gamla Stan town square

A winding street in Gamla Stan

Fifth, almost everyone you encountered spoke English. Which is handy for those of us who are barely fluent in other languages (those years of high school Spanish notwithstanding). I have to confess that I’m not particularly fond of floundering in a foreign country where I don’t speak the language at all and the people I encounter don’t speak English. Kyoto, Japan, for example, was fantastic to visit: I really enjoyed the glimpse into a culture much different from the West. I would have been totally lost, however, without the colleague who spoke fluent Japanese.

Sixth, good coffee everywhere you turn. The Europeans just appreciate a good cup of coffee more than Americans seem to.

In short, this trip more than made up for the visit to Barcelona in February. Stockholm was fantastic, and I look forward to seeing more of it on a future trip.

Incentives for Innovation

I attended Mobile HCI in Stockholm, Sweden last week. My research group had two accepted papers, both from interns (Kim Weaver and Patti Bao) who had worked with us last summer. While sitting through yet another study of users interacting while walking, it struck me that while the proceedings were full of Science (lots of scientific studies examining, often in minute detail, how users interact with mobile devices), they were very short on Innovation: there weren’t really any new applications or services that researchers had created. (And I’ll note that my team was guilty of contributing to that tilt: our papers were studies, not systems, because it’s easier for interns to concentrate on studies with only 3 months available.)

While it’s entirely possible that I’m just getting cranky with my advancing years (hey you kids, get off of my lawn!), I encounter more and more research papers that are studies of what other people have built rather than descriptions of things the researchers themselves built. And after thinking about it a bit in Stockholm, I think a large part of the reason comes down to the incentive structures we have in place.

What’s the incentive for doing really innovative work in research? You can publish a paper. What about if you do mediocre work that’s an incremental addition to the field’s knowledge? You can publish a paper. Funny how whether you’re doing awesome work or mediocre work the outcomes are strikingly similar. And what’s the worst that can happen? Your paper gets rejected.

Let’s contrast that with the start-up scene in Silicon Valley. What’s the incentive for doing really innovative work at a start-up? Your company becomes the next big thing and you make a bazillion dollars. What if you do mediocre, incremental work? Your company vanishes into the dustbin of history. What’s the worst that can happen? Your company goes down in flames and you need to look for a new job.

Call me crazy, but given that the incentives to be innovative are so much stronger in industry, it’s hard to be surprised that industry is now really driving the innovation in CS. In fact, I’d suggest that the disparity in incentives and outcomes raises the question of how long academic research, as it currently stands, will be regarded as valuable and interesting. The writing is arguably already on the wall in industry: existing industrial computer science labs are slowly disappearing, and new computer science labs aren’t really taking their place. So what is the future of academic research? Will we reinvent and reinvigorate the field? Or will the innovative center of the field continue its drift from academia to industry?

Computer scientists and time

Computer scientists have a very interesting relationship to time. Obviously time is a critical component of many applications of computing, whether it’s the running time of algorithms or computing behavioral models based on observed actions over time. And yet computer scientists themselves don’t seem to be very good at time.

Case in point: the computer science “5 minutes”. Computer scientists seem to have an unshakable faith that almost any discrete computing task can be done in 5 minutes. When I was a grad student it was a running joke that you could tell which significant others of lab mates were new because they actually believed the person they were dating when they called up and asked “Are you almost ready to go?” and got the response “Sure, I’ll be ready in 5 minutes”. The significant other would then show up at the lab and spend the next 30 minutes twiddling their thumbs waiting for the person to actually be done. More experienced significant others would call and then just wait 30 minutes before actually showing up.

The computer science “5 minutes” occurs for almost any task that can be associated with an “Are you ready?” or “Are you done?” question. “Are you ready?” “Sure, let me just finish adding this feature. It’ll just take 5 minutes.” or “Sure, let me fix this one last bug. No more than 5 minutes.” I think it’d be an amusing CHI paper to study whether computer scientists just have overwhelming faith in their programming abilities (It’s not hard! Surely I can finish it in 5 minutes.) or whether time just disappears while they’re coding, such that they lack an appreciation of how long programming tasks really take. I suspect the latter.

Another common example ties to our ability to manage time. If you ask a computer scientist to perform a task with a short-term deadline (say, this week or next week), the answer will almost always be no: we inevitably have too many things on our plate and just can’t handle one more. So how do you get a computer scientist to take on a large, time-consuming task? Easy: ask them 3 months in advance. We appear to have a limitless faith that somehow in the future we’ll have much more free time. Why we retain this belief in the face of overwhelming evidence is unclear. Perhaps we somehow expect that the projects we wind down will not be replaced by new projects, despite the fact that this has never once happened in our lives. Or perhaps we expect that in the future days will somehow contain more hours or hours will get slightly longer. Or maybe our faster computers and better user interfaces will make us that much more efficient.

Regardless of the cause, if you ever need a favor from a computer scientist that involves an investment in time, just ask for it 3+ months in advance. And never believe a computer scientist when they say they’ll finish what they’re working on in 5 minutes.

I went to Hawaii, and it was hard to come back…

We went to Oahu’s North Shore for a vacation last week. Great time, gorgeous location, hard to return.


Long vacations in beautiful locations always make me think of Thoreau:

I went to the woods because I wished to live deliberately, to front only the essential facts of life, and see if I could not learn what it had to teach, and not, when I came to die, discover that I had not lived.

In particular, would it be possible, if I so chose, to retire with what I have now in a beautiful location (such as Hawaii) and live a simple but intellectually rich and fulfilling life? How much would I actually need?

I haven’t done the calculation too seriously (for one thing, it’s tougher to suddenly live the simpler life when you have a child), but I think it actually would be doable. I figure shelter, food, electricity, a good library, and a moderately fast Internet connection (naturally) are all I’d really need. Food and shelter to keep me alive. A library for intellectual engagement and entertainment. And electricity and an Internet connection to allow programming just for fun.

While some folks argue that our quality of life is declining relative to our parents’, I have to confess that I wonder whether that’s a reaction to more things and more choices being available to people (and thus the need to forgo more things and choices), rather than a reflection of an increase in the cost of our core goods. How many of the things in our lives do we really need vs. just desire? My impression, without actually calculating, is that our core needs are actually more affordable (adjusted for inflation) than they were for our parents (housing prices in Silicon Valley notwithstanding). Good fruits and vegetables are pretty inexpensive. And library cards are still free.

But I’m back. And I haven’t pulled out my calculator to really do the math, and I won’t be building my cabin in the woods quite yet. But I may go hunt down my copy of Walden and give it a re-read.

Yes, research value isn’t clear cut, but…

Since I posted a few thoughts about the returns on research that arose from Sam’s talk at Almaden and then immediately vanished off on vacation, I didn’t have much time to respond to the comments that a number of folks made. So I thought post-vacation I’d quickly follow up with a few additional notes.

First, while Sam’s comments were phrased as looking for the monetary return from research, his comments were in the context of a for-profit company and I largely continued that context in my discussion. But a number of folks correctly noted that not all return is necessarily monetary (and arguably funding research that benefits society in a way that doesn’t directly tie to monetary return is one of the reasons government does (and should) fund research (and am I the only person who finds nested parentheses in prose amusing?)). As a graduate student I helped out with the creation of Alice, and I’d argue the benefit it provides to society by making programming more approachable has provided more than sufficient return for the government funding it received.

Of course, that leads to a second common comment: it’s hard to recognize the value of research when it’s being undertaken. And while that can be true, I think it’s also used as a dodge. Sometimes I think the writing is really on the wall. I worked on interactive 3D graphics, and more specifically, at the height of the virtual reality boom. Virtual reality and VRML were going to conquer the world, man. The web was totally going to be 3D. I (and others) spent lots of time creating cool new 3D interaction techniques and publishing them. There was just one little problem: no one really had any big, compelling uses for VR (yes, there are some smaller cases where it was and is useful, but they’re few and far between). I think we all sort of suspected, particularly late in the 90s, that the emperor had no clothes, but no one was really willing to stand up and say it. We just all sort of drifted off into other areas.

So I think I’m justified in arguing that sometimes we need to be more careful to identify the value we’re providing from our research: been there, failed to do that. And I don’t think virtual reality was the last time the problem arose. To pick a single example, how many tabletop research papers has the community published where the applications for the presented hardware or interaction techniques are re-arranging pictures? Answer: way too many.

In short, yes it’s hard to identify the value of our research. There are different types of value, and it’s often hard to see up close and over the short term. But that doesn’t mean we shouldn’t be trying to do a better job of it. And sometimes we do get it right. Of all the work I did during my graduate work, I was (and still am) most proud of the work I did with Ken Hinckley on sensing techniques for mobile devices. Microsoft might not have leveraged it much, but in our own small way I think we helped lead the way to the current generation of smartphones.

Farewell to the TouchPad and Pre (and webOS)

So while I was off on vacation last week HP decided to pull the plug on webOS hardware and is looking to spin off the PC division. And frankly I suspect in practice this is farewell to webOS as well; it’s hard to see how it gets any traction without a focused hardware provider behind it (heck, it had enough trouble getting traction with HP behind it). And while I suspect Samsung and HTC are rather pissed off at Google acquiring Motorola Mobility, I don’t see them running to webOS to hedge their bets (although I could see Windows Phone getting an uptick).

I have to confess that I’m sad to see the TouchPad and Pre (and webOS) go. I actually like the design of the webOS UX; I think Palm’s designers did some excellent work, and I frankly think the UX for webOS is better than that for Android in many ways. I’m not, of course, sad enough to rush out and buy a $99 TouchPad, although I will confess that I came close. But $99 is a bit much for something I’d run once in a while for the nostalgic rush; if I want that I can just run the emulator for a few minutes. I therefore don’t really get the insane demand for the discounted TouchPads; while the initial demand might have been from folks who also regarded webOS with fondness, there’s been such demand that now it feels a bit like tulip mania. I can’t help but wonder if in a few weeks we’ll see a glut of TouchPads on eBay priced well below $99. Time will tell.

Of course, while I’m sad about the fate of webOS, I’m amused that HP is spinning off the PC division and looking to move more strongly into services. That strategy seems vaguely familiar. Oh yes, that’s right: it’s what IBM decided to do way back at the end of 2004. Kudos to Sam and Co. for seeing that trend well in advance and acting on it.

When am I going to get my money back?

Sam Palmisano recently gave a talk at IBM Research’s Almaden lab. One of the questions that Sam got asked was whether he was supportive of basic research. I found Sam’s answer very interesting. He replied, in essence, that he believed in basic research, but that as CEO his concern was what he was getting for his investment. “Ok, I’ll give you $20 million for this long term research project. But I want to know, when am I going to get my money back?”

I found Sam’s answer interesting because there seems to be an unfortunate trend among some researchers to assume that they are somehow owed (by society, their company, whoever) funding for their activities just because they somehow fall under the banner of “scientific inquiry”. Science is absolutely a worthwhile activity. However, just because science is worthwhile does not mean that society should be writing a blank check for any scientific activity.

I think academia is partly to blame for this attitude. Academic research is essentially its own little industry that turns funding from various organizations into papers. At some point along the line, too much of measuring impact and contributions in academic research turned into paper counting rather than considering actual impact on society. While from a scientific perspective publishing papers is laudable (“We’re advancing the knowledge of mankind!”), as a return on investment I’m unconvinced that it merits the resources expended. To its credit I think that the NSF has the same impression, since they seem to be exploring mechanisms for getting a better return on their investments.

I think industrial research does a better job at providing value, because companies are for profit entities that at the end of the day exist to provide value to their shareholders. If a company’s research investments don’t generate sufficient value, somewhere along the line the company’s officers and shareholders are going to start questioning the return on those investments.

I would argue that the challenge for industrial researchers is to balance their scientific inquiries (advancing the state of the art) against the needs of the company they work for. This challenge is not helped by the attitude among some people, which I frankly do not understand, who argue that research efforts targeted at a company’s needs are merely “advanced development”. However, the last time I checked, a good definition of research was:

RE·SEARCH: NOUN: 1. a detailed study of a subject, especially in order to discover (new) information or reach a (new) understanding.

That definition doesn’t say anything about publishing papers in academic conferences; that’s one outcome of some research activities, but it’s not the definition of research. Frankly, the definition doesn’t say anything about needing a PhD or working in a research lab either, and I would argue that there are companies that do better research than many universities without a bunch of PhDs or labs. So I don’t think there’s nearly as strong a division between “research” and “development” as some seem to think, and I don’t think researchers are doing themselves any favors by trying to draw strict divisions between what is and is not research.

At a time when research budgets look to continue declining due to economic circumstances, I think researchers should take the opportunity to re-examine the problems that they tackle and the value that they provide to society. We certainly have no shortage of problems that require solutions. But we may have enough academic publications that exist primarily to add to publication counts and pad out CVs. Perhaps we should all be considering the investments we have received in our training and for our past and current research projects and ask ourselves whether our funders have received their money back. And if they haven’t yet, when they will.

Google Music, Spotify, and Country Music

I got a Google Music invite way back at Google I/O 2011. I set it up on my Mac, let it upload a few songs, then more or less pulled the plug on it. The basic issue I have with it is that it solves a problem I don’t have: listening to my music on any of my computers. These days I really only listen to music in three places:

  1. While at home (in which case I usually listen on the Mac mini hooked to our stereo)
  2. While traveling (in which case I usually listen on my iPhone or iPod shuffle)
  3. While at work

The latter was initially a potential use case for Google Music, but it wasn’t worth the hassle of uploading my whole library. I could have copied all my music to my work computer, but didn’t want that many GBs of personal content on it. In the end I ended up appropriating an old 30 GB iPod and using it as my jukebox at work (hooked up to speakers).

As a result, I played with Google Music for a few days when I first got access to the beta, and then more or less stopped using it. And even when I was playing with it, most of the listening I did was to the free seed music that Google automagically added to my account for me.

A few weeks ago I got a Spotify account out of curiosity and also started playing with it. It was an immediate hit with my daughter: she suddenly had access to all the music her friends listened to that my wife and I didn’t have in our collections (we seem to be strangely short of teenybopper music; go figure). However, it took a bit longer for me to start using it for much beyond digging up songs from my youth I hadn’t heard in forever, or listening to songs I’d heard and liked but that didn’t cross the threshold of actually buying.

What finally started me using Spotify more seriously was the decision to explore country music. Growing up in upstate New York, I turned my nose up at country music. Rock, alternative, and pop were all ok, but country was for hicks. But as I got older, I got more into folk and started appreciating musicians who are actually good at playing music. Tired of pop and rock artists who rely on electronics and would be lost attempting to play an acoustic set, I decided it was time to actually give country a chance. And Spotify is extremely handy for exploring a new genre: all of the music you might want to hear from it, right at your fingertips.

In fact, Spotify is so good for exploring new music (listen to whatever you want without commitment), I’m surprised it doesn’t integrate better tools for exploring music. Sure, you can look at your friends’ playlists or search out playlists and suggestions from 3rd party websites, but that process is somewhat cumbersome. And Spotify does offer an Artist Radio feature, where similar to Pandora you can pick a seed artist and then hear music by them and similar artists. But you still need that initial seed.

In the end, I found Rolling Stone magazine a more useful place to start. We got a free subscription after attending the San Francisco Symphony July 4th performance at Shoreline Amphitheater, and I’ve taken to leafing through it quickly looking for recommendations to plug into Spotify. A few initial leads there plus Artist Radio has led to some fun new discoveries. So Spotify’s huge library plus Artist Radio is great, but there’s room for improvement in helping users looking to discover new artists and music get started.

Looking forward, I suspect that Spotify and iCloud will be what I settle on for listening to music. I’ll leverage Spotify to discover new music and listen to songs I like but don’t want to buy. And then iTunes + iCloud will make it easy to access music I decide to buy from my various computers. At least until their feature set improves, I really don’t see myself using services like Google Music or Amazon’s Cloud Drive and Player.

Limited Kindle book organization mechanisms

While in general I’m a big fan of Amazon’s Kindle platform (both the hardware ereader and their various smartphone and tablet applications), I have to confess that I find it puzzling that Amazon provides so few ways to help users organize their books. You can sort books by author, title, or read / download date. And on the latest Kindle hardware you can create and place books in collections.

But beyond that, there’s not much you can do. No automatic organization by genre. No ability to tag and sort by tags. And no support for the feature I most want: separating books into read and unread. I occasionally take advantage of free Kindle books that Amazon offers, and as a result I’ve accumulated a fair collection of books that I haven’t yet had time to read. However, there’s no easy way to quickly identify those books from within my larger Kindle library. That seems like such an obvious oversight I’m surprised Amazon hasn’t addressed it yet. If we have unwatched/partially watched/fully watched for movies and podcasts in iTunes, why can’t Amazon provide similar functionality for books?

I could understand that Amazon might not want to introduce the additional complexity into the UI for their own dedicated hardware ereader (although they’ve already introduced collections, so providing default read/unread collections isn’t a big step; and tags are just a special kind of collection), but I’d think that their phone and tablet applications could handle the slight addition of complexity with ease. But if anything, their mobile apps are less capable than their hardware (I’m pretty sure you can’t even create collections in their mobile apps).
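To make the "tags are just a special kind of collection" point concrete, here's a minimal sketch of what such a library model might look like. This is purely illustrative (the class and method names are my own invention, not Amazon's actual data model): if a book can belong to any number of named collections, then tags are simply collections the user types in freely, and default read/unread "collections" fall out of a single flag.

```python
# Hypothetical sketch: if tags are just collections a book can belong to,
# then built-in read/unread filtering costs almost nothing to add.
class Book:
    def __init__(self, title):
        self.title = title
        self.tags = set()   # a tag is just membership in a named collection
        self.read = False

    def mark_read(self):
        self.read = True

class Library:
    def __init__(self):
        self.books = []

    def add(self, book):
        self.books.append(book)

    def tagged(self, tag):
        # user-defined collections and tags are the same query
        return [b for b in self.books if tag in b.tags]

    def unread(self):
        # the "unread collection" is derived automatically
        return [b for b in self.books if not b.read]

lib = Library()
walden = Book("Walden")
walden.tags.add("philosophy")
lib.add(walden)
print([b.title for b in lib.unread()])
walden.mark_read()
print([b.title for b in lib.unread()])
```

The first print lists Walden as unread; after `mark_read()` the unread list is empty, with no extra bookkeeping by the user, which is exactly the iTunes-style watched/unwatched behavior described above.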

I haven’t done as much experimentation with Apple’s iBooks or with Google’s new Books offering. Anyone know if they offer better mechanisms for organizing your ebooks?