The increasing consumerization of IT is a popular meme these days. Consumers have access to more cutting-edge software and hardware than their employers typically provide, and thus they are more likely to use their personal hardware and software for business purposes (a reversal of the previous pattern, where people were more likely to use their business hardware and software for personal activities).
By and large, companies are receptive to this trend. Many of them, such as Citrix, are actively exploring having employees bring their own computers (BYOC) as a source of cost savings. Giving employees a computing stipend and having them choose and maintain their own computers is cheaper, these companies believe, than buying and maintaining computers for them.
This trend is particularly strong for smart phones (and tablets), because companies typically provide smart phones for only a small subset of their employees. The increasing popularity of smart phones in the consumer space means that more and more employees have highly capable computing devices in their pockets that they might use for business, with minimal cash outlay from their employers. More employee productivity at minimal cost.
However, there’s a small problem. Companies like their systems and infrastructure to be secure, so they’d rather not have unprotected (and occasionally lost) smart phones wandering around storing company data and serving as access points through the firewall. So what do companies do if they want to protect themselves while also leveraging their employees’ mobile devices?
Currently companies don’t have a lot of choices. They can put security around individual applications or web services, but that doesn’t help much if the device itself is compromised. They can lock out the devices completely, but then they’re missing out on potentially low-cost productivity gains. Or they can leverage whatever security mechanisms the devices provide, which in practice means requiring a device password.
While the latter might seem like a good idea in theory (shouldn’t users want to protect their own personal information on their devices?), it’s problematic in practice. Companies tend to want longer and stronger passwords (8+ characters, alphanumeric) than users do (a 4-digit PIN). It takes most users 5-10 seconds to type in an 8-character password, and if you’re only using your phone for 50-60 seconds that’s a significant percentage of your interaction time (unlike with laptops, where the time required to authenticate is amortized across more sustained use). As a result, we’ve seen that the vast majority of users (~85%) either avoid corporate device passwords entirely, or try them out for a short period and then give up on them.
The problem isn’t just the time required to authenticate, it’s that most smart phone uses are non-business uses. A person only has to say “hold on, I have to log into my phone before I can take your picture” so many times before they give up on the password (and thus business information access).
The way toward a solution seems straightforward: allow sandboxing of groups of applications and their information at the operating-system level. Then let businesses leverage that capability to create a business sandbox with strong security on users’ personal phones. People are generally more than willing to type a password to access business functionality; it’s the death by a thousand cuts of authenticating just to check the weather or see where the train is that kills them.
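To make that concrete, here’s a minimal sketch of what such an OS-level business sandbox policy might look like. No mobile platform exposes an API like this today, so every name below is hypothetical:

```typescript
// Hypothetical OS-level sandbox policy; no mobile platform offered an API
// like this at the time, so these names are purely illustrative.
interface SandboxPolicy {
  name: string;                 // e.g., "work"
  apps: string[];               // applications whose data lives inside the sandbox
  passwordPolicy: {
    minLength: number;
    requireAlphanumeric: boolean;
  };
  remoteWipeAllowed: boolean;   // IT can wipe the sandbox, not the whole phone
}

// The corporate sandbox demands a strong password, but only when the user
// crosses into business apps; checking the weather stays password-free.
const workSandbox: SandboxPolicy = {
  name: "work",
  apps: ["corp-mail", "corp-calendar", "vpn-client"],
  passwordPolicy: { minLength: 8, requireAlphanumeric: true },
  remoteWipeAllowed: true,
};
```

The point of the sketch is the separation of concerns: IT gets strong authentication and remote wipe over its own slice of the device, while the user’s personal apps and data stay out of scope.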
Unfortunately, none of the mobile OS providers really seem to be pushing hard in this space. That’s understandable for Apple: they’ve always been a consumer-focused company. But I have to confess that I had higher hopes for Google; after all, they at least have an enterprise division that’s in theory focused on offering services to companies. Yet to date Apple has actually gone further than Google in providing at least some enterprise support in their mobile OS.
Until Google raises its mobile game for enterprises (or until Apple gets there, which isn’t entirely impossible – they are further ahead at this point in time), there are at least some signs that other 3rd parties may be starting to fill the gap. Enterproid’s Divide platform is the most intriguing I’ve heard about: it allows you to create separate work and personal profiles for your phone and then switch between them based on your needs and context. I haven’t had a chance to try it, though, so I have no idea how well it really works in practice (or how well it actually secures the business profile). But it seems like at least a step in the right direction, which is more than I’ve seen so far from the OS providers.
Until we see more progress in the space, I suspect the use of personal phones for business will remain more of the exception than the rule.
One of the drawbacks to working in “mobile computing” is that “mobile” is an ambiguous term. Most times when someone refers to mobile computing these days, they are actually referring to smart phones. However, tablets are also mobile and exhibit different usage characteristics than smart phones, so talking generically about mobile user experiences and mobile interaction when you really mean smart phone UX and interaction is problematic. And of course laptops are mobile too, with usage characteristics that differ from both smart phones and tablets.
And it gets even trickier. Not all tablets are alike. Having played with both 7″ and 10″ tablets, I actually think each size exhibits its own usage characteristics. And for that matter, I think there are even subtle differences between 4:3 aspect ratio tablets and 16:9 tablets (the former are, I think, better for browsing websites and text, while the latter are better for watching 16:9 movies).
Rather than discussing the ambiguous “mobile computing”, then, I think we should really discuss tablet computing. Let’s be honest: smart phones these days are really just small tablets that happen to also make phone calls. They’re touch-sensitive computers first, phones second. If we just consider them to be tablets, we can move beyond the whole phone vs. tablet thing and instead consider the impact of size on tablets.
I think there are at least three classes of tablets, each with different usage characteristics:
- 3-4″ tablets, which are today’s smart phones (and iPod Touches). They’re small enough to carry around in pockets, so they’re the tablets people are most likely to have with them. However, their small form factor means that it’s tough to get “real work” done with them, so they tend to lend themselves to frequent, intermittent, and short interactions.
- 7″ tablets. The Nook and Samsung’s original Galaxy Tab fall into this class. They’re still pocketable, but the pocket is likely to be a suitcoat or jacket pocket (or back pocket, provided you’re careful before sitting down). The main advantage of this class is that they’re very easy to hold in a single hand, so they’re very comfortable to read from or to use when walking around. However, viewing larger amounts of information (such as the NY Times desktop web page) tends to feel somewhat like using a smartphone for the same task: you feel like you’re viewing the information through a small porthole, so you pan and zoom a lot. I have to confess I originally thought no one would want a 7″ tablet, but after playing with one for a while now I think the ease of holding it in a single hand is not to be overlooked.
- 9-10″ tablets. Yes, the iPad 2 is lighter. But these larger tablets are still more about use while seated where you can bring both hands to bear. Viewing larger amounts of information is more comfortable (I can easily browse the NY Times web page on a 10″ tablet), and this is my preferred form factor for viewing videos/movies and browsing the web. But for sustained reading (such as an ebook), I personally find this larger form factor a little too cumbersome.
Those categories may not be exclusive; we may find other form factors that are useful for slightly different cases (or even better suited for these existing cases). But so far, my basic rule of thumb is that 3-4″ is for ubiquitous access to information, 7″ is great for reading, and 10″ is great for browsing the web and watching movies. Mapping those patterns to business use cases is left as an exercise to the reader (and this researcher, of course).
Regardless of what categories of tablets we finally end up with, I’d like to encourage everyone to move away from “mobile computing” and toward “tablet computing”, at least when focusing on the use of particular types of devices (i.e., smart phones). Mobile computing should be about computing while mobile regardless of device type (and should include laptop use). Tablet computing should focus on interaction with touch-sensitive tablets, and should explicitly include an exploration of the impact of different tablet sizes and form factors.
In my previous post I argued that what Chrome OS really needs is for websites to move closer to web apps, stealing a page from mobile apps to provide more compelling user experiences. And while I still think that the development of technologies to create compelling mobile web apps might help push that transition to compelling desktop web apps, I should note that there’s a potential sticking point.
A fair number of desktop web services provide APIs so that 3rd party developers can create compelling desktop and mobile native applications that leverage those services. The web service providers benefit because developers essentially provide free labor, while users win because of the competition between developers to provide compelling applications for a given service. Witness the wide variety of 3rd party Twitter clients, some of which were subsequently acquired by Twitter.
However, there isn’t a similar culture for web apps: you just don’t see developers creating web apps or websites that act as wrappers for other websites. As a result, users are more or less stuck with the web interfaces provided by the developers of the web service. I’ll use Google Reader as an example, since I can’t stand Google Reader’s desktop interface. Sorry, but I avoid it like the plague. But while I don’t like the Google Reader web UI, I use both NetNewsWire and Reeder, which are essentially native interfaces (both desktop and mobile) to the Google Reader service. I benefit from the competition among native application providers. Sadly, on the web interface side there just isn’t that same competition, so in Chrome OS I’m stuck with the interface that Google itself provides. In practice that means I avoid reading my feeds on Chrome OS.
I don’t think there’s a technical barrier to developers competing to offer alternate web interfaces to existing web services; in theory they could use the same APIs that native developers leverage. But creating such competition will require creating the perception that there’s a sufficient market for alternate web interfaces (and to Google’s credit, the Chrome App Store does in theory provide a mechanism for making money off such interfaces). Actual demand behind that perception would obviously be useful too. In practice, I suspect it may also require a change in the terms of use provided by web service providers.
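As a rough illustration of what I mean (the endpoint and response shape below are hypothetical, not Google Reader’s actual API), an alternate web interface could consume a feed service through the same kind of authenticated calls a native client makes:

```typescript
// Hypothetical feed-service API; the endpoint and fields are illustrative only.
interface FeedItem {
  id: string;
  title: string;
  url: string;
  unread: boolean;
}

// The same kind of authenticated request a native desktop or mobile client
// would make against the service's API.
async function fetchUnreadItems(apiBase: string, token: string): Promise<FeedItem[]> {
  const response = await fetch(`${apiBase}/items?state=unread`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!response.ok) {
    throw new Error(`Feed service returned ${response.status}`);
  }
  return (await response.json()) as FeedItem[];
}

// An alternate web UI could then render these items however it likes,
// competing with the provider's own interface purely on presentation.
```

Nothing here is beyond what native clients already do; the missing ingredient is the incentive for web developers to build and market such alternatives.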
Bottom line, better technologies for creating more compelling web apps are only part of what’s needed to make Chrome OS more appealing. The other thing we need is more competition among developers to provide alternate web interfaces to backend web services. Or for the providers of those services to raise their own game, but I’m not sure I’ll hold my breath on that one.
After playing with my free Google I/O Chromebook for a week now, I’m more positive than I was initially. I’m still not at all impressed by Samsung’s hardware; it feels cheap and plasticky, and with a $500 price tag that’s not a good thing. The Lenovo S10 I picked up to play with over a year and a half ago was cheaper but feels more solid.
But I do think the OS has potential; having explored device collections for a while now, it’s an interesting point in the space to have a device that’s designed explicitly to rely on storing information in the network. But while I think Chrome-the-OS has potential, the biggest issue I have with it is that most websites just don’t offer user experiences that are as compelling as those offered by native applications. It’s not that they can’t, it’s just that most desktop websites haven’t been designed to provide that type of experience. They’re websites, not web applications.
I can use Google itself as an example: I find their smartphone and tablet Gmail web applications to be much more compelling than their desktop version. However, those same examples demonstrate that better web applications are possible for Chrome. To my mind the biggest question around Chrome is whether designers and developers are willing to spend the effort to create compelling web applications, particularly when they could instead allocate their resources to building native iOS or Android applications. The best chance for Chrome may actually be mobile web efforts such as dojox.mobile and jQuery Mobile; by raising the game for mobile web apps, they may also show the way to creating more compelling web applications for desktop browsers and web operating systems like Chrome.
One of the issues with designing mobile user experiences is that too often people focus just on the individual mobile device. While the mobile device is obviously the primary focus, in the developed world people typically interact with more than just a single device. They have laptops, desktops, netbooks, tablets, and other devices that complement their smart and feature phones. While “digital convergence” was a buzzword in the late 90s and early 00s, we’re actually headed in the other direction: more, and more heterogeneous, devices.
So why are those other devices relevant to the design of mobile user experiences? Simple. Many activities that users engage in actually span their devices. Email? People will check it on their phones, but they tend to handle actions around particular messages on their desktops and laptops. Feed readers? People skim summaries on their phones, but defer reading articles to desktops and laptops. Shopping? People check prices and availability on their phones, but defer making purchases from e-commerce sites until they reach a laptop or desktop. And I could go on.
In short, it’s important to think about how the mobile application or service you’re designing will fit in with (and potentially change how you think about) the desktop version of the application or service. There are implications for how you design your mobile interface (e.g., how will people mark items they want to follow up on / complete once they reach their desktop?), how you design the backend (or cloud service, if we must be trendy) to capture state about suspended or interrupted activities, and how you rethink the desktop interface to account for work people have already accomplished on their mobile devices.
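To sketch what that backend state might look like (the record and field names here are hypothetical, not any particular service’s API), a “deferred activity” could capture just enough context for the desktop interface to pick up where the phone left off:

```typescript
// Hypothetical record for an activity started on one device and deferred to
// another; the shape and names are illustrative only.
interface DeferredActivity {
  id: string;
  kind: "email-reply" | "purchase" | "read-article";
  sourceDevice: "phone" | "tablet";
  itemUrl: string;        // the message, product, or article in question
  deferredAt: string;     // ISO 8601 timestamp
  note?: string;          // optional user-entered reminder
}

// On the phone: flag an item for follow-up instead of, say, re-marking a
// message as unread. In practice this queue would sync via the cloud service.
function deferForDesktop(activity: DeferredActivity, queue: DeferredActivity[]): void {
  queue.push(activity);
}

// On the desktop: surface everything the user punted from their mobile devices.
function pendingFollowUps(queue: DeferredActivity[]): DeferredActivity[] {
  return [...queue].sort((a, b) => a.deferredAt.localeCompare(b.deferredAt));
}
```

The specifics matter less than the principle: the service, not the user’s memory, carries the state of interrupted activities across devices.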
A former colleague summed up this idea with the pithy “no smartphone is an island” bumpersticker, and we’ve seen it again and again in our work. Thinking just about the mobile device is insufficient; to be really effective a design needs to consider how the mobile interface fits into the larger ecosystem. That’s one of the reasons I’ve started to prefer talking about “mobile user experiences” rather than “mobile user interfaces”. A mobile user interface is tied to the mobile device. A mobile user experience, by contrast, fits the mobile device into a larger user experience where a user’s activities span devices.
I’ve been using my iPad for nearly a week now. Overall I’m impressed; the quality of the design (both the hardware and the OS) is very high. And some of the 3rd party applications are just stunning. Developers are really taking advantage of the larger screen (compared to the iPhone) in cool ways.
In thinking about what role the iPad really plays, I’ve found myself considering temporal zones rather than particular tasks. Yes, you can surf the web on a smart phone, iPad, laptop, and desktop. But when do you choose to use those different devices? When thinking about how I use my devices, I think the categories that make the most sense are Morning, Day, and Evening.
Morning devices are devices that you use after you wake up and before you arrive at work. Interactions with these devices tend to be squeezed into our morning routines: exercising, showering, dressing, eating, dropping the kids off at school, commuting. All of these tasks tend to be completed under time pressure, so there isn’t a lot of time to spare sitting around waiting for Windows to boot. Instead, interaction is crammed into those moments while you wait for the coffee to brew or your bread to toast. A smart phone fits this pattern perfectly: you can get at it quickly, start using it immediately, and easily interrupt what you’re doing and stuff it back in a pocket when your routine needs to move forward again.
Laptops and desktops are day devices: the always-on tool you spend most of your day sitting in front of or near. The large screen and full keyboard of these devices help accomplish your content creation tasks more quickly (you are creating content at work, right, not just passively consuming it?). The sustained interaction amortizes the pain of their slow boot times, and the fact that you can put them on your desk with a nearby power outlet means that weight and power consumption aren’t really issues.
Evening devices are the ones that you pull out and relax with once the kids are in bed. They’re for catching up on personal email, surfing the web, catching up on news, watching videos, reading books, and playing games as you kick back and relax from the busy day. And that’s what the iPad is perfect for. You can settle onto your couch or a comfy chair without the size and weight of a laptop (odds are you won’t be inputting much text), and it’s easy to position the iPad to comfortably consume content (I don’t know about you, but I find it hard to kick back and relax with a laptop – it makes me feel like I’m working).
Now obviously that’s somewhat of a simplification. Not everyone’s days match that model, and as we’ve learned from our smartphone studies device use can change drastically when people travel. And I didn’t say anything about use on weekends or when people are out and about during the day. But I think the notion of considering how our devices fit into our temporal zones may prove useful as we look at how to design user experiences for the increasingly heterogeneous collection of devices that users employ.
So I’ll confess that I pre-ordered an iPad. While I could write a whole post on techno-fetishism and the desire for the new-new, I thought I’d instead muse on something different: personal hotspots. A personal hotspot is a small device (typically about the size of a deck of cards or smaller) that acts, in essence, as a WiFi-to-cell-network router for physically proximate devices.
Why have I been thinking about hotspots? Well, when I went to pre-order my iPad I had to decide whether to go for the WiFi only version or the WiFi+3G version. And in the end I went with the WiFi only version because:
- I abhor the notion of allowing AT&T to nickel and dime me for every single computing device I might want to use to access the Internet through a cell connection. Given that I’m unlikely to use my iPhone, iPad, and laptop all at the same time, why should I pay for their cell data connections separately?
- I’m not sure I’m likely to ever really use a 3G connection on an iPad given that I already have an iPhone I can use for data access while mobile if absolutely necessary.
- WiFi connections are increasingly pervasive. I could always pay for monthly access to Boingo’s 100K+ hotspots if I decide I do want better Internet coverage while traveling.
- Personal hotspots like Verizon’s 3G MiFi and Sprint’s 3G/4G Overdrive are dropping in price and allow multiple devices to access the Internet via the cell network without needing 3G or 4G hardware; all they need is good ol’ fashioned WiFi.
The last is the item I find the most compelling. Want to access the cell network from a variety of devices (iPod Touch, iPad, netbook, laptop)? Skip the added expense of 3G hardware and cell data service for each and use a personal hotspot for all of them.
Want to use an iPhone but don’t want to get locked into AT&T? How about an iPod Touch + Verizon’s MiFi? Want to try out 4G data rates without buying new hardware for all of your devices? Get a Sprint Overdrive and use them all at 4G speeds.
Obviously personal hotspots aren’t perfect solutions; there’s still the small matter of incoming calls for mobile phones. But for devices that will be primarily used to pull information (iPads, netbooks, laptops), they seem like a better solution than integrated cell hardware and device-by-device service expenses. And for outgoing calls there’s always Skype or other VOIP services.
So my interest is purely speculative; I haven’t yet bought a hotspot and don’t plan to in the immediate future. But I think they’re an increasingly compelling alternative for mobile Internet access that I may indeed utilize in the not-too-distant future.
Once upon a time, computers were big and expensive. Companies owned them. People used them for work.
Over time, computers got a little less expensive, and so some people bought and owned them. But they weren’t really portable, so they lived in people’s homes and people used them for personal things.
Then computers got even less expensive and became portable and things started to get a little murky. People started bringing business computers home, and they would occasionally do personal things on them. And sometimes people even brought their personal computers into work to have more persistent access to their personal information space.
Then, of course, phones became computers and everything went to hell. Er, heck. Can you say hell in a corporate blog entry? Whatever. Anyway, phones are an interesting case because (a) they’re very personal devices (phone display as mating ritual, anyone?) and (b) it’s hard to carry around more than one at a time (people have, after all, a limited number of pockets to stash them in).
When smartphones were more expensive and less desirable (I’m looking at you, Blackberry and Windows Mobile), they were primarily provided by companies who exerted control over their configuration and allowed (or put up with) personal use of them. People put up with corporate control over the devices because they were corporate devices.
Things got a lot murkier with the introduction of the iPhone and later Android and webOS phones. Suddenly people were buying their own smartphones and wanting to use those personal phones for business. That desire presents businesses with a bit of a quandary.
On the one hand, hooray for people wanting to spend their own money and time on business tasks. On the other hand, these smartphones are still pretty dumb when it comes to separating personal and business use. None of the smartphone platforms support a notion of syncing personal and business information separately. You can sync your personal contacts with your business server or your business contacts with your personal server, but some data is going to end up where you (or your company) may not want it.
And then we get into stickier territory. Businesses want to protect their data, and if their data is on your phone then they want some measure of control over your phone. For one thing, they’d like password protection on the device, since phones aren’t yet smart enough to separate personal and business information. And furthermore, they’d really like the ability to remotely wipe your phone. Whenever they decide it’s necessary. That won’t be a problem, will it?
I personally think that explicitly recognizing that people use their phones for both personal and business purposes and providing support for that behavior at the platform level would really provide a phone OS provider an opportunity to differentiate their offering. Unfortunately, none of the providers seem to be stepping up to bat on this one. Apple is most concerned about the consumer market; business concerns are a secondary priority. RIM is the reverse: business first, personal use secondary. And Google? Well, they really want all of your data, period, so they’d rather you avoid thinking who can see and do what altogether. Hey, have you seen our animated phone screen backgrounds?
There is some work around the edges of the problem. HTC introduced the notion of Scenes for its Sense UI, with separate scenes possible for Work, Home, Weekend, etc. But that’s unfortunately as far as many companies have gone. So in the meantime, that’s a nice phone you have there. Can we have the ability to erase it?
If I had only two lessons to impart to people building mobile user experiences, they would be these:
- A smartphone is not a small desktop (or laptop).
You’d think that’d be fairly obvious, and yet a lot of mobile applications with desktop equivalents seem to be designed as just stripped-down versions of the desktop experience.
Take email as a ubiquitous example. Most mobile email clients are designed as smaller versions of desktop email clients: inboxes, folders, read/unread flags, etc. But if you look at how people actually handle mobile email, the usage patterns are different. Mobile email users focus on triaging mail by (a) identifying what’s new (which isn’t necessarily the same as what’s unread), (b) figuring out what they can delete right away, (c) determining what they have to handle immediately because it’s time critical, and (d) deferring everything else until they reach a desktop or laptop (a rough sketch of this triage model appears after the second lesson below). And despite that different mobile focus, mobile email clients are designed assuming you’re reading and responding to messages just like on the desktop.
This ties back to my previous point about feature selection: don’t assume your mobile users will interact in the same ways as your desktop users. Figure out what they’re really going to do and support that, even if the feature set and UI need to be different.
- Users’ activities will span devices.
Continuing the email example: when users defer handling messages on their mobile devices, they want to resume handling them on their desktops and laptops. Why is re-marking messages as unread the way most users end up handling that functionality?
Another example: users who employ their mobile phones to do price comparisons will nearly always defer making a purchase from an online supplier until they reach a desktop or laptop (why is an interesting question; I suspect it relates both to the perceived time of completing the transaction on a smartphone and to a concern about missing an important detail on the smaller screen). Despite that common pattern, the closest I’ve seen to a mobile application that helps make that transition to the desktop is Amazon’s iPhone app that lets you save items to your wishlist. And even then it’s up to the user to remember that they added the item and to complete the purchase.
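Here is the rough triage sketch promised under the first lesson; the categories mirror the behavior described there, and every type and predicate name below is hypothetical rather than any real client’s model:

```typescript
// A minimal sketch of triage-first mobile email, per the first lesson above.
// The types and predicates are hypothetical, not any client's real model.
type TriageAction = "delete" | "handle-now" | "defer-to-desktop";

interface MobileMessage {
  id: string;
  from: string;
  subject: string;
  arrivedSinceLastCheck: boolean;   // (a) "new" is not the same as "unread"
}

function triage(
  message: MobileMessage,
  isJunk: (m: MobileMessage) => boolean,
  isTimeCritical: (m: MobileMessage) => boolean
): TriageAction {
  if (isJunk(message)) {
    return "delete";                 // (b) obvious deletions, handled on the spot
  }
  if (message.arrivedSinceLastCheck && isTimeCritical(message)) {
    return "handle-now";             // (c) the rare message that can't wait for a laptop
  }
  return "defer-to-desktop";         // (d) everything else waits for a bigger screen
}
```

A mobile client built around actions like these, rather than around folders and read flags, would match what people actually do with email on their phones.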
So there you have it, my two lessons everyone should know. Don’t design your mobile apps assuming they should be smaller versions of your desktop apps, and don’t assume the mobile app will work in isolation. Go forth and build great stuff.
Apple’s announcement of the iPad last week has me thinking a bit about Microsoft’s Tablet PC. I still have a soft spot for Microsoft’s original tablet vision, and I still fondly remember the HP tc1100 I used for a while when I was still a professor at Georgia Tech (although the thing was admittedly woefully underpowered even when new).
Given that Microsoft’s Tablet PC notion dates back to 2001, why is Apple the one generating so much buzz around tablets right now? I think a large part of the reason is that, despite Microsoft paying lip service to tablets and dedicating some resources to them, they never really seemed to take the platform that seriously. Rather than a full-fledged tablet effort, their tablet work always seemed like an afterthought to the Windows operating system. It was a feature, not a platform with its own particular interaction paradigm. The stylus was really just a replacement for the mouse and keyboard. The bright side was that Windows apps worked with Microsoft’s tablets “out of the box”, without any additional effort by developers. But that was arguably the downside as well: since developers didn’t have to modify their applications to work on tablets, they didn’t bother to. All apps worked on tablets, but none of them worked particularly well on tablets or really took advantage of their strengths (with a few notable exceptions, like Alias’ Sketchbook Pro, which I thought was a great example of an application designed to take advantage of a tablet’s capabilities).
So despite the kvetching around Apple’s iPad going with the iPhone OS rather than OS X, it’s arguably an advantage because developers won’t be able to get away with minimal modifications to desktop OS X applications that won’t really leverage the tablet platform. Instead they’ll try to get away with minimal modification to iPhone applications, but at least those are already designed around touch interaction.
Of course, only time will tell if Apple made the right choice with the iPad. But unfortunately, after 9 or so years the evidence certainly suggests that Microsoft made some poor choices with its tablet.