Source: Flickr user vasile23
Nearly three-fourths of iPhone 4S owners are satisfied with Siri, according to new figures from Parks Associates, but they aren't using it for much. While about one-third of those polled say they use Siri regularly to make calls, send texts or look up information, the voice recognition software is far less popular for tasks like playing music or scheduling meetings. And Parks Associates reported remarkably mixed reviews for Siri, with some users complaining about technical flaws like its difficulty in understanding accents.
That doesn’t come as shocking news to those of us who’ve followed voice recognition technology, which has long failed to live up to the hype. Apple only added to that buzz with its commercials for Siri, which portray a flawless technology that does everything from surfing the web to writing emails without a hiccup. While I think Siri performs well enough for some basic tasks, I’ve found it very difficult to use for searching for information online or performing other, more complicated tasks. Perhaps it’s no surprise, then, that one user is suing Apple for falsely advertising Siri’s functionality.
Speech recognition technology: No panacea
But the biggest problem with Siri – indeed, the biggest problem for voice recognition software in general – is that it’s often billed as a panacea. Tired of scrolling through screen after screen on your phone, or messing with your car radio dials as you’re cruising down the highway, or typing on a tiny keyboard? Use your voice!
As Parks Associates’ data suggests, however, voice navigation on smartphones is limited by use cases. It’s ideal for messaging or finding directions while behind the wheel, but it isn’t an option during meetings, in noisy surroundings or when your kid is lying next to you asleep. It’s also less than ideal on the subway or while watching TV with friends and family.
The emerging interface race
And speech is only one of many user interface technologies that we’ll see emerge over the next several years. Texas Instruments is developing phones that will recognize gestures, just as Microsoft’s Kinect sensor for the Xbox does. Software from a startup called Senseye tracks users’ eye movements through a smartphone’s front-facing camera to enable them to control their phones with their eyes. IBM predicts that within five years we’ll be able to control our phones with our minds via wearable technology (in a hat, for instance) that would transmit thoughts to our handsets.
Apple changed the user interface game with its iPhone, which eschewed the QWERTY keyboard in favor of a touchscreen that was easy and intuitive to navigate. Unlike that touchscreen, though, each of the next-generation technologies has shortcomings that will prevent any of them from emerging soon as a dominant interface for our smartphones (or other devices, for that matter).
Instead, we will choose specific interfaces based on what we’re trying to accomplish and where we’re trying to accomplish it. (We might use gestures for gaming, for instance, but they’re a poor fit for most other activities – especially in public places.) And those technologies will eventually be integrated – a voice-driven search might present several on-screen options that could be clicked to access more information.
We’re entering a fascinating era of user interfaces, and we’ll increasingly see applications that are built to leverage specific interface technologies. And consumers will slowly grow accustomed to choosing the right app and interface based on their objectives and surroundings. But no interface on the horizon is positioned to emerge as a dominant method for interacting with our phones. That includes voice recognition software – despite all the hype that Siri enjoys.