Audio Interface Design: Why R2-D2 Couldn’t Talk
Growing up an avid Star Wars fan, it was always my dream to have a robot like R2-D2. In case you aren’t familiar, R2-D2 is the trashcan-looking robot who is one of the main characters in the Star Wars trilogy (he’s in the new ones too, but I think it’s best for everyone that we pretend those didn’t happen).
One aspect of R2-D2 that always mystified me was that he couldn’t talk. C-3PO, his robot companion, could speak six million languages, including excellent English with a hint of a British accent, yet all R2-D2 could muster were beeps and whistles. Clearly the technology was there to let him speak, but whoever designed him intentionally decided to stick with the beeps and whistles.
Now I know this is a fictional movie, but this got me thinking: is there a reason we don’t want our robots to talk? In the world of audio user experience, there has been a lot of ongoing research into non-verbal audio user feedback (earcons). One of the barriers to making this technology a reality is justifying using non-verbal sounds over natural language feedback (speech). I believe I have found that justification in R2-D2.
In the Star Wars universe, R2-D2 is what is referred to as an astromech droid. An astromech droid’s primary function is navigating spaceships. Basically, this makes R2-D2 a GPS on wheels. Even our primitive GPS devices can talk (complete with the British accent). Anyone who’s used these contraptions knows how annoying that voice can become during a long car ride. Just imagine what it would be like on a trip between solar systems.
With something as simple as directions, non-verbal audio cues could communicate all the information you need: a beep for right, a whistle for left. The advantage here is that the messages are short, and the severity or urgency of the turn can be communicated through the tone or duration of the sound. This would reduce the intrusive nature of the sound and enhance the user experience.
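To make the idea concrete, here’s a toy sketch (not from the original post) of what such a cue generator might look like in Python, using only the standard library. The specific mappings are my own assumptions: right gets a higher base pitch than left, and urgency raises the pitch and lengthens the cue.

```python
import math
import struct
import wave

SAMPLE_RATE = 22050

def earcon(direction, urgency):
    """Synthesize a short navigation earcon as 16-bit PCM samples.

    direction: 'right' -> higher base pitch, 'left' -> lower (assumed mapping).
    urgency: 0.0-1.0; more urgent turns get a higher, longer cue.
    """
    base = 880.0 if direction == "right" else 440.0
    freq = base * (1.0 + 0.5 * urgency)   # urgency raises the pitch
    duration = 0.15 + 0.25 * urgency      # ...and lengthens the sound
    n = int(SAMPLE_RATE * duration)
    return [int(32767 * 0.5 * math.sin(2 * math.pi * freq * t / SAMPLE_RATE))
            for t in range(n)]

def save(path, samples):
    """Write the samples out as a mono WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))

# A gentle left turn vs. an urgent right turn:
save("left_soft.wav", earcon("left", 0.1))
save("right_urgent.wav", earcon("right", 0.9))
```

The point isn’t the synthesis itself but the encoding: two continuous parameters (pitch, duration) carry the whole message in a fraction of a second, where speech would need a full sentence.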
Of course, R2-D2 could do a lot more than just navigate. He could store and share messages and files (including Death Star plans), interface with other computers, and come up with solutions to everyday problems (e.g. getting trapped in a garbage compactor). In many ways he was more like a smartphone on wheels. Here the value of the beeps and whistles is even more salient. If your phone could communicate with you through non-verbal sounds, it could tell you something without the people around you getting the message as well.
Here we see a clear application of non-verbal audio feedback. With modern digital signal processing, your mobile device could create unique sounds to represent each of your contacts. Features of the sound bite could be altered to communicate the type of message (e.g. email, text, call) and the urgency of the message. Based on short audio cues, users would be able to identify the source and nature of the message they received without people in their environment knowing.
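As a rough illustration of that idea (my own sketch, not something from the post), you could derive a stable, unique pitch for each contact by hashing their name, then encode message type and urgency in the rhythm of the sound. The pulse counts and timing below are invented for the example.

```python
import hashlib

# Assumed mapping: each message type gets a distinct pulse count.
MESSAGE_PULSES = {"email": 1, "text": 2, "call": 3}

def earcon_spec(contact, msg_type, urgency):
    """Derive earcon parameters for a notification.

    The contact name hashes to a stable pitch, so the same person
    always 'sounds' the same; message type sets the pulse pattern,
    and urgency speeds up the rhythm.
    """
    digest = hashlib.sha256(contact.encode()).digest()
    # Map the hash into a comfortable 300-900 Hz band.
    pitch = 300 + (int.from_bytes(digest[:2], "big") % 600)
    pulses = MESSAGE_PULSES[msg_type]
    gap = 0.25 * (1.0 - 0.6 * urgency)  # urgent messages pulse faster
    return {"pitch_hz": pitch, "pulses": pulses, "gap_s": round(gap, 3)}

print(earcon_spec("Leia", "text", 0.8))
```

Because the pitch depends only on the name, a user could learn to recognize “Leia’s sound” the same way we recognize a voice, while everyone else in the room hears only an anonymous chirp.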
Perhaps most important to remember is that C-3PO’s ability to talk made him the butt of many jokes (in many ways he was the Jar Jar Binks of the original trilogy). The other characters rarely welcomed his annoying banter. R2-D2, on the other hand, was considered a trustworthy, hardworking part of the team. Maybe this is because we don’t really want to have a conversation with our devices. I don’t want my fridge to talk back to me, and I don’t see any need to have a chat with my phone. I want these devices to do what I say and only share the information that is important in the most concise manner possible.
As the technology to allow computers to talk becomes a reality, it is important for us to consider what we want them to say and how we want them to say it. A world where everything with a microprocessor is talking like C-3PO is going to get pretty noisy and extremely annoying. For more on the application of audio in mobile interface design, check out my white paper, 7 Reasons Audio Will Save Mobile User Interface Design.