Talking to Your Self-Driving Car
Enabling people 65+ to regain their independence via emerging technology
Aaron Faucher, Ketki Jadhav, Zohaib Khan
Conversation Design, Video Directing & Editing
Adobe Premiere, After Effects
Designed a user experience for commanding an autonomous vehicle using conversation, not controls.
The senior population is expected to double by 2050 (Census)
More than 2 out of 3 people fear losing their independence in their old age (Telegraph)
Seniors can find it overwhelming to adapt to new technology
A driverless car helps elderly people regain their independence, and a CUI lets them simply say what they want the car to do instead of learning a new technology and interface.
A Focus on Senior Needs
CONVERSATIONAL TONE The CUI converts the AV from an inaccessible, intimidating piece of technology into a friendly, responsive assistant. For example, the car says "Where are we going?" instead of "Please set the destination."
VOLUME SENSITIVITY Many older users are hard-of-hearing, and the CUI contains an interaction to support this. It anticipates error states in which the user cannot clearly hear the conversation, and raises the volume accordingly.
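The volume-sensitivity interaction can be sketched as a small dialogue loop. This is only an illustration of the behavior described above, under our own assumptions: the class name, the starting volume, and the set of "misheard" phrases are hypothetical, not part of the project.

```python
# Phrases we treat as a signal that the rider did not hear the CUI.
MISHEARD_PHRASES = {"what?", "pardon?", "huh?", "can you repeat that?"}

class VolumeAdaptiveCUI:
    """Sketch of a CUI that raises its volume when the rider mishears it."""

    def __init__(self, volume=5, max_volume=10):
        self.volume = volume
        self.max_volume = max_volume
        self.last_utterance = None

    def say(self, utterance):
        # Remember what was said so it can be repeated on a mishear.
        self.last_utterance = utterance
        return utterance

    def hear(self, reply):
        # Treat "What?"-style replies as an error state: bump the volume
        # (up to a ceiling) and repeat the previous utterance.
        if reply.strip().lower() in MISHEARD_PHRASES:
            self.volume = min(self.volume + 1, self.max_volume)
            return self.say(self.last_utterance)
        return None
```

In practice the mishear signal would come from speech recognition rather than exact string matching, but the repair loop (detect, raise volume, repeat) is the same.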
PREDICTIVE MEMORY ASSISTANCE Older users may benefit from the kinds of interconnectivity the car can provide with other devices. In our scenario, the car connects to a smart fridge and reminds our user that she may want to pick up milk while she's on the way to the grocery store.
SMART DROP-OFF / PICK-UP Many older users have limited mobility. The CUI supports this need by allowing the car to drop users off closer to destination entrances and then self-park, reducing walking distance for seniors.
We used earcons as pre-attentive cues. We predicted that most riders would be engaged in heads-down activities while riding in the AV. If the CUI began talking immediately and without prompting, it would be too easy for a rider, especially one who is hard-of-hearing, to miss the first part of the utterance. We opted for subtle but attention-grabbing earcons that notify the user that the CUI is about to speak. Earcons played a similar role in providing auditory feedback when the user prompted the CUI.
Volume increase earcon
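The earcon-then-speak sequence above can be sketched as a tiny timing wrapper. The function name, callbacks, and pause length are our own illustration of the pattern, not the project's implementation.

```python
import time

def speak_with_earcon(play_earcon, speak, utterance, pause_s=0.6):
    """Play a short pre-attentive earcon, pause briefly so the rider can
    shift attention to the CUI, then deliver the spoken utterance."""
    play_earcon()
    time.sleep(pause_s)
    speak(utterance)
```

The same wrapper works in reverse for feedback earcons: play a confirmation tone immediately after the user's prompt is recognized, before the CUI responds.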
Designing a CUI is very different from designing a visual interface. Affordance, feedback, feedforward, everything that informs the user about how to use the system, must now be conveyed through intangible, sound-based interactions. We designed for this using a few key interactions:
Recognition of utterances
Prediction of intention
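Recognition of utterances and prediction of intention can be illustrated with a toy intent matcher. Here, keyword rules stand in for a real natural-language-understanding model, and the intent names and keyword lists are our own assumptions for the scenario, not the project's actual vocabulary.

```python
# Illustrative mapping from intents to trigger keywords (our own assumption).
INTENT_KEYWORDS = {
    "set_destination": ["go to", "take me", "drive to"],
    "adjust_volume":   ["louder", "quieter", "volume"],
    "smart_dropoff":   ["drop me", "entrance", "pick me up"],
}

def predict_intent(utterance):
    """Return the first intent whose keywords appear in the utterance,
    or "unknown" if nothing matches."""
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return intent
    return "unknown"
```

A production CUI would use a statistical intent classifier with slot filling, but the structure is the same: map a recognized utterance to a predicted intent, then act on it.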
Diagramming key interactions between the CUI and the user.
The full diagram can be viewed at goo.gl/DTAjnB
© Ketki Jadhav
Feature Analysis Matrix - CUIs
In an autonomous vehicle, where the user is relieved of the typical responsibilities of driving, should the conversational interface take on a less passive role, potentially as an entertainer, a conversationalist, or a source of content?
Unlike a traditional interface, a CUI gives designers no visuals or tangible interface to work with, which makes designing a solution challenging. Scenarios solve this problem by letting designers immerse themselves fully in the situation and imagine a natural dialogue flow.
Voice activation vs. Gesture activation
There is no natural conversation between the user and the CUI; the CUI only repeats the user's commands.
How will the CUI know which entrance or exit she'll take? How does it know when her errand is complete, especially if she ends up buying more items than planned?
Experience prototyping allows designers to show and test scenarios with users through active participation.
We got great feedback from conducting this exercise. Many of our interactions fell into the category of command-and-response; we were encouraged to think of ways to make the interaction more conversational. In response to this, we expanded our palette of possible interactions, thinking about other features or user intents that might lend themselves to more back-and-forth.