Omni Automation

Designing Voice and Touch Screen Interfaces for a Smart Home System

Omni Automation approached our professor with an opportunity to help them with their home control experience. Their initial prototype consisted of a rough wall interface. With that prototype in mind, we sought to expand their home control functionality to include a voice interface and a mobile application. In partnership with my teammates Kara Bougher and Sahithi Muvva, we delivered a voice and mobile prototype of this system.

Role: Research, Interaction Design, Visual Design, Voice Design, Design Strategy, Design Mentor


"How might we...

Understand how users might effectively interact with the Omni smart home system via voice and mobile touch screen interfaces

Gauge user sentiment toward the privacy and security of such a system

Determine whether voice is a useful method of interaction, and if so, in which use cases

Provide designs and recommendations to the company "



Initial Research

Creating these deliverables required a deep understanding of the smart technology and voice space. We conducted a first-round cognitive walkthrough of their initial prototype to assess where the current product stood.

My teammates and I delved into the product space via a literature review. We ran a competitive analysis to identify which products Omni would compete with and which it could integrate with. We also conducted contextual interviews in our target audience's homes to gather detailed qualitative data about our users.

We used affinity diagramming and thematic analysis to dissect the data and identify patterns. We crafted 3 key personas representative of our users and a day-in-the-life model to better understand our users' daily routines. Given the project's constraints, we focused on 2 of the 3 personas: Laura and Allison.

View the Personas!


Key Requirements

With the increasing usage of smart technologies, our product should not interfere with our users’ daily lives. It should consider how it can be a part of it, not a hindrance.

Voice control needs to be natural in order to minimize frustration levels.

Our solution needs to be easy to use, and customizable, in order to appeal to many different users with varying levels of experience with voice interfaces.

Our system must offer ways to opt out of anything that could pose a security or privacy risk; features that feel too invasive come across as creepy.



Brainstorming the Interface

I really wanted to use this part of the process as a way to grow as a design mentor, sharing knowledge to help my teammates become more ready for industry work. I led a brainstorming session with my awesome teammates to come up with ideas on what this interface would be. Drawing on my industry product design experience, I nudged them to push outside the box and think bigger. I led in defining key brainstorming guides: How might we...

Design something that is simple to install
Something that is trustworthy
Is compatible with many devices
Has voice capability
Considers single family and apartment homes
Can use a wall interface

We came up with 3 distinct design concepts to choose from, and chose one.





The chosen concept was a primarily voice-controlled device that turns common non-smart home assets around the house into smart assets, making it possible to interact with these items through home assistants or smartphone virtual assistants the user already owns. These include Google Home, Alexa, Siri, and Cortana; our prototype used Alexa.





Low-Fi Design

We collaborated deeply on the scripts, focusing on the break points: when the user would respond, what they would say, and what the system's response would be. This type of design is fascinating, as there are nigh endless possibilities to consider for the right responses. We also considered different cultures and languages. Through our research and the insights from our product partners, we focused on designing just a few scenarios. These would be enough to gather the information needed to answer our initial driving questions.
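A voice script like the ones we wrote can be thought of as a branching dialog: each state anticipates certain user intents and defines a fallback for everything else. A minimal sketch, with hypothetical intents and replies (not the actual Omni script):

```python
# Sketch of a branching voice script. Each node maps a recognized
# user intent to (system reply, next node); the "unknown" branch
# models a break point where the system recovers gracefully.
# Intent and room names here are illustrative, not from the project.
SCRIPT = {
    "start": {
        "turn_on_lights": ("Which room's lights should I turn on?", "ask_room"),
        "unknown": ("Sorry, I didn't catch that. What would you like to do?", "start"),
    },
    "ask_room": {
        "living_room": ("Okay, turning on the living room lights.", "done"),
        "unknown": ("I didn't recognize that room. Which room?", "ask_room"),
    },
}

def respond(state: str, intent: str) -> tuple:
    """Return (system_reply, next_state) for a user intent,
    falling back to the node's 'unknown' branch."""
    node = SCRIPT[state]
    return node.get(intent, node["unknown"])

reply, state = respond("start", "turn_on_lights")
```

Writing the script as explicit states made it easy to see where a conversation could stall and which fallback prompt each break point needed.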

I created some very light low-fidelity mobile screens focused strictly on feedback and on notifying the user of the voice interactions Omni needed. We wanted to first create an interface with voice as the primary input. Notice in the screens shown that the visuals are just light alerts and some voice feedback.

Guerrilla Testing

Using our script and this very lo-fi screen prototype, we ran some guerrilla usability testing sessions for early feedback on our general direction. To do our testing, we gave users a couple of the scenarios we defined and had them speak to our "system" (in this case, our computer). We played back a message based on their response and asked for their feedback.

See the Script





After conducting our rapid testing, we quickly realized that an actual interface is extremely important in this type of system for confirmation, peace of mind, and avoiding cognitive overload. This is especially important for our Laura and Allison personas.





Hi-Fi Design

We iterated on our designs. For the scripts, we used our findings to consolidate and trim some parts, making communicating with Omni a little more natural and less cumbersome.

I took the lead on the mobile interface we crafted. These designs needed to address the needs of both Allison and Laura, Allison being a little more tech-trend savvy and Laura being completely new to smart systems. I focused on clarity of the system's status and direct mirroring of what the voice UI would say. Making that correlation clear and providing confirmation was important to remove the anxiety of configuring the system by voice. Each icon was also carefully chosen with these concerns in mind. I chose darker colors for ease on the eyes and blue for a secure feel.



The Prototype

We crafted a semi-functional prototype using InVision for the mobile application and Invocable to simulate a voice experience through an Alexa smart device. We tested this prototype with users against the key scenarios to further evaluate it and answer the top-level questions we set out to understand. Using this prototype, we performed a formal usability evaluation with the scenarios we designed for.

Findings

Our prototype performed well with users and is on the right track. The tasks were completed successfully. Adding the mobile interface worked as predicted: it alleviated tension for our target users and solidified their decisions. The interface itself performed well in terms of simplicity and ease of use, satisfying the needs of our Laura and Allison personas.



Final Designs and Handoff

After wrapping up the testing, I tweaked the designs to address the visual issues users called out in the mobile app (colors too dark, accessibility concerns). We passed our recommendations to the Omni team, along with all design assets and the prototype.

For voice, we recommended streamlining any complex tasks, possibly limiting those tasks to the mobile app, and considering even more visual confirmation in the mobile app.

We also recommended continued research in mobile and voice transitions, voice UI in general, machine learning, and further user testing.
