Omni Automation approached our professor with an opportunity to help improve their home control experience. Their initial prototype was a rough wall interface that people could interact with. Building on that prototype, we set out to expand Omni's home control functionality to include a voice interface and a mobile application. Together with my teammates Kara Bougher and Sahithi Muvva, I delivered a prototype of this system. So, how did we do it?
Omni is an integrated smart home automation platform that programs itself. Its network of devices works together to recognize household members’ patterns and uses them to create a seamless smart home experience. This was absolutely new and exciting territory for us, so we began by defining exactly what the outcomes of the project should be. We established key goals and questions to answer:
Understand how users might effectively interact with the Omni smart home system via voice and mobile touch screen interfaces
Gauge user sentiment toward the privacy and security of such a system
Determine whether voice is a useful method of interaction, and if so, in which use cases
Provide designs and recommendations to the company
Our final deliverable would be a set of designs and recommendations for Omni.
We knew little about this space at the time beyond our own experiences with smart technology and voice UI. We conducted a first-round cognitive walkthrough of Omni Automation’s initial prototype to understand what we had to work with. The prototype consisted of a basic, mobile-style wall interface, with computer-generated interactions simulating connected devices.
What we found with the prototype concerned us. The basic functionality of a smart home wall interface was there, but it wasn’t easy to use through manual interaction. On the voice side, it lacked an easy setup flow, and its commands required memorizing yet another smart home language.
Knowing what we were dealing with, we set out to do some field research.
My teammates and I dug into the product space with a literature review and a competitive analysis to see which products Omni would compete with and which it could work alongside. We found multiple competitors with interesting smart technology, but most occupied narrow niches like lighting, only worked when configured with other smart devices, and generally operated on fixed schedules.
We realized Omni had the potential to stand out among these competitors, and it could do something more: learn.
We then conducted contextual interviews in our target audience’s homes to gather detailed qualitative data about our users. We needed to understand people’s expectations of a smart home, how they might interact with such tools in their own homes, and what their sentiments about it were and why. We used affinity diagramming and thematic analysis to dissect the data and identify patterns. From that, we crafted three key personas representative of our users and a day-in-the-life model to further understand their day to day. Given the constraints of the project, we focused on two of the three personas: Laura and Allison.
We identified three key user groups and captured each as a persona.
With the increasing use of smart technologies, our product should not interfere with our users’ daily lives. It should be a part of those lives, not a hindrance to them.
Omni originally planned to introduce its own smart voice interface, which would have required people to learn yet another program. Talking with people from each group, we found that they didn’t want to learn commands that felt unnatural to how they speak, or to relearn something they already knew.
As the personas show, our users have varying levels of experience with smart technology. Looking across the wider user base, we saw that a product that fails to account for the inexperienced would be a mistake. In its form at the time, Omni would not have meshed well with users like Laura or Allison, and might even have caused headaches for Johnathan.
Everyone we spoke to had varying levels of concern about privacy, but all ranked security first and personalization second for smart technology. The system we designed would need to give people a way to opt out of anything they didn’t want to be part of.
With these conclusions in mind, Sahithi, Kara, and I met for a few brainstorming sessions to generate ideas for what this interface could be. Drawing on our collective experience, we thought bigger about what our product could do and affect. What is this new idea? What could Omni do for Laura and Allison? How would they use it in their daily lives?
Well, they wouldn’t, unless it was built into something they already knew.
Allison and Laura already have busy lives, and while Allison may be fine with trying something new, expense is something she has to consider. The key point of the idea we selected is that it would be built on top of products that already exist. It would simply make their current lives easier and fit their current values.
Additionally, there are already powerful, easily accessible voice control experiences in Google Home and Alexa. Why craft another voice system when people are already getting used to these two? (They’re not the only systems with voice control, but they are prime examples.)
Design exercises for this kind of user interface were an interesting beast: how do you iterate on an interface that is partly intangible? Iterating on this design took more than screen work; it took a ton of writing. We met for multiple brainstorming sessions and iterated on various talk tracks for the voice UI, focusing on key scenarios for the product, as well as on how it might function within a visual interface.
Through our research, and with insights from our product partners, we focused on designing just a few scenarios. These would be enough to gather the information needed to answer our initial driving questions.
We sketched some very light, low-fidelity mobile screens focused strictly on feedback and on notifying the user of the voice interactions Omni needed. We wanted to first create an interface with voice as the primary mode. Notice in the screens shown that the visuals are just light alerts and some voice feedback.
Before continuing with design, we paused to test what we had. Using our script and this very lo-fi screen prototype, we ran rapid usability testing sessions to get early feedback on our general direction. We gave users a couple of the scenarios we had defined and had them speak to our "system" (in this case, our computer). We played back a message based on their response and asked for their feedback.
As we dug into the feedback, one key point kept coming up across the few sessions we had: everyone questioned whether it really worked. They wanted something to confirm the changes Omni made.
We iterated on our designs. For the scripts, we used our findings to consolidate and remove some parts, making communication with Omni a little more natural and less cumbersome.
I took the lead on the mobile interface. These designs needed to address the needs of both Allison and Laura: Allison is a little more tech-trend savvy, while Laura is completely new to smart systems. I focused on clarity of the system's status and on directly mirroring what the voice UI would say. Making that correlation visible and providing confirmation was important to remove the anxiety of configuring the system by voice. Each icon was also carefully chosen with these concerns in mind. I chose darker colors for ease on the eyes, and blue for a secure feel.
Now, to showcase something like this as a testable, high-fidelity prototype, we needed to get creative. Crafting a mobile application seemed straightforward, but what about voice?
We used the Wizard of Oz method. We crafted a semi-functional prototype that used InVision for the mobile application and Invocable to simulate a voice experience through an Alexa smart device. During testing, users could interact via Alexa and receive connected yet scripted responses, simulating Alexa working hand in hand with Omni.
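To give a sense of what such a scripted exchange looks like under the hood, here is a rough, hypothetical sketch of the kind of canned Alexa Skills Kit response a single scripted turn boils down to. The intent name and reply text below are illustrative only, not our actual script, and Invocable builds this kind of response through its visual editor rather than code.

```python
# Minimal sketch of one scripted voice turn, expressed as an Alexa Skills Kit
# style handler. Everything here is illustrative: "AdjustLightsIntent" and the
# reply text are made-up stand-ins for the scenarios we scripted.

def handle_request(event, context):
    # Pull the intent name out of the incoming Alexa request.
    intent = event.get("request", {}).get("intent", {}).get("name", "")

    if intent == "AdjustLightsIntent":  # hypothetical intent name
        speech = ("Okay, I've dimmed the living room lights. "
                  "You'll see the change confirmed in the Omni app.")
    else:
        speech = "Sorry, I didn't catch that. Could you say it again?"

    # Return a canned, pre-written reply in the standard Alexa response format.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```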
We were ready to answer the next round of questions with this prototype and see how it would perform with users:
Does the system match users’ expectations?
Where are the friction points in the voice and mobile flows?
Do users understand the responses given by the voice interface?
How natural is the conversation between the user and the system?
Does the user have difficulties understanding the voice interface?
What are the benefits of using the voice interface vs. the mobile application for tasks?
We tested this prototype in formal usability testing sessions to evaluate it further and answer the top-level questions we set out to understand, using the same key scenarios worked out with our stakeholders. Participants completed a series of four tasks, each with two subtasks (one starting in the mobile app and one starting with voice). We alternated the order of the subtasks with each participant, so half used mobile first and then voice, and the other half used voice first and then mobile.
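For the curious, the counterbalancing works out to something like the sketch below. The participant IDs and count are placeholders, not our actual session roster.

```python
# Illustrative counterbalancing of subtask order: even-indexed participants
# start each task with the mobile app, odd-indexed participants start with voice.
participants = ["P1", "P2", "P3", "P4", "P5", "P6"]

for i, participant in enumerate(participants):
    order = ["mobile app", "voice"] if i % 2 == 0 else ["voice", "mobile app"]
    print(f"{participant}: {order[0]} first, then {order[1]}")
```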
Users were able to navigate the application, and overall it met their expectations of a smart home control app. We were on the right track with the mobile application! The only changes needed were visual in nature, for accessibility.
It was a challenge for users to complete complex tasks with voice, as it was more difficult to manage multiple commands without a visual representation and confirmation. In general, people were more comfortable using the mobile app than the voice interface.
Users were able to understand the voice interface. Participants had a 100% task completion rate and found the tasks relatively easy.
Users did not have trouble interpreting what the voice interface was saying; however, they did experience some frustration when they had to repeat something they had already said in order to accomplish a task. This aligns with the earlier research we conducted.
When observing users switching from mobile to voice during tasks, we arrived at mixed conclusions. We observed hesitation in users’ answers but did not get any new information when we asked additional probing questions. One of the participants stated, “I think I expect [the interface] to stay in one state. I expect it to be all mobile or all voice.”
The majority of the participants were interested in and saw value in Omni’s machine learning capabilities, but also expressed some apprehension about the system observing and learning their daily routines, and about whether or not that information would be secure.
After wrapping up the testing, I tweaked the designs to address the visual issues users called out in the mobile app (colors that were too dark, accessibility concerns). We passed our recommendations over to the Omni team, along with all design assets and the prototype.
For voice, we recommended streamlining complex tasks, possibly limiting those tasks to the mobile app, and providing even more visual confirmation in the mobile app.
We also recommended continued research in mobile and voice transitions, voice UI in general, machine learning, and further user testing.