Gesture Stove

Project Overview

The goal of this project was to design a simple gesture-controlled app for iPhone, tailored to the needs of the user.

This project had me asking questions like:

• How do I utilize a gesture that makes sense?
• How does it feel to do the gesture?
• How do I handle error prevention and error recovery?
• Is the gesture too big or too small?
• How is the feedback represented in the UI?
• How does the user know the interface read their gesture correctly?

By the end, I had an understanding of what goes into creating a gestural interface.

Timeline

September - October, Fall 2020 (4 weeks)

Skills and Tools

Figma, Adobe Premiere, Vectary (3D modeling), digital hand sketching, primary/secondary research

Demo

Here I’ve included a narrated video of the final Gestural Stove interface.

Problem

What would happen if we extended gestural interfaces into kitchen appliances?

- What is considered a “good” gesture?
- How can people benefit from a gestural interface?
- How can you apply what people already know to make onboarding easier?
- Who would use this and when?
- What are the ramifications of a gesture-only stove?

Research

Before designing anything, I started by trying to understand gestural interfaces through user interviews.

Similar to a card-sorting exercise, I provided interviewees with an interface and asked how they would control that touch interface using only gestures. This gave me an idea of how people translate motion into a command.

Test Interface #1: Camera Rotation

I want to know how someone would instinctively rotate a camera and take a picture.

Interview 1

Used full arm motion to rotate camera

Imagined a camera in the air and took the picture with one finger

Interview 2

One hand out then twist at wrist for rotation.

One hand snapshot

Felt using both arms to rotate the camera was too cumbersome. Also did not want to use two hands because the task didn’t seem big enough for two hands.

Interview 3

Hands parallel

Hands rotate

In the final position, the pointer finger then does a push motion

Pointed out that instead of using the open hand for the single-handed gesture (from #2), a pinch-and-rotate movement may be better.

Test Interface #2: Scrolling and Selecting

I want to know how someone would instinctively scroll on a page and select a specific element.

Interview 1

Imagined augmented reality

Trusted the computer would recognize where he wanted to go and gestured a button push.

Interview 2

Wanted a Wii or LG TV remote experience: a point-and-click approach

Pointed a finger and moved it around the screen, then pressed forward to indicate a button push

Interview 3

Dragged a finger like it was a cursor (again, similar to #2)

Not fully sure how to click or press; thought maybe “then just push?”

Thought it would be easier to use a swiping motion, expecting the UI to indicate the selected button, then pushing to select

Test Interface #3: Horizontal Scrubbing & Menu

I want to know how someone would instinctively bring up a menu, perform an action/setting change, and return to the content.

Interview 1

Thought to bring up the menu by starting with the hand in a lower position

Swiping the hand upwards would bring up the menu for pause/play and forward/backward

Made a fist to indicate grabbing the grabbable video slider; this gesture would be specific to this task

Sliding the hand over would move the slider, and stopping would place the indicator at the desired spot in the video.

Suggested a fist and then opening the fingers for selecting

Interview 2

Did not want to have to open a menu in order to move the slider forward; suggested that simply making a fist then sliding it over should accomplish this task

Felt the gestures for opening the menu were too much of an extra step.

Interview 3

Wanted to pinch the bar, then slide to move the indicator; similar to the others’ fist gesture, but with just the fingers instead.

Grabbing with a fist felt like too big an action: it could be confused with selecting, and wasn’t unique enough for the slider motion considering the other buttons in the app.

Ideation

After analyzing the interviews, I began sketching out ideas for potential use cases for a gestural interface.

I created sketches based on the pain points identified during the user interviews to help inspire a gestural solution. By asking what people do at home on a regular basis, I saw enough patterns to inform some ideas.

This first sketch identifies a moment where using smart lights in the home requires an extra accessory or pulling out your phone to turn on the lights.

The potential gestural interface keeps the accessory, but rather than using buttons it looks for gestures. This would keep the control free of outside germs and would be faster than pulling out your mobile device and opening an app.

When using Zoom, unmuting yourself can sometimes seem like a hassle when you can’t find your mouse, or when pressing the space bar isn’t an option because another window is in focus.

Incorporating gestures specific to Zoom could make it easier to access certain settings more quickly for a seamless Zoom experience.

Using a stove isn’t so bad, but it can get messy the bigger the meal is. Over time, cross-contamination between the food and the hardware makes for a sticky stove.

Even with messy hands, gesture controls could help keep the stove clean. Perhaps gestures could also allow for quicker setting changes.

I revisited one of my interviewees for a follow-up interview and furthered my investigation into their daily habits. Once I confirmed their biggest concern was in the kitchen, I decided to move forward with designing the Gestural Stove.

Process

User Stories

The purpose of honing in on three top user stories is to remain focused in the solution space during ideation.

As someone who cooks from home every day, I want to be able to comfortably adjust the settings on the stove so that I can avoid hurting myself when I cook.

As someone who cooks with utensils, I need something that responds to only one hand so I can keep cooking with the other.

As someone who cooks every day, I want to keep my hands working and flow through the movements of cooking more fluidly.

User Scenario Sketches

This sketch describes a situation where handles located at the rear of the stove actually make it harder to access the knobs. Although manageable, it is still a pain point for the person cooking.

In this sketch, the cook has to reach over the steaming pots to change the settings on the stove. This is uncomfortable for the user and could potentially cause burns. The flaw is specific to stoves with this knob placement, but it is one of the most common stove designs.

Bonus Sketch

In the moment of sketching, I had a couple of ideas on how I thought the interface should look, so I jotted those down for later.

UI Solution Sketching

This sketch explores what it could look like to select a specific burner and set a timer. Rather than sketching the exact flow, I wanted to capture the gist of the experience.

Panel 1 describes a burner that is already on, indicated by the orange ring and the number under that burner. Panel 2 depicts all the burners being selected at one time, and panel 3 confirms the correct gesture. Panels 4-5 show the interface for a burner that is cooling down after you turned it off.

Panels 1-4 depict choosing a specific burner and selecting it to adjust the temperature. Panels 5-6 show how the interface depicts turning on a burner.

Similar to the last sketch, panels 1-4 depict selecting a burner, then panels 5-6 show what it looks like when you gesture to turn off an individual burner.

Taskflow Diagrams

Working on these taskflow diagrams was great for preparing for potential errors people could run into. This diagram shows the flow for turning a single burner on and off.

Working on these inspired me to create a “stop all” feature, something that could work similarly to “Ctrl+Alt+Del”. Here I diagrammed the “all-off” feature and setting a timer!
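
To make those flows concrete, here is a minimal sketch of how the single-burner flow and the “all-off” escape hatch could be modeled as a small state machine. This is my own illustration in Swift, not code from the project, and every name in it (the gestures, the states, the `turnOffAll` callback) is hypothetical.

```swift
// A hypothetical model of the taskflow diagrams above: each recognized
// gesture advances a small state machine, and the "all-off" gesture is
// accepted from any state, like Ctrl+Alt+Del.
enum Gesture {
    case point(burner: Int)   // hover over a specific burner
    case push                 // confirm the highlighted command
    case shakeFist            // cancel / turn off
    case allOff               // emergency stop-all
}

enum StoveState {
    case idle                        // waiting for a hand to appear
    case highlighted(burner: Int)    // a burner is chosen, awaiting confirmation
    case burnerOn(burner: Int)       // the burner has been turned on
}

func next(_ state: StoveState, on gesture: Gesture, turnOffAll: () -> Void) -> StoveState {
    // Error recovery: "all-off" works no matter where you are in a flow.
    if case .allOff = gesture {
        turnOffAll()
        return .idle
    }
    switch (state, gesture) {
    case (.idle, .point(burner: let b)):       return .highlighted(burner: b)
    case (.highlighted(burner: let b), .push): return .burnerOn(burner: b)
    case (.burnerOn, .shakeFist):              return .idle
    default:                                   return state  // unrecognized gesture: do nothing
    }
}
```

The `default` arm is the error prevention the diagrams plan for: an unexpected gesture leaves the stove exactly where it was instead of doing something surprising.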

Wireframes & Picture Explanation

Turning On a Single Burner

Keeping the wireframes simple and focusing on the gestures themselves made sure I was concentrating on the task at hand. This shows the perspective of the user using the gestures to turn on a burner.

All Burners Off

Here I am addressing the “all-off” feature. I am playing with the idea that a cursor appears in the UI when it notices your hand, to indicate it is looking for a command.
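
As a rough sketch of how that “hand noticed” cursor could work under the hood, Apple’s Vision framework can report hand poses from a camera frame; if no confident hand is found, the cursor simply stays hidden. This is an assumption-laden illustration rather than the project’s implementation, and the 0.5 confidence threshold is arbitrary.

```swift
import Vision
import CoreVideo
import CoreGraphics

// Sketch: look for one hand in a camera frame and, if found, return a
// normalized cursor position (the index fingertip) for the UI to display.
func cursorPosition(in frame: CVPixelBuffer) -> CGPoint? {
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 1

    let handler = VNImageRequestHandler(cvPixelBuffer: frame, options: [:])
    guard (try? handler.perform([request])) != nil,
          let hand = request.results?.first,
          let tip = try? hand.recognizedPoint(.indexTip),
          tip.confidence > 0.5
    else { return nil }  // no confident hand: keep the cursor hidden

    // Vision's coordinates are normalized with the origin at the bottom-left,
    // so flip the y-axis for a typical top-left UI coordinate space.
    return CGPoint(x: tip.location.x, y: 1 - tip.location.y)
}
```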

Setting Timers

This screen shows how the user would set a timer for a specific burner. I felt it was necessary to be able to set multiple timers; this way, each dish can be easily managed.

After creating the first iteration of the Gestural Interface I went ahead and moved the design forward by adding colors and enhancing the feedback for each completed gesture.

Gesture Stove Onboarding Prototype

At this point I was able to construct an onboarding video using the prototype I created in Figma to explain the Gesture Stove experience.

This video was a great checkpoint for me to see an overview and understand the feel of the gesture stove. It reflects the UI with added colors and enhanced feedback for each completed gesture. This artifact could also serve as a value proposition for my concept.

After completing an overview/onboarding of the gesture stove, it was time to see how it’d look working in its intended environment.

Gesture Stove Prototyping in Action

I rendered a space with a stove using the 3D design tool Vectary. Using Adobe After Effects I superimposed my user interface onto my 3D environment and created prototypes of my design in action!

Final Design Rendered in a 3D Space

Turn On Burner
Choosing a burner to turn on is essential to using a stove! That’s why this is the first prototype we see. This covers moving your hand into view, choosing a burner, selecting it, and turning it on. Burners that are turned on show up in red and indicate the level of heat they were set to.

Turn Off Burner
Turning off a burner shouldn’t be hard; in fact, it needs to be easy. This is why it repeats the same actions as turning on a burner. Repetition makes it easier for users to remember how to use the gestures.

Setting a Timer
Setting a timer should be quick to access because cooking starts right when the food hits the pan. This gesture uses the different pads of your fingers to change the minutes or seconds of a timer. As always, pushing forward sets the gestured command into motion.
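
A minimal sketch of that adjustment logic, assuming (my assumption, not the project’s spec) that one finger pad maps to minutes and another to seconds:

```swift
// Hypothetical finger-pad-to-field mapping for the timer gesture.
enum FingerPad { case indexPad, middlePad }

struct BurnerTimer {
    private(set) var minutes = 0
    private(set) var seconds = 0

    // Each tick on a pad nudges one field; seconds stay in the usual 0-59 range.
    mutating func adjust(_ pad: FingerPad, by delta: Int) {
        switch pad {
        case .indexPad:  minutes = max(0, minutes + delta)
        case .middlePad: seconds = min(59, max(0, seconds + delta))
        }
    }
}

// e.g. two upward ticks on the index pad sets 2:00,
// then the "push forward" gesture would commit the timer.
var timer = BurnerTimer()
timer.adjust(.indexPad, by: 1)
timer.adjust(.indexPad, by: 1)
```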

Timer Off
In order to maintain the continuity of off/cancel/back, this gesture uses the shaking-fist motion to turn off a timer. The order of operations determines how shaking a fist behaves.

All Off
In an all-off situation, like the end of cooking or an emergency response, a simple gesture is required. Similar to how we are used to tapping the home button on our phones, the all-off (or quit) gesture is intended to feel the same.

Reflection

I feel very good about my gestural stove. I’m not sure it’s ready to go to market, but I have a really solid MVP, which is how any great start-up begins.

I think the part of my project that worked really well was the ease of use. The gestures are easy, but even easy gestures can become difficult to use if they need to be done in specific orders. I feel the order of operations for each task is easy to go through once you have followed the onboarding.

If I could improve my gesture stove, I would come back to the graphics of the UI. Although the UI is very simple and easy to read, I’d like to take a more stylistic approach so that it also matches the physical hardware of a stove. Another point to address is the location of the interface itself. One of the issues with cooking is pots that are too tall, or pots that produce a lot of steam. To make sure the gestures can be read properly, choosing a different location for the interface would ensure the user can operate the stove reliably.