Zero UI: Designing for Screenless Interactions

We are starting to live our daily lives in a post-screen world. The advent of smart, contextually aware devices has changed how we interact with content, and the data these devices collect is a snapshot of how we connect to our physical and digital environments. Designers will need to build screenless experiences that leverage data and algorithms to create value for users.

By definition, Zero UI removes the traditional graphical user interface (GUI) from the equation and relies instead on natural interactions: haptic feedback, context awareness, ambient displays, gestures, and voice recognition.

The best interface is no interface.

– Golden Krishna

Haptic Feedback

Haptics (or kinesthetic communication) provides the user with motion- or vibration-based feedback. Sega’s arcade game Moto-Cross incorporated haptic feedback through the controls to add to the experience of colliding with other players on screen, and Nintendo brought haptics to home consoles with its Rumble Pak accessory. Today, we commonly experience haptics as we interact with small touch screens, and fitness trackers and smartwatches use them to deliver notifications to the wearer. Haptic feedback is being extended to clothing, while ultrasound-based haptics (such as Ultrahaptics) will bring the sense of touch to virtual reality (VR) and Kinect-style gaming. Keeping haptics to a minimum and avoiding overuse enhances the value of the feedback for users.
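
As a rough sketch, a minimal haptic notification cue on the web might use the Vibration API (TypeScript); support varies by browser and device, and the 40 ms pulse is only an illustrative value:

```typescript
// Minimal sketch: a short, sparing haptic cue using the web Vibration API.
// Support varies by browser and device; feature-detect before calling.
function notifyWithHaptics(pattern: number[] = [40]): boolean {
  if (!("vibrate" in navigator)) {
    return false; // no API support; fall back silently
  }
  // A single short pulse keeps the feedback subtle rather than intrusive.
  return navigator.vibrate(pattern);
}

// Example: one brief pulse for an incoming notification.
notifyWithHaptics();
```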

Context Awareness

Devices and apps that are contextually aware and remove the need for additional interactions simplify digital and physical experiences by anticipating user wants. Personalization allows Zero UI devices and applications to be preemptive, predictive, and proactive. Domino’s Zero Click App works on the premise that if you launch the app, you want pizza delivered. The only way not to have pizza delivered is to close the app within ten seconds.
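
A minimal sketch of that “launch implies intent” pattern might look like the following; the ten-second window comes from the example above, while placeOrder and the returned cancel handle are hypothetical stand-ins:

```typescript
// Sketch of a "zero click" flow: launching implies intent; the only explicit
// action is cancelling within a short grace period. placeOrder() is a
// hypothetical stand-in for whatever fulfils the default order.
const CANCEL_WINDOW_MS = 10_000;

function startZeroClickOrder(placeOrder: () => void): () => void {
  const timer = setTimeout(placeOrder, CANCEL_WINDOW_MS);
  // Returning a cancel function is the single escape hatch the UI exposes.
  return () => clearTimeout(timer);
}

// Usage: the countdown starts on launch; closing the app calls cancel().
const cancel = startZeroClickOrder(() => console.log("Order placed"));
// cancel();  // invoked if the user closes the app within the window
```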

Context of use is being integrated into devices, reducing the need to interact with app or device settings to get what you want. At first glance, Apple’s AirPods are just wireless earphones, yet they offer relevant, customized experiences to the user. Remove one AirPod and music playback switches to mono; remove both and playback stops, resuming when the user puts them back in.
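
The actual AirPods firmware is not public, but the context-of-use rules described above can be sketched as a small state handler; EarState and Playback are hypothetical types for illustration:

```typescript
// Hypothetical sketch of context-of-use rules like the AirPods behaviour above:
// the in-ear sensor state drives playback with no explicit UI.
type EarState = { left: boolean; right: boolean };

interface Playback {
  setMono(on: boolean): void;
  pause(): void;
  resume(): void;
}

function onEarStateChange(state: EarState, playback: Playback): void {
  const inEar = Number(state.left) + Number(state.right);
  if (inEar === 0) {
    playback.pause();          // both removed: stop playback
  } else if (inEar === 1) {
    playback.setMono(true);    // one removed: collapse to mono
    playback.resume();
  } else {
    playback.setMono(false);   // both in: full stereo
    playback.resume();
  }
}
```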

Other devices, like the Nest Thermostat, rely on collecting usage and interaction data to adapt to the user’s anticipated needs. By leveraging sensors within a device, or location data, we can design contextual experiences that become implicit rather than explicit interactions. What situation or context is the user in? What is the context of use of the device? The aim is to design interactions that sit in the background and are temporal rather than persistent.
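
Nest’s real scheduling algorithms are proprietary, but a toy sketch of implicit adaptation might predict a setpoint from the user’s past manual adjustments at the same hour; the types and fallback value here are assumptions for illustration:

```typescript
// Toy sketch of implicit adaptation: predict a thermostat setpoint for the
// current hour from the user's past manual adjustments at that hour.
type Adjustment = { hour: number; setpoint: number };

function predictSetpoint(history: Adjustment[], hour: number, fallback = 20): number {
  const sameHour = history.filter(a => a.hour === hour);
  if (sameHour.length === 0) return fallback;            // no data yet: use a default
  const sum = sameHour.reduce((acc, a) => acc + a.setpoint, 0);
  return sum / sameHour.length;                          // average of past choices
}

// Usage: the device applies the prediction without being asked.
const history: Adjustment[] = [
  { hour: 7, setpoint: 21 },
  { hour: 7, setpoint: 22 },
];
predictSetpoint(history, 7); // => 21.5
```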

The real problem with the interface is that it is an interface. Interfaces get in the way.

– Don Norman

Glanceability and Ambience

Ambient devices work on the principle of glanceability, with no need to open applications or read notifications. One glance should provide the user with the needed information or context, much like a wall clock or a single-day calendar. The Nabaztag Rabbit was a companion-style device that provided glanceable experiences by combining colored lights with changing positions of the rabbit’s ears, while the Chumby provided a snapshot of information through widgets. The underlying premise of ambient devices is to create a seamless bridge between physical and digital spaces; these interactions are connected, browserless experiences. Both devices were short-lived because they could not adapt to changing technologies and users’ evolving information needs.
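
Glanceability usually means reducing a metric to something readable in a single look. As a rough sketch, a commute-delay indicator might map minutes of delay to a single colour, in the spirit of an ambient light; the thresholds are illustrative only:

```typescript
// Sketch of glanceability: reduce a metric to a single colour that can be read
// at a glance, much like an ambient lamp or the Nabaztag's lights.
type GlanceColour = "green" | "amber" | "red";

function commuteColour(delayMinutes: number): GlanceColour {
  if (delayMinutes <= 5) return "green";   // leave as planned
  if (delayMinutes <= 15) return "amber";  // leave a little early
  return "red";                            // significant disruption
}

commuteColour(12); // => "amber"
```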

Gesture-Based Interactions

Gestures need to be easy to learn and repeatable. One of the problems with gestures is getting systems to recognize the physicality of actions (wave, hover, and swipe) in three-dimensional space. Poor system responsiveness to gesture input creates its own problems, as users start cycling through multiple gestures trying to get the system to respond. There needs to be a balance between teaching the gestures and giving the user feedback that actions and tasks have completed successfully. Designing for gestures means being adaptive: working within the limitations of the system (processing speed) and the focal length of the camera, and establishing a minimum and maximum distance from the screen or camera within which interactions can occur. Google’s Project Soli makes “your hands the only interface you need” by using radar to detect fine movements. UI layers and interactions should not push the user’s motion far from the body, as this is fatiguing; make gestures small and natural.
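
A minimal sketch of that balance might be a swipe classifier with explicit distance and time thresholds; the threshold values are illustrative, not empirically tuned:

```typescript
// Minimal swipe recogniser: small, repeatable gestures with explicit distance
// and time thresholds.
type Swipe = "left" | "right" | "none";

const MIN_DISTANCE_PX = 60;   // ignore tiny, accidental movements
const MAX_DURATION_MS = 500;  // a swipe should be quick and deliberate

function classifySwipe(startX: number, endX: number, durationMs: number): Swipe {
  const dx = endX - startX;
  if (durationMs > MAX_DURATION_MS || Math.abs(dx) < MIN_DISTANCE_PX) {
    return "none"; // tell the user nothing was recognised, so they can retry
  }
  return dx > 0 ? "right" : "left";
}

classifySwipe(300, 180, 240); // => "left"
```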

Voice Recognition

Voice recognition has been a part of science fiction since the early 1970s and entered the mainstream with the advent of voice recognition and search on both iOS and Android phones. Users tend to accentuate syllables, words, and phrases in the belief that this will help the system recognize their queries and commands. Designing a conversational UI requires additional user research to find out how users will phrase and construct their queries or statements. One of the harder parts of voice recognition is adapting the system to regional dialects and slang. Something as simple as ordering a pizza can become a complex design problem once all the different ways of ordering are considered. Understanding user wants and intent, or motivation, helps determine the phrasings and expressions to support.
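
Real voice systems rely on trained language understanding, but even a toy sketch of intent matching shows why phrasing research matters; the patterns below cover only a few illustrative ways of asking for a pizza:

```typescript
// Sketch of intent matching for varied phrasings of the same request.
// Production systems use trained NLU models; this keyword approach only
// illustrates why phrasing research matters for a conversational UI.
type Intent = "order_pizza" | "unknown";

const ORDER_PATTERNS = [
  /order .* pizza/i,
  /get me a pizza/i,
  /i('| a)?m hungry for pizza/i,
  /pizza.* deliver/i,
];

function detectIntent(utterance: string): Intent {
  return ORDER_PATTERNS.some(p => p.test(utterance)) ? "order_pizza" : "unknown";
}

detectIntent("Could you order me a large pizza?"); // => "order_pizza"
```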

The Amazon Echo keeps ambient or background conversation from triggering interactions by requiring an initiation phrase with each command and query.
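
A simple sketch of that gating idea: transcribed speech is ignored unless it begins with the initiation phrase. The wake word and the parsing below are illustrative only, not how the Echo actually works:

```typescript
// Sketch of wake-word gating: speech is ignored unless it begins with the
// initiation phrase, so background conversation never triggers a command.
const WAKE_WORD = "alexa"; // illustrative; any initiation phrase works

function extractCommand(transcript: string): string | null {
  const text = transcript.trim().toLowerCase();
  if (!text.startsWith(WAKE_WORD)) return null;            // background talk: do nothing
  return text.slice(WAKE_WORD.length).replace(/^[\s,]+/, ""); // the command that follows
}

extractCommand("Alexa, what's the weather?"); // => "what's the weather?"
extractCommand("We should order pizza later.");  // => null
```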

Conclusion

The future of Zero UI is in leveraging data and understanding user intent to design and build personalized user experiences that are relevant and anticipate user needs. Contextual devices create experiences that extend beyond the screen and connect our digital and physical worlds. Minimizing dashboards and providing information that is glanceable and creates value for users adds to the challenge of designing for screenless interactions. User research will extend beyond how we interact with an interface to how we live and use physical objects in our daily lives.
