Thinking beyond audio: Augmenting headphones for everyday digital interactions


This research was accepted to ACM Designing Interactive Systems (DIS) 2023, where it received a Best Paper Award. DIS is dedicated to advancing the field of user-centered system design.

Headphones are traditionally used to provide and manage audio experiences through physical controls and a range of sensors. Yet these controls and sensors have remained confined to audio input and output functions, such as adjusting the volume or muting the microphone. Imagine if headphones could transcend their role as mere audio devices.

Because headphones rank among the most popular wearables on the market, we have an exciting opportunity to expand their capabilities by combining their existing sensors with supplementary ones, enabling a wide variety of experiences that go beyond traditional audio control. In our paper, “Beyond Audio: Towards a Design Space of Headphones as a Site for Interaction and Sensing,” we share a vision that explores this potential.

By using sensors such as microphones, proximity sensors, motion sensors, inertial measurement units (IMUs), and LiDAR, headphone designers can explore new avenues of input and interaction. Because headphones are worn on the head, they enable a wide range of applications, such as tracking head movements, body postures, and hand gestures. Furthermore, as wearable devices, headphones have the potential to provide wearers with context-rich information and enable more intuitive and immersive interactions with their devices and environment, beyond traditional button-based controls.


Potential scenarios for sensor-enhanced headphones 

To explore this concept further, we propose augmenting headphones with additional sensors and input widgets. These include: 

  • IMUs to sense head orientation
  • Swappable sets of input controls  
  • A range-sensing LiDAR that enables the sensing of hand gestures

By incorporating these capabilities, we envision a wide range of applications in which headphone input acts as a bridge between the wearer and their environment, enabling more efficient and context-aware interactions across multiple devices and tasks. For example, headphones could assist with applications like video games or help manage interruptions during a video call.

Let’s explore some scenarios to illustrate the potential of our headphone design concept. Consider a person engaged in a video call with teammates when they are suddenly interrupted by a colleague who approaches in person. In this situation, our headphones would be equipped to detect contextual cues, such as when the wearer rotates their head away from a video call, signaling a shift in attention. In response, the headphones could automatically blur the video feed and mute the microphone to protect the wearer’s privacy, as shown in Figure 1. This feature could also communicate to other participants that the wearer is temporarily engaged in another conversation or activity. When the wearer returns their attention to the call, the system removes the blur and reactivates the microphone.

Figure 1: Two videos side by side showing the headphones in a context-aware privacy-control scenario. On the left, an over-the-shoulder view shows a wearer participating in a video call on a laptop. As he looks away from the call, the laptop screen blurs and the application is muted, depicted by a mute icon overlaid on the video. As the wearer looks back at the screen, it becomes unblurred and an unmute icon is overlaid on the image, indicating the mute has been turned off. On the right, we see the laptop screen previously described.
Figure 1. These videos illustrate a context-aware privacy control system implemented during a video conference. In this scenario, the wearer temporarily disengages from the video conference to engage in an in-person conversation. After a predefined period, the system detects that the wearer’s attention remains directed away from any known device, taking the environmental context into account. As a result, privacy measures are triggered, including video blurring, microphone muting, and notifying other participants on the call. Once the wearer re-engages with the screen, their video and microphone settings return to normal, ensuring a seamless experience.
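To make this concrete, here is a minimal sketch of the kind of dwell-based logic such a system might use, assuming a head-yaw estimate from the headphones’ IMU is available. The angle threshold, dwell time, and the blur/mute hooks are illustrative placeholders, not the paper’s implementation.

```python
import time
from dataclasses import dataclass
from typing import Optional

# Illustrative thresholds: the scenario mentions a "predefined period" but
# does not specify values, so these numbers are placeholders.
AWAY_ANGLE_DEG = 30.0   # yaw offset from the screen considered "looking away"
DWELL_SECONDS = 1.0     # how long the wearer must look away before acting


@dataclass
class PrivacyController:
    """Toggles privacy measures based on head yaw relative to the screen."""
    away_since: Optional[float] = None
    privacy_on: bool = False

    def update(self, yaw_offset_deg: float, now: float) -> None:
        looking_away = abs(yaw_offset_deg) > AWAY_ANGLE_DEG
        if looking_away:
            if self.away_since is None:
                self.away_since = now
            elif not self.privacy_on and now - self.away_since >= DWELL_SECONDS:
                self._enable_privacy()
        else:
            self.away_since = None
            if self.privacy_on:
                self._disable_privacy()

    def _enable_privacy(self) -> None:
        self.privacy_on = True
        print("blur video, mute microphone, notify participants")

    def _disable_privacy(self) -> None:
        self.privacy_on = False
        print("unblur video, unmute microphone")


if __name__ == "__main__":
    controller = PrivacyController()
    # Simulated IMU yaw readings: facing the screen, turning away, then back.
    for yaw in [2, 5, 40, 45, 50, 48, 3, 1]:
        controller.update(yaw, time.time())
        time.sleep(0.5)
```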

In another privacy-focused scenario, imagine a person simultaneously conversing with multiple teammates in separate video call channels. Our headphone design allows the wearer to control to whom their speech is directed by simply looking at their intended audience, as shown in Figure 2. This directed speech interaction can extend beyond video calls and be applied to other contexts, such as sending targeted voice commands to teammates in a multiplayer video game.

DIS 2023 - Figure 2: Two videos side by side showing the wearer controlling where his input is sent among multiple devices. On the left, a video shows an over-the-shoulder view of a wearer interacting with a monitor and laptop while wearing headphones. A separate video call is running on each screen. As the wearer turns from one screen to another, a large microphone icon appears on the screen the wearer is looking at, and a muted microphone icon is shown on the other screen.

The video on the right shows an over-the-shoulder view of a wearer interacting with a laptop while wearing headphones. The laptop screen shows a video game with four circular icons, one in each corner, depicting the other players. The wearer looks at the bottom left of the screen, which enlarges the icon of the teammate in that corner, and the wearer starts to speak. The wearer then looks at the top right of the screen, and the teammate in that corner is highlighted while the wearer speaks.
Figure 2. Headphones track the wearer’s head pose, seamlessly facilitating the distribution of video and/or audio across multiple private chats. They effectively communicate the wearer’s availability to other participants, whether in a video conferencing scenario (left) or a gaming scenario (right).
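A simple way to prototype this routing is to map head-yaw ranges to call channels and mute every channel the wearer is not facing. The sketch below assumes hypothetical channel bearings and placeholder mute/unmute hooks; it is not the prototype described in the paper.

```python
from dataclasses import dataclass


@dataclass
class Channel:
    """A private call channel associated with a screen at a known bearing."""
    name: str
    yaw_center_deg: float          # approximate head yaw when facing this screen
    yaw_halfwidth_deg: float = 20.0

    def contains(self, yaw_deg: float) -> bool:
        return abs(yaw_deg - self.yaw_center_deg) <= self.yaw_halfwidth_deg


def route_microphone(yaw_deg: float, channels: list[Channel]) -> None:
    """Unmute the channel the wearer is facing; mute all the others."""
    for channel in channels:
        state = "unmute" if channel.contains(yaw_deg) else "mute"
        print(f"{state} microphone on '{channel.name}'")


if __name__ == "__main__":
    channels = [
        Channel("team call on monitor", yaw_center_deg=-30.0),
        Channel("1:1 call on laptop", yaw_center_deg=25.0),
    ]
    route_microphone(-28.0, channels)  # wearer faces the monitor
    route_microphone(22.0, channels)   # wearer turns to the laptop
```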

In our paper, we also demonstrate how socially recognizable gestures can introduce new forms of audio-visual control instead of relying solely on on-screen controls. For example, wearers could interact with media through gestural actions, such as cupping their ear towards the audio source to increase the volume while simultaneously reducing ambient noise, as shown in Figure 3. These gestures, ingrained in social and cultural contexts, can serve as both control mechanisms and nonverbal communication signals.

DIS 2023 - Fig 3 - image showing gestural controls for volume
Figure 3. Top: Raising the earcup, a commonly used gesture to address in-person interruptions, mutes both the sound and the microphone to ensure privacy. Bottom: Cupping the earcup, a gesture indicating difficulty hearing, increases the system volume.
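One way to wire such gestures to audio behavior is a small lookup from recognized gesture events to actions. The sketch below assumes a gesture classifier already exists (for example, driven by the LiDAR and touch sensing described above); the gesture labels and the audio actions are illustrative placeholders.

```python
from enum import Enum, auto


class Gesture(Enum):
    CUP_EARCUP = auto()     # hand cupped around the earcup: "I can't hear"
    RAISE_EARCUP = auto()   # earcup lifted off the ear: in-person interruption


def handle_gesture(gesture: Gesture, volume: int) -> int:
    """Return the new volume, printing placeholder side effects."""
    if gesture is Gesture.CUP_EARCUP:
        volume = min(100, volume + 10)
        print(f"increase volume to {volume}, reduce ambient noise")
    elif gesture is Gesture.RAISE_EARCUP:
        print("mute playback and microphone for privacy")
    return volume


if __name__ == "__main__":
    vol = 50
    vol = handle_gesture(Gesture.CUP_EARCUP, vol)    # volume goes to 60
    vol = handle_gesture(Gesture.RAISE_EARCUP, vol)  # playback and mic muted
```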

Additionally, we can estimate the wearer’s head gaze using an IMU. When combined with the physical locations of computing devices in the wearer’s vicinity, this estimate opens up possibilities for seamless interactions across multiple devices. For instance, during a video call, the wearer can share the screen of the device they are actively focusing on. In this scenario, the wearer shifts their attention from an external monitor to a tablet. Even though the tablet is not directly connected to the main laptop, our system smoothly transitions the screen sharing for the wearer’s audience in the video call, as shown in Figure 4.

DIS 2023 - Figure 4: Two videos side-by-side showing a headphone wearer among a multitude of devices controlling which screen is shared in a video call. The video on the left shows an over-the-shoulder view of a person interacting with three screens—a monitor, a laptop, and a tablet—while wearing headphones. A video call is in progress on the laptop, and the wearer is giving a presentation, which appears as a slide on the attached monitor. As the wearer turns from the laptop screen to the monitor, the presentation slide appears on the shared laptop screen. The video on the right shows an over-the-shoulder view of the person interacting with three screens—a monitor, a laptop, and a tablet—while wearing headphones. We see the wearer looking at the monitor with a presentation slide, which is mirrored on the laptop screen. He then turns from the monitor to the tablet, which has a drawing app open. As he does this, the drawing app appears on the shared laptop screen. The wearer uses a pen to draw on the tablet, and this is mirrored on the laptop. Finally, the wearer looks up from the tablet to the laptop, and the laptop screen switches to the video call view with the participants’ videos.
Figure 4. A wearer delivers a presentation using a video conferencing tool. As the wearer looks at different devices, the streamed video dynamically updates to display the relevant source to participants.
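Conceptually, this comes down to matching the IMU’s head-yaw estimate against the known bearings of nearby devices and switching the shared source to the closest match. The sketch below illustrates only that matching step, with made-up device bearings and a simple angular tolerance.

```python
from typing import Optional


def closest_device(head_yaw_deg: float, devices: dict[str, float],
                   max_error_deg: float = 25.0) -> Optional[str]:
    """Return the device whose bearing best matches the head yaw, if any."""
    name, bearing = min(devices.items(),
                        key=lambda item: abs(item[1] - head_yaw_deg))
    return name if abs(bearing - head_yaw_deg) <= max_error_deg else None


if __name__ == "__main__":
    # Hypothetical device bearings (degrees of head yaw) around the wearer.
    devices = {"external monitor": -35.0, "laptop": 0.0, "tablet": 30.0}
    for yaw in [-32.0, 2.0, 28.0]:
        target = closest_device(yaw, devices)
        print(f"head yaw {yaw:+.0f} deg -> share screen of: {target}")
```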

Finally, in our paper we also show the use of embodied interactions, where the wearer’s body movements animate a digital representation of themselves, such as an avatar in a video call, as shown in Figure 5. This feature can also be used as a gameplay mechanic. Take a racing game, for instance, where the wearer’s body movements could control the vehicle’s steering, shown on the left in Figure 6. Extending this capability, these movements could let a wearer peek around obstacles in any first-person game, enhancing immersion and the gameplay experience, shown on the right in Figure 6.

DIS 2023 - Figure 5: Two videos showing a headphone wearer controlling an avatar in a video call through head movements. The video on the left shows an over-the-shoulder view of a headphone wearer interacting with another participant on the call. The video on the right shows a wearer using a touch control to depict an emotion on his avatar.
Figure 5. Left: Headphones use an IMU to monitor and capture natural body movements, which are then translated into corresponding avatar movements. Right: Touch controls integrated into headphones enable wearers to evoke a range of emotions on the avatar, enhancing the user experience.
DIS 2023 - Figure 6: Two videos showing a wearer playing a video game while leaning left and right. These movements control his character’s movements, enabling him to duck and peek around walls.
Figure 6. Leaning while wearing the headphones (with an integrated IMU) has a direct impact on gameplay. On the left, it results in swerving the car to the side, while on the right, it enables the player to duck behind a wall.
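The mapping from lean to game input can be as simple as converting the IMU’s roll angle into a normalized axis value, with a dead zone so that small postural shifts are ignored. The thresholds and the game-side hook in this sketch are illustrative assumptions.

```python
# Illustrative tuning values for mapping body lean to a game input axis.
DEAD_ZONE_DEG = 5.0
MAX_LEAN_DEG = 25.0


def lean_to_axis(roll_deg: float) -> float:
    """Convert an IMU roll angle into a steering/peek axis value in [-1, 1]."""
    if abs(roll_deg) < DEAD_ZONE_DEG:
        return 0.0
    sign = 1.0 if roll_deg > 0 else -1.0
    magnitude = min(abs(roll_deg), MAX_LEAN_DEG) - DEAD_ZONE_DEG
    return sign * magnitude / (MAX_LEAN_DEG - DEAD_ZONE_DEG)


if __name__ == "__main__":
    for roll in [-30.0, -10.0, 0.0, 12.0, 40.0]:
        print(f"lean {roll:+.0f} deg -> axis {lean_to_axis(roll):+.2f}")
```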

Design space for headphone interactions 

We define a design space for interactive headphones through an exploration of two distinct concepts, which we discuss in depth in our paper.

First, we look at the type of input gesture for the interaction, which we further classify into three categories. The gestural input from the wearer might fall under one or more of these categories, which we outline in more detail below, illustrate in Figure 7, and represent in a short sketch after the figure.

  • Touch-based gestures that involve tangible inputs on the headphones, such as buttons or knobs, requiring physical contact by the wearer
  • Mid-air gestures, which the wearer makes with their hands in close proximity to the headphones, detected through LiDAR technology
  • Head orientation, indicating the direction of the wearer’s attention
DIS 2023 - Figure 7: List of three stylized images showing the three main kinds of gestures we look at: touch, head orientation, and mid-air gestures.
Figure 7. Sensor-enhanced headphones can use touch-based gestures (left), head orientation (middle), or mid-air gestures (right) as types of input.
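One way these categories might be represented in software is as a small enumeration that incoming events are tagged with; the sketch below is purely illustrative and not part of the paper’s system.

```python
from enum import Enum, auto


class InputType(Enum):
    TOUCH = auto()             # tangible controls on the headphones
    MID_AIR = auto()           # hand gestures near the headphones (LiDAR)
    HEAD_ORIENTATION = auto()  # where the wearer's attention is directed


# An incoming event might carry one or more of these types, e.g. a tap on
# the earcup performed while looking at a particular screen.
event = {InputType.TOUCH, InputType.HEAD_ORIENTATION}
print(sorted(t.name for t in event))
```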

The second way we define the design space is through the context within which the wearer executes the action. Here, design considerations for sensor-enhanced headphones go beyond user intentionality and observed motion. Context awareness enables these headphones to understand the wearer’s activities, the applications they are engaged with, and the devices in their vicinity, as illustrated in Figure 8. This understanding enables the headphones to provide personalized experiences and integrate seamlessly with the wearer’s environment. The four categories that define this context awareness are as follows, and the sketch after Figure 8 shows how the same gesture might resolve differently across them:

  • Context-free actions, which produce similar results regardless of the active application, the wearer’s activity, or the social or physical environment.  
  • Context that is defined by the application with which the wearer is interacting. For example, are they listening to music, on a video call, or watching a movie?  
  • Context that is defined by the wearer’s body. For example, is the wearer’s gesture close to a body part that has an associated meaning? Eyes might relate to visual functions, ears to audio input, and the mouth to audio output. 
  • Context that is defined by the wearer’s environment. For example, are there other devices or people around the wearer with whom they might want to interact?
DIS 2023 - Figure 8: Diagram showing the different levels of context we look at: context free, application, user's body, and the environment.
Figure 8. The system uses diverse contextual information to enable personalized responses to user input.
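To illustrate how context can disambiguate input, the sketch below resolves the same hypothetical “cup ear” gesture differently depending on which context applies. The contexts mirror the four categories above, but the action table is an illustrative assumption, not the paper’s mapping.

```python
from enum import Enum, auto


class Context(Enum):
    CONTEXT_FREE = auto()  # same result regardless of app, activity, or setting
    APPLICATION = auto()   # e.g. music player vs. video call
    BODY = auto()          # gesture near eyes, ears, or mouth
    ENVIRONMENT = auto()   # nearby devices or people

# Example: the "cup ear" gesture resolved against different contexts.
ACTIONS = {
    (Context.CONTEXT_FREE, "cup_ear"): "raise system volume",
    (Context.APPLICATION, "cup_ear"): "raise call volume and boost speech",
    (Context.ENVIRONMENT, "cup_ear"): "amplify ambient sound passthrough",
}


def resolve(context: Context, gesture: str) -> str:
    return ACTIONS.get((context, gesture), "no mapped action")


if __name__ == "__main__":
    for ctx in Context:
        print(f"{ctx.name:12s} -> {resolve(ctx, 'cup_ear')}")
```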

Looking ahead: Expanding the possibilities of HCI with everyday wearables  

Sensor-enhanced headphones offer a promising avenue for designers to create immersive and context-aware user experiences. By incorporating sensors, these headphones can capture subtle user behaviors, facilitating seamless interactions and enhancing the wearer’s overall experience.  

From safeguarding privacy to providing intuitive control mechanisms, the potential applications for sensor-enhanced headphones are vast and exciting. This exploration only scratches the surface of what context-aware wearable technology can empower its wearers to achieve. Consider the multitude of wearables we use every day that could benefit from similar sensing and interaction capabilities. For example, imagine a watch that can track your hand movements and detect gestures. By enabling communication between sensor-enhanced wearables, we can establish a cohesive ecosystem for human-computer interaction that spans applications, devices, and social contexts.
