Sound detection app for Deaf and Hard-of-Hearing users
Focus: UI/UX Design | Accessibility | App-Design
Tools: Adobe XD and Illustrator
Duration: March to May 2020
Some individuals have limited or no access to sound
Audio input, along with visual input, is important for our survival. However, Deaf or Hard-of-Hearing (DHH) individuals may have limited or no access to auditory information, making it difficult to interpret the world around them.
Application designed to detect environmental sounds and visualize only vital information
Deaf or Hard-of-Hearing individuals rely more on their visual senses to compensate for hearing impairment. Hence, we designed an application that detects environmental sounds and helps them visualize vital information through a minimalist visualization.
Why this particular solution?
1. Currently, smartphones and smartwatches are among the most commonly used devices.
2. To detect sounds from the environment, the device needs to take audio input. As these devices have built-in microphones, the app can directly use these sensors.
3. These devices can also deliver information through audio and haptic cues.
Deaf or Hard-of-Hearing smartphone users
It was crucial to understand our target audience's problems in order to design a solution that truly fits their needs. Hence, it was important to conduct thorough research with relevant users to gather the required information.
I helped draft scripts for the interviews and facilitated them with participants, which informed our design direction.
I led the concept generation stage and formulated the design direction that was used in the final design.
I was in charge of creating sketches and low- and high-fidelity prototypes that reflected our concepts and allowed us to test our designs.
Interviewed 4 DHH users
Four volunteer participants were recruited through social media. Of these four, two identified as male, one as female, and one as non-binary. The interviews were completely open-ended.
We asked questions regarding:
Their experiences and issues with current sound-detecting applications/software
The types of sounds they want to be aware of
The information they would like to know about each sound
The features they would like in a sound detection application
Reduced Cognitive Load
The main purpose of the application is to detect sound, so detection is the first screen users see when they open the application. When the user taps the circle in the middle of the screen, the system captures and analyzes environmental sound, then shows the required information about it.
Need for Profile
Beyond a social and virtual presence, account creation was needed to store vital information that would be sent to authorities in case of an emergency. The user's notification choices, sound list, and other preferences can be stored as well.
Text 911 feature
This feature allows DHH users to quickly share their information in case of an emergency. The sound's name, its direction relative to the user's device, and its severity, along with the user's name and location, will be shared. Users can enable or disable this feature for each sound.
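The fields listed above could be assembled into an emergency text like the sketch below. This is purely illustrative: the function name, field names, and message wording are assumptions, not the app's actual API or message format.

```python
def build_emergency_text(sound_name, direction, severity, user_name, user_location):
    """Illustrative sketch of a 'Text 911' message body.

    Combines the data the case study says would be shared: the sound's
    name, its direction relative to the device, its severity, and the
    user's name and location. The exact format here is an assumption.
    """
    return (
        f"Automated alert from {user_name}: detected '{sound_name}' "
        f"({severity} severity) to the {direction} of the device. "
        f"User location: {user_location}."
    )
```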
List of Sounds
The list represents the entire sound database. Users can add new sound data, delete unwanted data, and customize existing entries according to their needs. The red dash on the right side of a sound indicates that the 'Text 911' feature is enabled for it.
Add new sounds
Users can add new sounds or customize existing ones. For example, they can add a friend's voice to the list to detect that friend's call.
Sounds represented by Icons
Icons help users recognize sounds quickly, easily, and intuitively. Research shows that DHH users have a heightened visual sense, hence icons are the primary representation of sound. Icons are also given more screen space to draw users' attention.
Textual representation of sounds
In case the user is unable to relate an icon to its sound, the sound is represented textually as well.
Important data visualization
Our interview participants were most concerned about the sound itself, its direction, and its severity level. The direction is shown in a circular, compass-like format around the sound icon.
As with a traffic light, humans generally associate green with positive, red with negative, and yellow with intermediate. Hence, we chose green to indicate low severity, yellow to indicate mid severity, and red to indicate high severity.
Severity of a sound
Sound is divided into three severity levels based on its loudness in decibels (dB): 0 to 75 dB is considered low severity, 76 dB to 120 dB is considered mid severity, and anything above 120 dB is considered high severity.
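The thresholds above, together with the traffic-light color mapping, can be sketched as a small classifier. This is a minimal illustration of the rules described in the case study, not the app's actual implementation.

```python
def classify_severity(db):
    """Map a decibel reading to a (severity, color) pair.

    Thresholds follow the case study: 0-75 dB is low, 76-120 dB is mid,
    and above 120 dB is high. Colors follow the traffic-light mapping
    (green = low, yellow = mid, red = high).
    """
    if db <= 75:
        return ("low", "green")
    elif db <= 120:
        return ("mid", "yellow")
    return ("high", "red")
```

For example, a 60 dB conversation would map to low/green, while a 130 dB siren would map to high/red.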
Sub-division of severity scale
Each severity level is further divided into five parts depending on the phone's distance from the sound source. These divisions work as a scale of 1 to 5, where 1 is closest to the sound source and 5 is farthest (1: very close, 2: close, 3: neutral, 4: far, 5: very far).
Emojis for the color-blind
Keeping color-blindness in mind, we chose to indicate the different levels of severity with distinct emoticons. Also, if the user is out walking, the application would detect many sounds, and reading detailed information about each one would be tedious. Emoji can be scanned at a glance, hence the choice.
Audio and Haptic cues for Blind users
Audio cues can make blind users aware of sounds in their environment, and haptic feedback can alert deaf-blind users as well.
Watch integration for ease of use
Integrating the application with a smartwatch makes it easier to use: the watch lets users receive alerts without constantly looking at their phone.