Abstract
A Mixed Reality application that employs an augmented human-like tutor to guide users through learning sign language.
Focus: Mixed Reality Interface Design | AR Application Development | Accessibility
Tools: Unity Engine, Figma, Blender, and Magic Leap Device
Duration: March to May 2022

Final Result
Problem Statement

Sign Language, a 3D spatial language, might be difficult to self-learn from 2D resources.
The COVID-19 lockdown forced everything to go virtual, including sign language classes. Learning sign language is quite different from learning other languages: it involves variations in hand and finger gestures, the position of signs relative to the body, and particular motions with respect to the body in 3-dimensional space. Currently available digital resources, such as images, videos, or even virtual classes, are restricted to 2-dimensional screens, making it difficult to master sign language with them.
Project Overview

How can people effectively learn sign language using advanced technology?
During the global pandemic, learning sign language depended on 2-dimensional mediums, whether online virtual classes or resources like images, videos, websites, and mobile apps. We should consider ways to help people learn sign language effectively on their own, without a human instructor and without leaving their house. Targeting the 2-dimensional barrier to this 3-dimensional language may help; hence, Mixed Reality technology and its ability to extend into 3-dimensional space might solve the problem.
Solution

Mixed Reality can serve as a potential solution to self-learn sign language!
We can look to Mixed Reality as a potential solution. In this project, we develop an Augmented Reality (AR) application prototype on Magic Leap, a Mixed Reality (MR) device, that lets users visualize an augmented human character in their natural environment and teaches them signs by performing the hand gestures and movements of a particular sign.
Users can zoom in/out, rotate the character, increase/decrease the speed of a sign, and change the view from front-view to top-view. Changing views helps users see the hands from the front and the back, so they can better understand hand and movement orientation. Moreover, as the augmented human character is in the user's natural 3D space, one can walk around it to see signs from different angles.
Demo
The application helps users self-learn a few American Sign Language (ASL) signs one might need while traveling by showing an augmented human instructor performing each sign.
Market Scope
According to Global Market Insights, AR could surge up to

Target Domain
Sign language is the primary mode of communication for individuals from the Deaf and Hard-of-Hearing community.

Key Points
A viable solution should provide:

3D character & 360 degree view

A hands-free interactive learning method

Remote and constant availability

No need for a human instructor

Freedom of movement in space
Ideation
Ideal Product
The desired prototype should show an augmented human character extending into the user's 3-dimensional space. It should let users zoom in/out, increase/decrease the speed of signs, rotate the character, and change the view from front-view to top-view, so they can better understand hand orientation. Users should be able to interact with the system while keeping their hands free to practice sign language, and retain the freedom to move through the space.


Selection of signs
The project focused on helping users learn a few crucial signs. We chose nine American Sign Language (ASL) signs that one might need while traveling. However, not all signs use the same hand gestures or similar movements; therefore, we grouped them into clusters: single-handed signs (Bathroom, Security, Travel), double-handed signs (Passport, Ticket, Seatbelt), and double-handed signs with complex motion relative to the body (Parachute, International, Emergency Exit).










Implementation
Created 3D human character
The first step in creating the desired application was to build the 3D augmented human character. The character was built in Blender 2.9 using the MB-LAB 1.7.8 plug-in. The hair and clothing were also designed and modified in Blender.


Animating the signs from American Sign Language
For the 3D character to perform the signs, the character was rigged and given motion animation in Blender. The sign animations were saved in the FBX file format.









Engine and device setup
To display the 3D character on the Magic Leap device, its SDKs are integrated with the Unity engine. Magic Leap runs Lumin OS, so it was essential to include the Magic Leap Lumin SDK v0.24.1 and the Magic Leap Unity Package v.24.2.
Developing functionality
A C# script implements the zoom in/out, sign-speed, rotation, and view-change functionality for the character. An Animator Controller is created in Unity and linked to the sign FBX files to assign the signs to the 3D character.
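As a rough sketch of how such a control script could be structured in Unity, the snippet below scales the character for zoom, rotates it for the 360-degree view, adjusts Animator playback speed for the sign speed, and tilts the character for the top view. The class, method, and field names are illustrative assumptions, not the project's actual code.

```csharp
using UnityEngine;

// Hypothetical sketch of the controls described above; names are illustrative.
public class SignTutorController : MonoBehaviour
{
    [SerializeField] private Animator animator;   // plays the sign animation clips (from FBX)
    [SerializeField] private float zoomStep = 0.1f;
    [SerializeField] private float rotateStep = 15f;

    // Zoom in (+1) or out (-1) by uniformly scaling the character, clamped to a sane range.
    public void Zoom(int direction)
    {
        float s = Mathf.Clamp(transform.localScale.x + direction * zoomStep, 0.5f, 2.0f);
        transform.localScale = new Vector3(s, s, s);
    }

    // Rotate the character around its vertical axis so signs can be seen from any side.
    public void Rotate(int direction)
    {
        transform.Rotate(Vector3.up, direction * rotateStep);
    }

    // Speed up or slow down the sign animation; Animator.speed scales playback.
    public void SetSignSpeed(float multiplier)
    {
        animator.speed = Mathf.Clamp(multiplier, 0.25f, 2.0f);
    }

    // Switch between front view and top view by tilting the character toward the user.
    public void SetTopView(bool top)
    {
        transform.rotation = Quaternion.Euler(top ? 90f : 0f, 0f, 0f);
    }
}
```

These methods would be wired to the control panel's buttons (or Magic Leap controller input) via Unity events.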


Designing User Interface
A control panel and onboarding screens are designed to expose these interactions and guide users, so they can self-learn sign language without any external help.






Final Result
Future Scope
Imagine if people could choose to learn any sign language, practice signs, and have Mixed Reality devices detect the accuracy of their gestures while providing real-time feedback. However, Magic Leap can only capture eight key poses, which restricts this scope.
This prototype can only guide the user in learning signs; it cannot provide feedback on the reproduced gestures. Another challenge is scale: American Sign Language and other sign languages are vast, so helping users learn them would require a massive database of gestures. For now, the scope of this prototype is nine signs from American Sign Language that one might need while traveling.
The prototype has the potential to become a polished, portable application that helps users learn various signs from multiple sign languages. We could also gamify learning by providing quizzes or measuring users' progress, similar to the Duolingo language-learning app. The prototype can also explore various extended reality interaction methods and user interfaces.