Designed and developed an Augmented Reality prototype to help people self-learn Sign Language.
The COVID-19 lockdown emphasized the challenges of learning sign language virtually due to its intricate three-dimensional nature, including hand and finger movements, body positioning, and specific motions. Existing digital resources, limited to 2D screens, struggle to convey these nuances effectively.
A 3D augmented human extending into the learner's space, which they can zoom in and out, rotate, adjust in signing speed, and switch between front and top views, can enhance comprehension of hand orientation and movement. A hands-free solution also lets learners practice sign language while moving freely within the space.
In this project, I created an AR prototype for the Magic Leap headset using Mixed Reality (MR) technology. The app features a 3D augmented human model that teaches sign language through hand gestures and movements.
The project involved in-depth research into both users and technologies, and then stitching the feasible technical and creative aspects together.
The first step in creating the app was designing the 3D augmented human character. The character was crafted in Blender 2.9 with the MB-Lab 1.7.8 plugin, with the hair and clothing designed and modified within Blender.
To animate American Sign Language (ASL) signs, the 3D character is rigged and animated in Blender, and the resulting motion animations are exported in FBX format.
To display the 3D character on the Magic Leap, the Unity engine is set up with the essential SDKs, including Magic Leap Lumin SDK v0.24.1 and the Magic Leap Unity Package v.24.2, ensuring compatibility with Magic Leap's Lumin OS.
Character functionality, such as zoom, signing speed, rotation, and view switching, is implemented via a C# script in Unity. An Animator Controller connects the sign FBX clips to the 3D character, seamlessly assigning each sign animation to it.
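The control logic described above can be sketched roughly as a single Unity MonoBehaviour. This is a minimal, hypothetical illustration rather than the project's actual script: the class name, input wiring, clamp ranges, and the top-view tilt are all my assumptions; only the general approach (scaling for zoom, `Animator.speed` for signing speed, rotation about the vertical axis) reflects the description above.

```csharp
using UnityEngine;

// Hypothetical sketch of the character-control script described above.
// Attach to the 3D augmented human; assumes an Animator component that
// plays the sign FBX clips via the Animator Controller.
public class SignCharacterController : MonoBehaviour
{
    [SerializeField] private Animator animator; // drives the sign FBX clips

    private float signSpeed = 1f;   // 1x = normal signing speed
    private bool topView = false;

    // Zoom: scale the model uniformly, clamped to a sensible range.
    public void Zoom(float delta)
    {
        float s = Mathf.Clamp(transform.localScale.x + delta, 0.5f, 3f);
        transform.localScale = Vector3.one * s;
    }

    // Signing speed: forwarded to the Animator so clips play slower or faster.
    public void AdjustSpeed(float delta)
    {
        signSpeed = Mathf.Clamp(signSpeed + delta, 0.25f, 2f);
        animator.speed = signSpeed;
    }

    // Rotation: spin the character around its vertical axis.
    public void Rotate(float degrees)
    {
        transform.Rotate(Vector3.up, degrees, Space.World);
    }

    // View toggle: tilt the character so the learner sees the hands
    // from the front or from above.
    public void ToggleView()
    {
        topView = !topView;
        transform.localRotation = topView
            ? Quaternion.Euler(90f, 0f, 0f)
            : Quaternion.identity;
    }
}
```

In the app itself, methods like these would be invoked from the control-panel UI or from Magic Leap controller input, keeping the interaction hands-free where possible.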
The user interface, including a control panel and onboarding screens, was designed in Figma to facilitate interaction and guide users through learning sign language independently, without the need for external assistance.
My journey with this Mixed Reality project has been both challenging and exciting, revealing immense potential to address longstanding issues. However, technical constraints, such as the headset's bulky form factor, hinder its portability and reliability. Despite these limitations, the project fuels enthusiasm for future advancements that could overcome these challenges and unlock even greater potential.