Why is this Important?
Stereoscopic vision has long been considered the best form of visualization for matching physical and simulated realities. While stereoscopic vision is the goal, producing perfect 3D visualization and registration of AR assets is difficult with current technologies. The cost of addressing the shortcomings of current AR displays will be prohibitively high and will present a financial barrier to AR adoption in enterprises.
Leveraging advancements in Digital Signal Processing (DSP) and audiology, a new class of devices is emerging. Spatially aware audio transducers can help determine the exact position and posture/pose of the wearer, as well as generate a simulated sound field that matches the physical environment. Such systems could be combined with existing vision-centric displays to deliver high-fidelity enterprise AR experiences.
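To make the DSP side of such a sound field concrete, the sketch below renders a mono signal to stereo using the classic Woodworth spherical-head model for interaural time difference (ITD) plus a crude broadband interaural level difference (ILD). This is a minimal illustration, not the method of any particular device; the function names, the 6 dB maximum ILD, and the sine-law gain curve are illustrative assumptions, and a production system would use measured head-related transfer functions (HRTFs) instead.

```python
import numpy as np

HEAD_RADIUS = 0.0875     # m, average adult head radius (assumption)
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 °C

def woodworth_itd(azimuth_rad):
    """Interaural time difference (seconds) from the Woodworth
    spherical-head model: ITD = (a/c) * (theta + sin(theta))."""
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + np.sin(azimuth_rad))

def spatialize(mono, azimuth_rad, sample_rate=48000, max_ild_db=6.0):
    """Render a mono signal to a (N, 2) stereo array by delaying and
    attenuating the far-ear channel. Illustrative only; real spatial
    audio systems convolve with measured HRTFs."""
    itd = woodworth_itd(azimuth_rad)
    delay = int(round(abs(itd) * sample_rate))  # far-ear lag in samples
    # Simple sine-law broadband level difference (illustrative choice).
    gain = 10 ** (-max_ild_db * abs(np.sin(azimuth_rad)) / 20.0)
    near = np.concatenate([mono, np.zeros(delay)])
    far = np.concatenate([np.zeros(delay), mono]) * gain
    # Positive azimuth = source to the right, so the right ear is nearer.
    left, right = (far, near) if azimuth_rad > 0 else (near, far)
    return np.stack([left, right], axis=1)
```

At 90° azimuth this model yields an ITD of roughly 0.65 ms, consistent with the commonly cited maximum for human listeners, which is why the simple spherical-head approximation is a standard starting point before moving to full HRTF rendering.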
The scope of this topic includes measuring the resource requirements of spatial audio technology and the impact of combining visual cues with spatial audio on user performance. Comparative studies of human cognitive performance aided by varying blends of spatial technology, ranging from “audio-only” to “video-only” and combinations of the two, are also in scope.
Stakeholders
AR experience designers, developers of integrated sensor and world capture components, human factors researchers
Possible Methodologies
This research topic will require visual and audio AR experiences produced in a highly controlled laboratory environment in which a series of experiments can be conducted and reproduced. Studies will compare AR experiences with spatial audio against vision-only AR experiences on the basis of accuracy, speed, battery life, bandwidth requirements, processor performance, wearer comfort, and price. In addition to assessing user perception through surveys and interviews, the methods could be expanded to include time-motion studies using standardized, public, and well-documented processes typical of industry verticals, use cases, and horizontal use case categories.
Research Program
This topic lies at the intersection of 3D visualization and 3D audio. The methodologies and tools developed for this research could be used to study perception and presence, and could lead to new guidelines for AR developers and for manufacturers of HMDs for enterprise AR.
Miscellaneous Notes
In 2016, the Sound of Vision consortium, which focuses on the construction of new prototype electronic travel aids for the blind, published a report on audio-assisted vision. In 2011, the Journal of the Audio Engineering Society published a peer-reviewed article presenting a novel technique for reproducing coherent audio-visual images for multiple users wearing only 3D glasses, without the use of head tracking.
Keywords
Spatial audio, effectiveness, spatial vision, 3D audio, perception, audio signal processing, acoustic waves, active noise control
Research Agenda Categories
Technology, End User and User Experience, Displays
Expected Impact Timeframe
Medium
Related Publications
Using the words in this topic description and Natural Language Processing analysis of publications in the AREA FindAR database, the references with the highest number of matches with this topic can be identified.
More publications can be explored using the AREA FindAR research tool.
Author
Peter Orban, Christine Perey
Last Published (yyyy-mm-dd)
2021-08-31