The Georgia Tech Sonification Lab is an interdisciplinary research group based in the School of Psychology and the School of Interactive Computing at Georgia Tech. Under the direction of Prof. Bruce Walker, the Sonification Lab focuses on the development and evaluation of auditory and multimodal interfaces, and on the cognitive, psychophysical, and practical aspects of auditory displays, paying particular attention to sonification. Special consideration is paid to Human Factors in the display of information in "complex task environments," such as the human-computer interfaces in cockpits, nuclear power plants, in-vehicle infotainment displays, and the space program.
Since we specialize in multimodal and auditory interfaces, we often work with people who cannot look at, or cannot see, traditional visual displays. This means we work on a lot of assistive technologies, especially for people with vision impairments. We study ways to enhance wayfinding and mobility, math and science education, entertainment, art, music, and participation in informal learning environments like zoos and aquariums.
The Lab includes students and researchers from all backgrounds, including psychology, computing, HCI, music, engineering, and architecture. Our research projects are collaborative efforts, often including empirical (lab) studies, software and hardware development, field studies, usability investigations, and focus group studies.
Summary of Key Research Areas
Our Research web pages have tons of information, links, videos, and documents about our many lines of research, and scores of projects. In summary, though, we work in the following main areas.
1. Sonification and auditory displays.
Determining which type of display is appropriate for a system, and then how best to implement it, is a growing challenge, especially as devices continue to shrink in size. The use of sound to communicate information has become more common, but there is relatively little theory to guide auditory display designers. Therefore, we study the perception and understanding of auditory displays, and we are helping to build up both the theoretical and practical foundations. In particular, our lab studies sonification, the use of sound to display and analyze scientific data, often in the form of auditory graphs. Our findings about how listeners interpret these auditory graphs are leading to more effective data exploration tools for both sighted and visually impaired researchers and students.
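To give a concrete flavor of the idea, here is a minimal, hypothetical Python sketch (not software from the Lab) that maps a small data series onto pitch and renders the result as a WAV file, producing a very simple auditory graph. The frequency range, note duration, and example data are illustrative assumptions only.

# A minimal, illustrative sonification sketch: map each value in a data
# series to a pitch and render the result as a WAV file, producing a
# simple "auditory graph" of the data.

import math
import struct
import wave

SAMPLE_RATE = 44100      # samples per second
NOTE_SECONDS = 0.25      # duration of each data point's tone

def value_to_frequency(value, vmin, vmax, fmin=220.0, fmax=880.0):
    """Linearly map a data value onto a frequency range (Hz)."""
    if vmax == vmin:
        return (fmin + fmax) / 2.0
    proportion = (value - vmin) / (vmax - vmin)
    return fmin + proportion * (fmax - fmin)

def sonify(data, filename="auditory_graph.wav"):
    """Render one sine tone per data point into a mono 16-bit WAV file."""
    vmin, vmax = min(data), max(data)
    frames = bytearray()
    for value in data:
        freq = value_to_frequency(value, vmin, vmax)
        for n in range(int(SAMPLE_RATE * NOTE_SECONDS)):
            sample = math.sin(2.0 * math.pi * freq * n / SAMPLE_RATE)
            frames += struct.pack("<h", int(sample * 32767 * 0.8))
        # (a real auditory display would smooth tone onsets to avoid clicks)
    with wave.open(filename, "wb") as wav:
        wav.setnchannels(1)        # mono
        wav.setsampwidth(2)        # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(bytes(frames))

if __name__ == "__main__":
    # A rising-then-falling series: the pitch contour mirrors the data shape.
    sonify([1, 3, 5, 8, 13, 9, 6, 4, 2])

Listening to the resulting file, rising data values are heard as rising pitch, which is the basic value-to-pitch mapping behind many auditory graphs.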
2. Human-Computer Interaction (HCI) in Non-Traditional Interfaces.
In situations where there is not necessarily a monitor, keyboard, mouse, etc., what are the best ways to create a successful interaction between the user and the system? Designers need to "think outside the box," employing novel interaction styles, non-traditional interfaces, and all sensory modalities. Certainly auditory displays fit into this category. However, tactile, voice, and vibration interfaces also apply, as do many others we have not even imagined yet!
3. Psychological and social factors in the adoption and use of technology.
When first introduced, any new technology will raise both fears and excitement. What are the traits that help a new technology become accepted and adopted by users so thoroughly that it becomes part of our daily lives (e.g., telephones, microwaves, electronic mail)? We are beginning to examine the many factors that contribute to the evolution of a device from "new technology" to "household appliance".
Our research has been, and continues to be, funded by numerous grants from the NSF, NIDRR, Wireless RERC, US Army, Nokia, Temco, and others. We are very appreciative of this support.
NEWS: We have a variety of projects that need grad students to lead research, programmers of all types for implementation, HCI students to conduct studies, and undergraduate research assistants. Some projects have funding, so paid work is a possibility; other opportunities are intended primarily for course credit (special topics, etc.) in CS, Psych, or HCI. See the Opportunities page for more information.