Exploratorium_8.1.1
Altering/Enhancing Sensory Input
- Translating Between Senses

Annotated Bibliography


Juan Diego Gomez, Sinan Mohammed, Guido Bologna, and Thierry Pun. 2011. Toward 3D scene understanding via audio-description: Kinect-iPad fusion for the visually impaired. In Proceedings of the 13th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '11). Association for Computing Machinery, New York, NY, USA, 293–294. https://doi.org/10.1145/2049536.2049613

  • A computer-vision-based framework for real-time object localization and audio description that conveys color and depth information using the Microsoft Kinect 3D motion sensor (see the sketch after this entry).
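
A minimal sketch of the general depth-to-audio idea, assuming a depth frame is already available as a NumPy array. The Kinect capture, object segmentation, and audio synthesis used in the paper are replaced here by hypothetical stand-ins (pitch for distance, stereo pan for horizontal position); this is not the authors' pipeline.

```python
# Illustrative sketch: map a depth frame to simple audio cues.
# Pitch encodes distance (closer = higher), pan encodes horizontal position.
import numpy as np


def nearest_object_cue(depth_frame: np.ndarray,
                       min_pitch_hz: float = 220.0,
                       max_pitch_hz: float = 880.0) -> dict:
    """Locate the nearest region in a depth frame and derive audio cues.

    depth_frame: 2D array of distances in millimeters (0 = no reading).
    Returns stereo pan in [-1, 1] and pitch in Hz.
    """
    valid = depth_frame > 0
    if not valid.any():
        return {"pan": 0.0, "pitch_hz": min_pitch_hz}

    # Index of the closest valid pixel.
    masked = np.where(valid, depth_frame, np.inf)
    row, col = np.unravel_index(np.argmin(masked), masked.shape)

    # Horizontal position -> stereo pan (-1 = far left, +1 = far right).
    pan = 2.0 * col / (depth_frame.shape[1] - 1) - 1.0

    # Distance -> pitch: near (~0.5 m) is high, far (~4 m) is low.
    dist = float(masked[row, col])
    t = np.clip((dist - 500.0) / (4000.0 - 500.0), 0.0, 1.0)
    pitch = max_pitch_hz - t * (max_pitch_hz - min_pitch_hz)

    return {"pan": float(pan), "pitch_hz": pitch}


# Example with a synthetic frame: a near object right of center.
frame = np.full((480, 640), 3000, dtype=np.float64)
frame[200:240, 500:540] = 800  # 0.8 m away
print(nearest_object_cue(frame))
```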

MIT News. 2022. Making data visualization more accessible for blind and low-vision individuals. https://news.mit.edu/2022/data-visualization-accessible-blind-0602

  • An interdisciplinary team of researchers from MIT and elsewhere is working to create screen-reader-friendly data visualizations that give blind and low-vision users an experience as rich as viewing the charts visually. They prototyped several visualization structures that provide text descriptions at varying levels of detail, enabling a screen-reader user to drill down from a high-level overview to more detailed information with just a few keystrokes (a sketch of this drill-down idea follows below). They also created a framework to help designers think systematically about how to develop accessible visualizations, and they plan to use these prototypes and the framework to build a user-friendly tool that converts visualizations into accessible formats.
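
A minimal sketch of a hierarchical, keyboard-navigable description structure of the kind the article describes. The node layout, key bindings, and example chart here are assumptions for illustration, not the MIT team's actual framework.

```python
# Illustrative sketch: multi-level text descriptions a screen-reader user
# could traverse with a few keystrokes (down = more detail, up = less,
# right = next item at the same level).
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class DescriptionNode:
    """One level of detail: a chart, an axis, a group of marks, a data point."""
    text: str
    children: List["DescriptionNode"] = field(default_factory=list)
    parent: Optional["DescriptionNode"] = None

    def add(self, child: "DescriptionNode") -> "DescriptionNode":
        child.parent = self
        self.children.append(child)
        return child


class TreeCursor:
    """Keyboard-style navigation over the description tree."""

    def __init__(self, root: DescriptionNode):
        self.node = root

    def read(self) -> str:
        return self.node.text

    def down(self) -> str:          # drill into the first child, if any
        if self.node.children:
            self.node = self.node.children[0]
        return self.read()

    def up(self) -> str:            # back out to the parent level
        if self.node.parent:
            self.node = self.node.parent
        return self.read()

    def right(self) -> str:         # next sibling at the same level
        if self.node.parent:
            sibs = self.node.parent.children
            i = sibs.index(self.node)
            self.node = sibs[min(i + 1, len(sibs) - 1)]
        return self.read()


# Hypothetical example: overview -> axis summary -> individual data point.
root = DescriptionNode("Line chart of monthly sales, Jan-Dec 2021, trending upward.")
root.add(DescriptionNode("X axis: month, January through December."))
points = root.add(DescriptionNode("12 data points, ranging from 40 to 95 units."))
points.add(DescriptionNode("January: 40 units."))
points.add(DescriptionNode("February: 46 units."))

cursor = TreeCursor(root)
print(cursor.read())    # high-level overview
print(cursor.down())    # axis summary
print(cursor.right())   # data-point group
print(cursor.down())    # first data point
```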