
Pilot studies

Virtual reality


Funded by:

   Flexi Grant, Action on Hearing Loss (£5,000)

   National Science Foundation: Engaging Learning Network ($2,000)

Role: PI


After taking part in the 2017 Games for Change Hackathon, I began developing a virtual reality speech-in-noise assessment for children. The project is currently in the software development stage (with Game Theory Co), and testing will commence shortly. We will compare how children perform in a standard testing environment versus in virtual reality, using the Oculus Rift and Go headsets.


  1. Stewart, H. J. (August 2018). Scientifically Speaking. AoHL magazine.


Statistical learning and language learning


Collaborators: Jennifer Vannest (University of Cincinnati) and the Elena Plante lab (University of Arizona)


We are using a statistical learning paradigm to investigate novel language learning in children with listening difficulties (SICLiD) and hearing impairment (OtiS). We are also piloting a version of the statistical learning paradigm that adds background noise.


Auditory figure-ground segregation


Collaborators: Phillip Gander (University of Iowa) and Emma Holmes (UCL)


We are using a figure-ground task to assess how well children with listening difficulties (SICLiD) are able to extract a coherent tone pattern from a stochastic background of tones.

