The central goal of this project is to understand how the perceptual system integrates available audiovisual information from objects looming towards the observer.
We propose to simulate dynamic events in an immersive virtual environment in order to address the following four core objectives:
1. To understand the role of the auditory and visual modalities in looming-trajectory events. More precisely, we aim to study how accurate and precise the perceptual system is when the auditory and visual modalities are presented alone or simultaneously, to identify the cues that contribute most to subjects' performance in these conditions, and to ascertain whether presenting the two sources of information together yields a benefit (see the sketch after this list).
2. To investigate whether the parameters most relevant to subjects' judgments in perceptual tasks are the same as those governing interceptive actions.
3. To understand the processing of spatiotemporal information within audiovisual events and the role of each modality, in isolation or combined. We will ascertain whether spatial and temporal information is modality dependent, task dependent, or both. We will also assess the role of cross-modal congruency, asking whether, when one modality does not follow the physical rules conveyed by the other, performance with each sense, alone or combined, benefits or suffers.
4. To test how the properties of the workspace affect performance in action-perception tasks. Furthermore, we aim to study the effect of unimodal or multimodal feedback on motor initiation, execution, and control. In particular, we would like to know whether input from the upper limb affects performance in temporal, spatial, and spatiotemporal action-prediction tasks.
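Regarding the bimodal benefit in objective 1, one standard yardstick (an illustrative assumption on our part, not a commitment of the project) is the maximum-likelihood model of cue combination, under which the optimal audiovisual estimate is a reliability-weighted average of the unimodal estimates and is never less precise than the better single cue:

\[
\hat{S}_{AV} = w_A\,\hat{S}_A + w_V\,\hat{S}_V, \qquad
w_A = \frac{\sigma_V^2}{\sigma_A^2 + \sigma_V^2}, \quad
w_V = \frac{\sigma_A^2}{\sigma_A^2 + \sigma_V^2},
\]
\[
\sigma_{AV}^2 = \frac{\sigma_A^2\,\sigma_V^2}{\sigma_A^2 + \sigma_V^2} \;\le\; \min(\sigma_A^2,\, \sigma_V^2),
\]

where \(\hat{S}_A\) and \(\hat{S}_V\) are the unimodal estimates (e.g., of time-to-contact) and \(\sigma_A^2\), \(\sigma_V^2\) their variances. Observed bimodal precision can then be compared against this prediction to quantify whether the two sources of information are combined near-optimally.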
This project will allow the team to delve further into the study of the spatiotemporal processing of sensory and motor information generated by action.
Funding: BIAL Foundation