Burke, Prewett, and Elliot (2006) used a meta-analysis to examine the effect that multimodal processing may have on target acquisition tasks. They compared studies of multitasking in which tasks provided visual feedback alone, visual-auditory feedback, or visual-tactile feedback. According to Wickens's Multiple Resource Model (1980), conditions that present multiple modalities should yield better performance on target acquisition tasks than a single modality. The analysis bore this out: both multimodal feedback conditions showed higher mean effect sizes for reaction time and task success than visual feedback alone, indicating that both forms of multimodal feedback were more effective in enhancing performance on target acquisition tasks. This supports the idea that conveying information through two different modalities can reduce the cognitive processing effort required for effective task performance.
Vision dominates, accounting for roughly 80% of our perception of the world, and when another modality conflicts with what we see, vision usually wins out (Guttman, Gilroy, & Blake, 2005). When vision interacts with other sensory modalities, it appears to alter the perception of those modalities (Shimojo & Shams, 2001). This has been observed in interactions between perceived tactile and visual information. For example, if your eyes are closed and a finger or other object is held very close to your skin, you may perceive nothing at all until the object makes contact. If you watch someone move an object toward your skin, however, you may perceive a touch before contact is actually made, the visual information altering your perception of the tactile event (Rock & Victor, 1964). There can also be discrepancies between what is heard and what is seen, as in the McGurk effect (McGurk & MacDonald, 1976). In the classic demonstration, an auditory syllable such as "ba" is paired with lip movements mouthing "ga"; even though the recording plays "ba," listeners tend to perceive the fused syllable "da" once the auditory and visual stimuli are combined. This suggests that when multiple modalities work together to complete a single task, we rely primarily on vision for feedback.