In audiovisual speech perception, visual information from a talker's face during mouth articulation is available before the onset of the corresponding audio speech, allowing the perceiver to use visual information to predict the upcoming audio. This prediction from phonetically congruent visual information modulates audiovisual speech perception and leads to a decrease in N1 and P2 amplitudes and latencies compared with the perception of audio speech alone. Whether audiovisual experience, such as musical training, influences this prediction is unclear, but if so, it may explain some of the variation observed in previous research. The current study addresses whether audiovisual speech perception is affected by musical training, first assessing N1 and P2 event-related potentials (ERPs) and, in addition, inter-trial phase coherence (ITPC). Musicians and non-musicians were presented with the syllable /ba/ in audio-only (AO), video-only (VO), and audiovisual (AV) conditions. With the predictive effect of mouth movement isolated from the AV speech (AV-VO), results showed that, compared with audio speech, both groups had a shorter N1 latency and reduced P2 amplitude and latency. Moreover, both groups showed lower ITPC in the delta, theta, and beta bands during audiovisual speech perception. However, musicians showed significant suppression of N1 amplitude and desynchronization in the alpha band in audiovisual speech, neither of which was present for non-musicians. Collectively, the current findings indicate that early sensory processing can be modified by musical experience, which in turn may explain some of the variation in previous AV speech perception research.
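For reference, ITPC is conventionally computed from the single-trial phase angles obtained by time-frequency decomposition of the EEG. A standard formulation, assuming this study follows the conventional definition (sometimes called the phase-locking factor), is

$$\mathrm{ITPC}(f,t) = \left|\frac{1}{N}\sum_{n=1}^{N} e^{i\varphi_n(f,t)}\right|,$$

where $\varphi_n(f,t)$ is the phase of trial $n$ at frequency $f$ and time $t$, and $N$ is the number of trials. Values range from 0 (phases uniformly distributed across trials) to 1 (perfect phase alignment), so the lower delta-, theta-, and beta-band ITPC reported for AV speech reflects weaker cross-trial phase alignment than in the audio-only condition.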