This paper systematically investigates the influence of Chinese Sign Language modality on sentence processing, using event-related potentials (ERPs) to examine how the semantic and phonological information carried by signs affects lexical access during sentence comprehension, and over what time course. In designing the experimental materials, the semantic and phonological forms of the sentence-final target signs were systematically manipulated. In an experiment with 18 participants, three stages were observed: evidence of sign perception mapping low-level visual features onto the sign's phonological features (100–200 ms); a negativity reflecting the matching of the target sign's phonological features against contextual information (200–400 ms); and a negativity indexing the dynamic construction of semantic constraints from the discourse context (400–650 ms), which facilitates the integration of the sign's phonological and semantic information with sentence-level information. Phonologically related and semantically related signs showed similar time courses and effect directions, suggesting that lexical access in the sign language modality resembles that in spoken language and incurs comparable online processing costs. In addition, an N300 effect unique to the sign language modality was found, indexing the completion of sign phonological processing and reflecting a direct link between the recognition of phonological elements and the recognition of the sign itself. The results indicate that during sentence processing, participants primarily evaluate the match between the context and the target sign rather than the match with the sign's phonological features. This pattern may be a signature of the interaction between phonologically related and semantically related information during lexical selection.