What may eventually connect engineers and linguists most is their common interest in language, and more specifically in language technology: engineers build increasingly intelligent robots that should ideally communicate with humans through language, while linguists wish to verify their theoretical understanding of language and speech through practical implementations. Robotics is thus a place for the two to meet. However, speech, especially in spontaneous communication, often resists the usual generalizations: the sounds you hear are not the sounds described in a laboratory, the words you read in a written text may be hard to identify through speech segmentation, and the sequences of words that make up a sentence are often too fragmented to count as a "real" sentence from a grammar book. Yet humans communicate, and most often successfully. Typically this is achieved through cognition: people do not use words in isolation but in context, combining voices and gestures in a semantic context as well as in a dynamically changing, multimodal situational context. Each individual does not simply pick out words from the flow of a verbal interaction, but also observes and reacts to others, using multimodal cues as points of reference and inference for navigating the communication. It is reasonable to believe that participants in a multimodal communication event follow a set of general, partly innate rules based on a general model of communication. The model presented below interprets numerous forms of dialogue by uncovering their syntax, prosody, and overall multimodality within the HuComTech corpus of Hungarian. The research aims at improving the robustness of natural language technology for spoken language.
- Incompleteness and Fragmentation: Possible Formal Cues to Cognitive Processes Behind Spoken Utterances