Taylor & Francis Group
tadr_a_2007168_sm6574.csv (9.86 kB)

Unsupervised lexical acquisition of relative spatial concepts using spoken user utterances

Dataset posted on 2021-12-03, 17:40, authored by Rikunari Sagara, Ryo Taguchi, Akira Taniguchi, Tadahiro Taniguchi, Koosuke Hattori, Masahiro Hoguro, Taizo Umezaki

This paper proposes methods for the unsupervised lexical acquisition of relative spatial concepts from spoken user utterances. A robot with a flexible spoken dialog system must be able to acquire linguistic representations and their environment-specific meanings through interactions with humans, as children do. Relative spatial concepts (e.g., "front" and "right") are widely used in daily life; however, when a robot learns such concepts, it is not obvious which object serves as the reference object. We therefore propose methods by which a robot without prior knowledge of words can learn relative spatial concepts. The methods are formulated with a probabilistic model that simultaneously estimates the correct reference objects and the distributions representing the concepts. Experimental results show that relative spatial concepts, and a phoneme sequence representing each concept, can be learned even when the robot does not know which located object is the reference object. Additionally, we show that two processes in the proposed method improve estimation accuracy: generating candidate word sequences with a class n-gram model and selecting word sequences using location information. Furthermore, we show that clues to the reference objects improve accuracy even as the number of candidate reference objects increases.
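To illustrate the core idea of jointly scoring candidate reference objects against a distribution over relative positions, here is a minimal sketch. It is not the authors' model: the scene, the object names, the use of a von Mises-style angular density, and the fixed concept parameters (`mu`, `kappa`) are all illustrative assumptions. The sketch simply shows how, given several utterances of one spatial word, the candidate reference object that makes the observed locations most probable can be selected.

```python
import math

def relative_angle(ref, loc):
    # Angle of the located position as seen from a candidate reference object.
    return math.atan2(loc[1] - ref[1], loc[0] - ref[0])

def vonmises_like(theta, mu, kappa):
    # Unnormalized von Mises density over angles; a stand-in for a learned
    # distribution representing one relative spatial concept (hypothetical).
    return math.exp(kappa * math.cos(theta - mu))

def score_reference(ref, observations, mu, kappa):
    # Joint (unnormalized) likelihood that `ref` is the reference object
    # for all observed locations associated with one spatial word.
    score = 1.0
    for loc in observations:
        score *= vonmises_like(relative_angle(ref, loc), mu, kappa)
    return score

# Toy scene (assumed): two candidate reference objects, and utterances of
# a word like "front", modeled as "in the +x direction from the reference"
# (mu = 0.0 radians).
candidates = {"table": (0.0, 0.0), "chair": (5.0, 5.0)}
observations = [(2.0, 0.1), (3.0, -0.2), (2.5, 0.0)]  # roughly +x of the table

scores = {name: score_reference(pos, observations, mu=0.0, kappa=2.0)
          for name, pos in candidates.items()}
best = max(scores, key=scores.get)
print(best)  # the table is the more consistent reference for these utterances
```

In the paper's setting this selection is done jointly with learning the concept distributions themselves (and the words naming them), rather than with fixed parameters as above; the sketch only conveys why the correct reference object becomes identifiable once the observations are pooled.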

Funding

This work was supported by an MEXT Grant-in-Aid for Scientific Research on Innovative Areas under Grant JP16H06569.
