“Using mediator objects to easily and robustly teach visual objects to a robot” by Rouanet, Oudeyer and Filliat

  • © Pierre Rouanet, Pierre-Yves Oudeyer, and David Filliat

Entry Number: 100

Title:

    Using mediator objects to easily and robustly teach visual objects to a robot

Presenter(s)/Author(s):

    Pierre Rouanet, Pierre-Yves Oudeyer, and David Filliat

Abstract:


    Social robots are drawing increasing interest in both the scientific and economic communities, and one of the main issues is providing these robots with the ability to interact easily and naturally with humans. We believe that interaction issues can have a very strong impact on the whole system and should be given more attention. Current research, however, focuses mainly on the visual perception and/or machine learning issues (see, for example, Steels and Kaplan [1]). We think that by focusing on the users and on the interface, we can help them provide the learning system with very high-quality learning examples.

References:


    L. Steels and F. Kaplan, “Aibo’s first words: The social learning of language and meaning,” Evolution of Communication, vol. 4, no. 1, pp. 3–32, 2000. [Online]. Available: http://www3.isrl.uiuc.edu/junwang4/langev/localcopy/pdf/steels02aiboFirst.pdf
    F. Lömker and G. Sagerer, “A multimodal system for object learning,” in Proceedings of the 24th DAGM Symposium on Pattern Recognition. London, UK: Springer-Verlag, 2002, pp. 490–497.
    P. Rouanet, P.-Y. Oudeyer, and D. Filliat, “An integrated system for teaching new visually grounded words to a robot for non-expert users using a mobile device,” in Proceedings of the Humanoids 2009 Conference, 2009.

