“3D motion sensing of any object without prior knowledge” by Miyashita, Yonezawa, Watanabe and Ishikawa
Session/Category Title: Tracking and Transients

Abstract:
We propose a novel three-dimensional motion sensing method using lasers. Object motion information is increasingly used in a wide range of applications, and the variety of sensing targets continues to grow. However, conventional motion sensing systems lack universality: they either require devices such as accelerometers and gyroscopes to be mounted on the target, or rely on cameras, which limits the types of targets that can be tracked. Our method solves this problem and enables noncontact, high-speed, deterministic measurement of the velocity of a moving target without any prior knowledge of the target's shape or texture, and can be applied to any unconstrained, unspecified target. These distinctive features are achieved by a system consisting of a laser range finder, a laser Doppler velocimeter, and a beam controller, combined with a robust 3D motion calculation method. The motion of the target is recovered from fragmentary physical information, such as the distance and speed of the target at the laser irradiation points. From the acquired laser information, our method provides a numerically stable solution based on generalized weighted Tikhonov regularization. Using this technique and a prototype system that we developed, we also demonstrated a number of applications, including motion capture, video game control, and 3D shape integration with everyday objects.
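The abstract describes recovering a target's rigid motion from fragmentary per-beam measurements via a Tikhonov-regularized solve. The paper's exact formulation is not reproduced here, but as an illustrative sketch: if beam i hits the target at point p_i along unit direction d_i and the velocimeter reads the along-beam speed s_i = d_i · (v + ω × p_i), then the six unknowns (v, ω) are linear in the measurements and can be estimated by generalized Tikhonov regularization, x* = (AᵀA + λWᵀW)⁻¹Aᵀb. All names and parameters below (`recover_motion`, `lam`, the identity weighting `W`) are assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def recover_motion(points, dirs, radial_speeds, lam=1e-3, W=None):
    """Estimate rigid-body motion (v, omega) from laser beam measurements.

    Illustrative sketch (not the paper's implementation). Each beam i hits
    the target at points[i] along unit direction dirs[i] and measures the
    along-beam speed:
        radial_speeds[i] = dirs[i] . (v + omega x points[i])

    Solved via generalized Tikhonov regularization:
        x* = argmin ||A x - b||^2 + lam ||W x||^2
           = (A^T A + lam W^T W)^{-1} A^T b
    """
    points = np.asarray(points, dtype=float)
    dirs = np.asarray(dirs, dtype=float)
    n = len(points)
    A = np.zeros((n, 6))
    for i in range(n):
        A[i, :3] = dirs[i]                       # contribution of v
        A[i, 3:] = np.cross(points[i], dirs[i])  # d.(w x p) = (p x d).w
    b = np.asarray(radial_speeds, dtype=float)
    if W is None:
        W = np.eye(6)  # plain Tikhonov; the paper uses a weighted variant
    x = np.linalg.solve(A.T @ A + lam * (W.T @ W), A.T @ b)
    return x[:3], x[3:]  # translational velocity, angular velocity
```

With at least six well-conditioned beams the regularized normal equations are solvable even when individual measurements are noisy or sparse; the regularization weight trades bias against numerical stability when the beam geometry is degenerate.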


