“Spooky action at a distance: Real-time VR interaction for non-real-time remote robotics” by Savkin, Quinn and Wilson


  • © Pavel A. Savkin, Nathan Quinn, and Lochlainn Wilson

Conference:


  • SIGGRAPH 2019

Type(s):


E-Tech Type(s):



Description:


    We control robots through a simulated environment in a game engine using VR, interacting with it intuitively. A major breakthrough of this system is that, even when real-time robot control is not possible, the user can still interact with the environment in real time to complete tasks. Our system consists of a robot, a vision sensor (RGB-D camera), a game engine, and a VR headset with controllers. The robot-side view is provided as a scanned 3D geometry snapshot, visualized as a point cloud.

    Given this information, two steps are required to control the robot. First, objects are annotated: given virtual 3D objects, the user places them roughly where they appear in VR, which keeps the process intuitive. A computer-vision-based optimization then refines each position to the accuracy required for robot grasping; the optimization runs on non-blocking threads to maintain a real-time experience. Second, the user interacts with the objects, assisted by a robot simulation and UI. A virtual robot gripper provides a stable grasp estimate when brought close to a target. Once an object is picked up, placing it is also assisted: in our block-construction example, each block’s alignment with other blocks is guided by its geometric characteristics, facilitating accurate placement.

    During the process, robot actions are simulated and then visualized; the simulation and assistance are processed in real time. Once an interaction is given, the simulated actions are sent to the robot and executed. Interaction and annotation can be queued without waiting for the robot to complete each step, and the user can easily abort planned actions and redo them. Our system demonstrates how powerful it is to combine game-engine technologies, VR, and robots with computer vision/graphics algorithms to achieve semantic control over time and space.
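
    The description does not name the optimization used to refine the user's rough VR placement; a common choice for this kind of point-cloud alignment is ICP. The sketch below is a hypothetical illustration, assuming Open3D for registration and a worker thread so the interactive loop is never blocked; object_mesh, scene_cloud, and rough_pose are placeholder names, not identifiers from the system.

    # Hypothetical sketch: refine a user's rough object placement against the
    # scanned point cloud with ICP, without blocking the interactive VR loop.
    # Assumes Open3D; object_mesh / scene_cloud / rough_pose are placeholders.
    from concurrent.futures import ThreadPoolExecutor

    import numpy as np
    import open3d as o3d

    executor = ThreadPoolExecutor(max_workers=1)


    def refine_pose(object_mesh: o3d.geometry.TriangleMesh,
                    scene_cloud: o3d.geometry.PointCloud,
                    rough_pose: np.ndarray) -> np.ndarray:
        """Return a 4x4 pose refined from the user's rough VR placement."""
        # Sample the annotated model so it can be registered against the scan.
        model_cloud = object_mesh.sample_points_uniformly(number_of_points=2000)
        result = o3d.pipelines.registration.registration_icp(
            model_cloud, scene_cloud,
            max_correspondence_distance=0.02,   # 2 cm search radius (assumed)
            init=rough_pose,                    # start from the VR annotation
            estimation_method=o3d.pipelines.registration.
                TransformationEstimationPointToPoint())
        return result.transformation


    def refine_pose_async(object_mesh, scene_cloud, rough_pose, on_done):
        """Run refinement on a worker thread; call on_done(pose) when ready."""
        future = executor.submit(refine_pose, object_mesh, scene_cloud, rough_pose)
        future.add_done_callback(lambda f: on_done(f.result()))
        return future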
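
    The block-construction assist is described only as using each block's geometric characteristics. One plausible reading is snapping the held block's pose to the nearest lattice position and 90-degree orientation implied by the block size; the snippet below sketches that idea under that assumption, with block_size and snap_tolerance as made-up parameters.

    # Hypothetical placement assist: snap a held block to the nearest position
    # and 90-degree yaw implied by the block dimensions, if it is close enough.
    # block_size and snap_tolerance are illustrative values, not from the work.
    import numpy as np


    def snap_block_pose(position: np.ndarray, yaw: float,
                        block_size: float = 0.05,
                        snap_tolerance: float = 0.015):
        """Return (position, yaw) snapped to the block lattice when nearby."""
        lattice = np.round(position / block_size) * block_size
        snapped_yaw = np.round(yaw / (np.pi / 2)) * (np.pi / 2)
        if np.linalg.norm(position - lattice) <= snap_tolerance:
            return lattice, snapped_yaw
        return position, yaw        # too far away: leave the pose untouched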
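
    Queuing simulated actions so the user never waits on the slower robot, and aborting planned actions so they can be redone, can be captured with a simple worker-thread queue. The sketch below is a minimal stand-in for the system's actual executor, assuming send_to_robot is a placeholder for whatever transport the real system uses.

    # Minimal sketch of the non-blocking action queue: the VR user keeps
    # queueing simulated actions while a worker thread feeds them to the
    # (slow) robot. send_to_robot is a placeholder for the robot transport.
    import threading
    from collections import deque


    class ActionQueue:
        def __init__(self, send_to_robot):
            self._send = send_to_robot
            self._pending = deque()          # actions not yet sent to the robot
            self._lock = threading.Condition()
            threading.Thread(target=self._worker, daemon=True).start()

        def enqueue(self, action):
            """Called from the VR thread; returns immediately."""
            with self._lock:
                self._pending.append(action)
                self._lock.notify()

        def abort_pending(self):
            """Drop actions not yet sent, so the user can redo them."""
            with self._lock:
                aborted = list(self._pending)
                self._pending.clear()
            return aborted

        def _worker(self):
            while True:
                with self._lock:
                    while not self._pending:
                        self._lock.wait()
                    action = self._pending.popleft()
                self._send(action)           # blocks only this worker thread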


PDF:



ACM Digital Library Publication:



Overview Page: