“Spooky action at a distance: Real-time VR interaction for non-real-time remote robotics” by Savkin, Quinn and Wilson
Conference:
Type(s):
E-Tech Type(s):
- 3D Scanning
Entry Number: 10
Title:
- Spooky action at a distance: Real-time VR interaction for non-real-time remote robotics
Developer(s):
- Savkin, Quinn and Wilson
Description:
We control robots through a simulated environment in a game engine, interacting with it intuitively in VR. A major breakthrough of this system is that even when real-time robot control is not possible, the user can still interact with the environment in real time to complete tasks. Our system consists of a robot, a vision sensor (an RGB-D camera), a game engine, and a VR headset with controllers. The robot-side view is provided as a scanned 3D geometry snapshot, visualized as a point cloud. Given this information, the user controls the robot in two steps.

First, objects must be annotated: the user places virtual 3D objects roughly where their real counterparts appear in VR, which keeps the process intuitive. A computer-vision-based optimization then refines each pose to the accuracy required for robot grasping; the optimization runs on non-blocking threads to maintain a real-time experience.

Second, the user interacts with the objects, assisted by a robot simulation and UI. When the virtual robot gripper is brought close to a target, it provides a stable grasp estimate. Once an object is picked up, placement is assisted as well: in our block-construction example, each block is aligned with its neighbors using its geometric characteristics, facilitating accurate placement. Throughout, robot actions are simulated and then visualized; the simulation and assistance run in real time.

Once an interaction is confirmed, the simulated actions are sent and executed. Interaction and annotation steps can be queued without waiting for the robot to complete each one, and the user can easily abort planned actions and redo them. Our system demonstrates the power of combining game-engine technology, VR, and robots with computer vision and graphics algorithms to achieve semantic control across time and space.
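As an illustration of how the non-blocking pose refinement in the first step might be structured, here is a minimal Python sketch: the user's rough VR placement seeds an ICP-style refinement against the scanned point cloud, run on a worker thread so the VR render loop never stalls. The function names (`refine_pose_icp`, `request_refinement`) and the choice of point-to-point ICP are our assumptions; the description does not specify the optimizer used.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from scipy.spatial import cKDTree

def refine_pose_icp(rough_pose, obj_pts, scene_pts, iters=30):
    """Point-to-point ICP refining a rough 4x4 pose from a VR annotation
    against the scanned scene point cloud (a hypothetical stand-in for
    the system's computer-vision-based optimization)."""
    pose = rough_pose.copy()
    tree = cKDTree(scene_pts)
    for _ in range(iters):
        # Transform object points by the current pose estimate.
        src = obj_pts @ pose[:3, :3].T + pose[:3, 3]
        _, idx = tree.query(src)              # nearest scene neighbours
        tgt = scene_pts[idx]
        # Kabsch / SVD alignment of src onto tgt.
        mu_s, mu_t = src.mean(0), tgt.mean(0)
        H = (src - mu_s).T @ (tgt - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # guard against reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        delta = np.eye(4)
        delta[:3, :3], delta[:3, 3] = R, t
        pose = delta @ pose                   # compose incremental update
    return pose

executor = ThreadPoolExecutor(max_workers=1)

def request_refinement(rough_pose, obj_pts, scene_pts):
    # Submit to a worker thread so the VR frame loop never blocks;
    # poll future.done() each frame and apply future.result() when ready.
    return executor.submit(refine_pose_icp, rough_pose, obj_pts, scene_pts)
```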
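The block-alignment assistance in the second step could, for example, snap a held block onto the lattice implied by already-placed blocks. The sketch below, with an assumed block edge length and yaw-only rotation, is one plausible reading of "alignment using geometric characteristics", not the system's actual logic.

```python
import numpy as np

BLOCK_EDGE = 0.04  # assumed block edge length in metres (illustrative)

def snap_block_pose(pos, yaw_rad, block_size=BLOCK_EDGE):
    """Snap a held block's position to multiples of the block edge and
    its yaw to the nearest 90 degrees, so it aligns with neighbours."""
    snapped_pos = np.round(np.asarray(pos) / block_size) * block_size
    snapped_yaw = np.round(yaw_rad / (np.pi / 2)) * (np.pi / 2)
    return snapped_pos, snapped_yaw
```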
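Finally, decoupling real-time interaction from slow robot execution suggests a queue of planned actions that the user keeps extending while the robot catches up, with planned actions abortable before dispatch. A hypothetical sketch of such a queue:

```python
from dataclasses import dataclass
from collections import deque

@dataclass
class RobotAction:
    name: str
    target_pose: object
    status: str = "planned"   # planned -> sent -> done / aborted

class ActionQueue:
    """Decouples real-time VR interaction from non-real-time robot
    execution: the user enqueues simulated actions without waiting,
    and a sender process drains the queue at the robot's pace."""
    def __init__(self):
        self._pending = deque()

    def enqueue(self, action: RobotAction):
        self._pending.append(action)

    def abort_last(self):
        # A planned action can be aborted before it is sent, then redone.
        if self._pending and self._pending[-1].status == "planned":
            return self._pending.pop()
        return None

    def next_to_send(self):
        # Called by the robot-side sender; marks the action as dispatched.
        if self._pending:
            action = self._pending.popleft()
            action.status = "sent"
            return action
        return None
```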