Robot Mapping using Laser and Kinect-like Sensor

Science & Technology

Comparison between real and virtual 3rd person views of a robot mapping an environment using RTAB-Map. Five books are also detected using Find-Object during the experiment.
The code is publicly available at introlab.github.io/rtabmap.
For object recognition, see introlab.github.io/find-object/.

Comments: 47

  • @user-gs8qm9rw6r · 5 years ago

    Nice work! I love this video.

  • @elmichellangelo · 2 years ago

    To think this was made a couple of years ago. Big up to you.

  • @antonisvenianakis1047 · 3 years ago

    Very interesting, thank you!

  • @magokeanu · 7 years ago

    amazing dude!

  • @VicConner · 9 years ago

    Amazing!

  • @dubmona1301 · 7 years ago

    Way of the future. Awesome.

  • @user-ps1ug9fh3w · 8 years ago

    Nice job! Could you tell me how you do the navigation?

  • @timurtt1 · 9 years ago

    Excellent demo! Could you please clarify: how do you handle new scene points discovered by the robot? Do you add all of them to your scene, or do you perform some sort of smart merging? What "add point to the scene" rate do you have?

  • @matlabbe · 9 years ago

    @Anton Myagotin The map's graph is filtered to keep around one node per meter, then a point cloud is created from each filtered node. In the visualization above, some clouds are effectively superimposed.
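
    A minimal sketch of that distance-based node filtering (not RTAB-Map's actual code; poses are assumed here to be plain (x, y) tuples along the trajectory):

        # Keep roughly one graph node per meter by dropping nodes that are
        # closer than 1 m to the last node we kept.
        import math

        def filter_graph_nodes(poses, min_distance=1.0):
            kept = [poses[0]]
            for x, y in poses[1:]:
                kx, ky = kept[-1]
                if math.hypot(x - kx, y - ky) >= min_distance:
                    kept.append((x, y))
            return kept

        # Example: a 5 m straight run sampled every 0.2 m keeps only 6 nodes.
        trajectory = [(i * 0.2, 0.0) for i in range(26)]
        print(len(filter_graph_nodes(trajectory)))  # -> 6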

  • @fighterknigh · 8 years ago

    Great job! By the way, how do you find the objects? Do you build an object point-cloud model before searching?

  • @matlabbe · 8 years ago

    +余煒森 Objects are found using RGB images. Visual features (SURF) are extracted from images of the books, then they are compared to the live RGB stream to find the same visual features.
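
    The same idea can be sketched with OpenCV (this is not Find-Object's actual code; SURF lives in opencv-contrib and may require a non-free build, and the file names below are placeholders):

        import cv2

        book = cv2.imread("book_cover.png", cv2.IMREAD_GRAYSCALE)
        frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)

        # Extract SURF keypoints/descriptors from the reference image and the live frame.
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
        kp1, des1 = surf.detectAndCompute(book, None)
        kp2, des2 = surf.detectAndCompute(frame, None)

        # Match descriptors and keep the good ones with Lowe's ratio test.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = matcher.knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.7 * n.distance]
        print("good matches:", len(good))  # many matches -> the book is likely in view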

  • @isaiabinadabrochasegura5972 · 7 years ago

    Hi, very good job. I have a question about how you power the Kinect: did you use a battery, or some kind of inverter?

  • @matlabbe · 7 years ago

    In this demo it is an Xtion Pro Live, which is powered over USB for convenience. For a Kinect v1, we could cut the wire and plug it into a 12 V DC output directly on the robot's board.

  • @kapilyadav23 · 9 years ago

    Hi, can you tell me what processor or dev board you are using to process the Kinect and laser data? By the way, nice video; I'm impressed with your results. :)

  • @mathieulabbe4889 · 9 years ago

    @kapil yadav It is a mini-ITX board with an i7 + SSD (no GPU), running Ubuntu 12.04 + ROS Hydro.

  • @barthurs99 · 6 years ago

    Oh man, this is perfect for what I'm doing. I'm making a room-mapping robot that will act like a security guard, using lidar mapping, camera-based object recognition, object following, and facial recognition. You use some of that, right?

  • @matlabbe · 6 years ago

    In this demo, SLAM is done with the lidar and the RGB-D camera using the "rtabmap_ros" package, and the books are detected using the "find_object_2d" package. There is no face detection or object following here. Cheers!

  • @ayarzuki · 3 years ago

    @matlabbe What if we combine it with object detection using the camera?

  • @kiefac · 7 years ago

    It seemed to throw away a lot of points after they were out of the camera's FOV. Is that to prevent distance inaccuracies, or to keep the performance up, or something else?

  • @matlabbe · 7 years ago

    We keep the map downsampled for rendering performance. Maybe with newer GPUs we could keep the map denser while keeping the visualization smooth. We can see at the end of the video that the rendering frame rate is already lower than at the beginning.
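
    A rough sketch of the kind of downsampling involved, using a NumPy voxel grid that keeps one point per voxel (illustrative only, not the renderer's actual code):

        import numpy as np

        def voxel_downsample(points, voxel_size=0.05):
            """points: (N, 3) array in meters; returns at most one point per voxel."""
            voxel_idx = np.floor(points / voxel_size).astype(np.int64)
            _, keep = np.unique(voxel_idx, axis=0, return_index=True)
            return points[np.sort(keep)]

        cloud = np.random.rand(100000, 3)           # fake cloud inside a 1 m cube
        print(voxel_downsample(cloud, 0.05).shape)  # far fewer points, same coverage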

  • @kiefac · 7 years ago

    @matlabbe Ah, alright.

  • @mattizzle81 · 4 years ago

    Point clouds are very memory-intensive! I am doing a similar type of point-cloud mapping on Android, using ARCore. One of the first things I noticed is how hard it is to keep that many points: if I compute points for an entire camera frame and try to keep them, the device runs out of memory after about 30 seconds. Luckily, all I really need is a bird's-eye-view perspective, so I project the points to a 2D image, and that works fine.
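
    A small sketch of that bird's-eye-view projection (illustrative, not the ARCore app's code): drop z and rasterize x, y into a fixed-size 2D image instead of storing every 3D point.

        import numpy as np

        def birds_eye_view(points, resolution=0.05, size_m=10.0):
            """points: (N, 3) array in meters, map centered on the origin."""
            cells = int(size_m / resolution)
            grid = np.zeros((cells, cells), dtype=np.uint8)
            ij = ((points[:, :2] + size_m / 2.0) / resolution).astype(int)
            valid = (ij >= 0).all(axis=1) & (ij < cells).all(axis=1)
            grid[ij[valid, 1], ij[valid, 0]] = 255  # mark occupied cells
            return grid

        cloud = np.random.rand(5000, 3) * 4.0 - 2.0  # fake points in a 4 m cube
        print(birds_eye_view(cloud).shape)           # (200, 200) occupancy image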

  • @DerekDickerson · 9 years ago

    @matlabbe So you must have a laser scanner as well, in addition to the Kinect?

  • @matlabbe · 9 years ago

    It is not required, but it increases the precision of the mapping. In this video: kzread.info/dash/bejne/kaWdrqOQoJqnobQ.html, only the Kinect is used.

  • @sylvesterfowzan5417 · 5 years ago

    I'm currently working on a humanoid robot, and we need to perform navigation and perception. Could you advise us on what hardware to use and how to do this with ROS?

  • @matlabbe · 5 years ago

    The current documentation is on ros.org: wiki.ros.org/rtabmap_ros/Tutorials. When you have specific questions, you can ask them on ROS Answers (answers.ros.org/questions/) or on RTAB-Map's forum (official-rtab-map-forum.67519.x6.nabble.com/).

  • @suzangray6483 · 6 years ago

    Hi, what is your robot acting on? Is it moving according to the laser data or the camera data, or are you driving it with a remote control? I also have a laser scanner, and I can get a 3D image of the environment and the distance to the nearest object, but I want to feed this into a tool like yours. I would be very happy if you could help me with how to do it. Thank you.

  • @matlabbe · 6 years ago

    The robot is teleoperated using a gamepad. The best way to get your data into rtabmap is to use ROS and publish the right topics. See this example for more info: wiki.ros.org/rtabmap_ros/Tutorials/SetupOnYourRobot#Kinect_.2B-_Odometry_.2B-_2D_laser
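
    A hedged sketch of what "publish the right topics" can look like with rospy (ROS 1): rtabmap_ros expects, among others, odometry and laser scan topics with consistent frame ids. The topic and frame names below are common conventions, not something prescribed here; the linked tutorial is the authoritative reference.

        #!/usr/bin/env python
        import rospy
        from nav_msgs.msg import Odometry
        from sensor_msgs.msg import LaserScan

        rospy.init_node("my_robot_driver")
        odom_pub = rospy.Publisher("odom", Odometry, queue_size=10)
        scan_pub = rospy.Publisher("scan", LaserScan, queue_size=10)

        rate = rospy.Rate(10)
        while not rospy.is_shutdown():
            odom = Odometry()
            odom.header.stamp = rospy.Time.now()
            odom.header.frame_id = "odom"
            odom.child_frame_id = "base_link"
            # ... fill odom.pose and odom.twist from your robot's encoders ...
            odom_pub.publish(odom)

            scan = LaserScan()
            scan.header.stamp = odom.header.stamp
            scan.header.frame_id = "laser_link"
            # ... fill scan.ranges and the angle/range fields from your lidar ...
            scan_pub.publish(scan)
            rate.sleep()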

  • @suzangray6483 · 6 years ago

    I understand. I have a 3DM-GX1 IMU. I want to get odometry messages from the IMU instead of from encoders, but as far as I know, linear velocity along the x and y axes cannot be obtained from an IMU. How can I measure odometry values with the IMU?

  • @matlabbe · 6 years ago

    It is possible (by integrating the acceleration twice), but it will have a lot of drift. Since you have a laser scanner, you can use it to get the x,y estimates, or you can estimate odometry with the laser scanner alone (like hector_mapping). Here is another setup: wiki.ros.org/rtabmap_ros/Tutorials/SetupOnYourRobot#Kinect_.2B-_2D_laser
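
    A toy illustration of why IMU-only position drifts (the numbers are made up): double-integrating acceleration lets even a small constant bias grow quadratically with time.

        dt, bias = 0.01, 0.05      # 100 Hz IMU with a 0.05 m/s^2 accelerometer bias
        velocity = position = 0.0
        for step in range(6000):   # 60 seconds, robot actually standing still
            accel = 0.0 + bias     # true acceleration is zero, only the bias remains
            velocity += accel * dt
            position += velocity * dt
        print(round(position, 2))  # ~90 m of position error after one minute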

  • @ripleylee5726 · 4 years ago

    Hi, may I know if you have done any project with the Intel RealSense D435i camera before?

  • @matlabbe · 4 years ago

    The D435i is integrated in the RTAB-Map standalone application (for hand-held mapping). It can also be used like the Kinect above with the rtabmap_ros package. Note that any RGB-D or stereo camera can be used with rtabmap_ros right now, as long as it complies with the standard ROS interface for images.
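
    For reference, a hedged sketch of that standard interface from the subscriber side (rospy, ROS 1): an RGB-D driver normally publishes sensor_msgs/Image topics for color and depth plus a sensor_msgs/CameraInfo. The topic names below are typical conventions, not mandated by rtabmap_ros.

        import rospy
        from sensor_msgs.msg import CameraInfo, Image

        def on_color(msg):
            rospy.loginfo("color frame %dx%d in %s", msg.width, msg.height, msg.header.frame_id)

        rospy.init_node("rgbd_listener")
        rospy.Subscriber("camera/rgb/image_rect_color", Image, on_color)
        rospy.Subscriber("camera/depth_registered/image_raw", Image, lambda m: None)
        rospy.Subscriber("camera/rgb/camera_info", CameraInfo, lambda m: None)
        rospy.spin()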

  • @AndreaGulberti · 8 years ago

    OMG

  • @masahirokobayashi911 · 7 years ago

    What kind of sensors do you use?

  • @matlabbe · 7 years ago

    In this demo: a URG-04LX lidar, an Xtion Pro Live, and wheel odometry from the robot.

  • @amirparvizi3997 · 7 years ago

    How do I download this video?

  • @Uditsinghparihar · 6 years ago

    Go to en.savefrom.net/1-how-to-download-youtube-video/ and paste the URL of any video (in your case, this video's URL) in the box.

  • @xninjas3138 · 3 years ago

    Does it use an Arduino?

  • @matlabbe · 3 years ago

    An Intel NUC (from 2009, if I remember correctly) is on the robot; the drive motors use custom boards.
