Humanoid Robot LOLA - Vision Guided Autonomous Multi-Contact Locomotion

Science & Technology

In this video we demonstrate fully autonomous multi-contact locomotion for our humanoid robot LOLA. In contrast to our previous multi-contact videos where contact points for the feet and hands had to be specified manually by the user, this time all contacts are autonomously planned by the robot itself based on the perceived environment. The only input by the user is the desired final goal position (a horizontal position and rotation around the vertical axis). The robot then automatically computes a feasible contact sequence (if possible) and connects the discrete poses with kinematically and dynamically feasible trajectories while considering multi-contact effects (external forces applied at the hands of the robot).
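The planning step described above (user supplies only a goal pose; the robot searches for a feasible sequence of discrete contacts over the perceived environment) can be sketched in miniature. This is purely illustrative and not LOLA's actual planner: the grid model, the function `plan_contact_sequence`, and the A*-style search are all assumptions standing in for the real perception-driven multi-contact planner.

```python
import heapq
import math

def plan_contact_sequence(start, goal, feasible, max_step=1):
    """Toy stand-in for contact planning: A* over grid cells that the
    (hypothetical) perception system marked as feasible support regions.
    Returns a start-to-goal sequence of contact cells, or None."""
    def h(c):  # admissible Euclidean heuristic to the goal cell
        return math.hypot(c[0] - goal[0], c[1] - goal[1])

    frontier = [(h(start), start)]
    came_from = {start: None}
    cost = {start: 0.0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            seq = []  # reconstruct the discrete contact sequence
            while cur is not None:
                seq.append(cur)
                cur = came_from[cur]
            return seq[::-1]
        # expand reachable neighboring contacts within the step limit
        for dx in range(-max_step, max_step + 1):
            for dy in range(-max_step, max_step + 1):
                nxt = (cur[0] + dx, cur[1] + dy)
                if nxt == cur or nxt not in feasible:
                    continue
                new_cost = cost[cur] + math.hypot(dx, dy)
                if new_cost < cost.get(nxt, float("inf")):
                    cost[nxt] = new_cost
                    came_from[nxt] = cur
                    heapq.heappush(frontier, (new_cost + h(nxt), nxt))
    return None  # no feasible contact sequence exists

# Example: a 5x5 patch of feasible ground with a gap the planner must skirt.
feasible = {(x, y) for x in range(5) for y in range(5)} - {(2, 0), (2, 1), (2, 2)}
seq = plan_contact_sequence((0, 0), (4, 0), feasible)
```

In the real system each "cell" would be a full 6-DoF foot or hand pose, feasibility would come from the vision pipeline, and the resulting discrete poses would then be connected by kinematically and dynamically feasible whole-body trajectories, as the description notes.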
Through these experiments we demonstrate the coupling of LOLA's new computer vision (Chair for Computer Aided Medical Procedures & Augmented Reality, TUM) and walking pattern generation (Chair of Applied Mechanics, TUM) systems. All algorithms run onboard and in real time. The scene is not known to the robot in advance; it has to detect it on its own.
For more information on LOLA please see our project's website:
www.mec.ed.tum.de/en/am/resea...
This work is supported by the German Research Foundation (DFG, project number 407378162).

Comments: 3

  • @johanndirnberger3669
    @johanndirnberger3669 2 years ago

    Very good, Lola! That already makes you smarter than many of my fellow humans, who ignore stair railings and then fall down the stairs! 😉

  • @DuncanCalvert
    @DuncanCalvert 1 year ago

    Epic! The visuals are incredible. Was it done in post, or do you have those available during the run?

  • @user-rm7er4rd8d
    @user-rm7er4rd8d 1 year ago

    I really love the robot you designed, and the video is also awesome!!!
