SLAM-Course - 03 - Bayes Filter (2013/14; Cyrill Stachniss)

Science & Technology

Comments: 48

  • @hemantyadav6501 · 7 years ago

    the best videos a person can find free of cost

  • @shaunvonermirzo7581 · 5 years ago

    Thank you Cyrill, you've made our lives as students easier

  • @Uditsinghparihar · 5 years ago

    One of the best lectures I have seen. I earlier watched all the lectures in this series at 1.5x to 2x speed and started working on the bot. After implementing some ROS packages, I am back to watch the lectures again at normal speed, pausing occasionally, and appreciating them more.

  • @sciencetube4574 · 3 years ago

    This lecture really helped me understand the idea behind the Bayes filter. I had read a little about it and heard a few basic concepts, but this really connected all of the loose ends. Thank you!

  • @jays907 · 3 years ago

    Thank you for keeping these videos up, and even for the new video you posted on the Bayes filter! Very good lectures; they even make me want to go to where you're teaching!

  • @jinseoilau2543 · 5 years ago

    wonderful courses, thanks!

  • @victorsheverdin3935 · 9 months ago

    Thank u!

  • @1volkansezer · 5 years ago

    Thanks for the great lecture, professor. I would like to make a clarification for 46:04, where you explain the max-range effect of the measurement model. I think the reason for that part is not the obstacle 5 m away for a 4 m range sensor. I think the real reason is: sometimes the sensor may fail to measure an object even if it is right in front of it (say 2 m away), and gives us the max-range result (4 m), which is a sensor failure. That part models these kinds of errors, I guess, doesn't it?
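
    A rough sketch of where such a max-range term typically sits in a beam-style range likelihood may help here. This is a minimal illustration in the spirit of the beam-based model from the lecture; the mixture weights and noise parameters below are placeholder values, not numbers from the course:

        import numpy as np

        def beam_model_prob(z, z_expected, z_max,
                            w_hit=0.7, w_short=0.1, w_max=0.1, w_rand=0.1,
                            sigma_hit=0.2, lambda_short=1.0):
            """Mixture likelihood p(z | expected range); placeholder parameters."""
            # Gaussian around the expected range (a correct "hit")
            p_hit = np.exp(-0.5 * ((z - z_expected) / sigma_hit) ** 2) / (sigma_hit * np.sqrt(2 * np.pi))
            # Exponential term for unexpected obstacles closer than the expected range
            p_short = lambda_short * np.exp(-lambda_short * z) if 0.0 <= z <= z_expected else 0.0
            # Point mass at z_max: the sensor failures described in the comment above
            # (no return detected, so the device reports its maximum range)
            p_max = 1.0 if np.isclose(z, z_max) else 0.0
            # Uniform term for random, unexplained measurements in [0, z_max)
            p_rand = 1.0 / z_max if 0.0 <= z < z_max else 0.0
            return w_hit * p_hit + w_short * p_short + w_max * p_max + w_rand * p_rand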

  • @romagluskin5133 · 8 years ago

    In the velocity-based model, where we assume that the robot receives the command with parameters (v, w) and executes it for a predefined time interval delta-t, shouldn't we also include in the model some uncertainty about the robot's internal clock? Or can it just be represented as a scaling term for the uncertainty of executing (v, w)?

  • @Paarth2000 · 7 years ago

    Amazing lecture series - a good distillation of the probabilistic concepts. However, a question: when predicting x(t) using odometry, why do we use a triple transform, i.e. initial rotation, translation, final rotation? Given that the Bayes formulation is inherently recursive, i.e. x(t) => x(t+1) => x(t+2), one would imagine that the second rotation would naturally be the initial part of the next estimation, i.e. in the estimation of x(t+2). Otherwise it appears (naively) that we might end up double counting the second rotation.

  • @wahabfiles6260 · 4 years ago

    At 5:50 it is mentioned that if we have a sensor bias, then previous measurements can help us get a better estimate. I want to know how, because the previous measurements were also taken with the same sensor, so they have the same bias?

  • @GaryPaluk · 6 years ago

    Hi Cyrill... Thanks for your great videos on SLAM and robotics. At 35:58 - Are you basically just saying this is a 2D quaternion using spherical linear interpolation (slerp)?

  • @giwrgos1349 · 5 years ago

    great lectures

  • @arthurew8523 · 3 years ago

    amazing class

  • @vicalomen · 6 years ago

    What happens in the velocity model when w = 0? What can you do in that case?
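
    One common way to handle the w = 0 case asked about above is to fall back to the straight-line limit of the circular-arc equations as w goes to zero. A minimal sketch under that assumption (not from the course materials):

        import numpy as np

        def velocity_motion_step(x, y, theta, v, w, dt, eps=1e-6):
            """Noise-free velocity motion model with a straight-line fallback for w ~ 0."""
            if abs(w) < eps:
                # Limit of the arc equations as w -> 0: drive straight ahead
                x_new = x + v * dt * np.cos(theta)
                y_new = y + v * dt * np.sin(theta)
            else:
                r = v / w  # radius of the instantaneous circular arc
                x_new = x - r * np.sin(theta) + r * np.sin(theta + w * dt)
                y_new = y + r * np.cos(theta) - r * np.cos(theta + w * dt)
            theta_new = theta + w * dt
            return x_new, y_new, theta_new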

  • @qutibamokadam879 · 5 years ago

    Hi Dr. Stachniss, hi guys, I have a question: how can I add the additional noise term to the final orientation in the velocity model? Could you help me?

  • @SaiManojPrakhya · 10 years ago

    I have a small doubt with respect to the Markov assumption used to reduce the first complex term to p(z_t | x_t). As you said that having previous observations and control commands helps to get better estimates, why is it that this assumption is made?

  • @CyrillStachniss · 10 years ago

    Otherwise we would not end up with such an easy and effective algorithm - and the approximation error can be assumed to be small, especially when eliminating systematic errors beforehand through calibration.

  • @nicolasperez4292 · 3 years ago

    At 5:00 I didn't quite get how you applied Bayes' rule. How are you able to swap out only z_t?
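
    For reference, the step at 5:00 applies Bayes' rule to z_t while keeping all earlier observations and commands as background conditioning; written out, it is

        p(x_t \mid z_{1:t}, u_{1:t})
          = \frac{p(z_t \mid x_t, z_{1:t-1}, u_{1:t})\, p(x_t \mid z_{1:t-1}, u_{1:t})}
                 {p(z_t \mid z_{1:t-1}, u_{1:t})}
          = \eta\, p(z_t \mid x_t, z_{1:t-1}, u_{1:t})\, p(x_t \mid z_{1:t-1}, u_{1:t})

    so only z_t is "swapped out"; everything else stays behind the conditioning bar.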

  • @ahmadalghooneh2105 · 4 years ago

    thank you

  • @Yanni89- · 9 years ago

    When you are talking about the odometry model, does the robot have to make these motions in reality, or is that the effective path it will take in the end? If we for instance have a car that makes a lot of curves, but the effective path can be summarized as shown in your odometry model in the slides, meaning two rotations and a translation, could the movement still be simplified like that, or does the full path traveled with all the different curves have to be taken into account? In that case the model would be very complicated, right? This was not entirely clear to me, so thank you for any help :)

  • @CyrillStachniss · 9 years ago

    Yannick M The model describes the intended motion of the robot/car between two time steps. From t to t+1, we consider a simple rigid body transformation, basically from the start configuration to the end configuration at t+1. But if you chain all commands starting from t = 1 ... T, you get a (discretized) trajectory.

  • @Yanni89- · 9 years ago

    Cyrill Stachniss Thank you for the reply! I got it now :)

  • @sagy90 · 3 years ago

    Where can we find the tutorials and exercises of this course?

  • @rajatalak · 7 years ago

    At 13:42, shouldn't we assume that the control action u_t depends on the current state x_t? Control can't be oblivious to the system state, can it? If so, then knowing u_t we may be able to infer something about x_{t-1}, and the two will not be independent. Is this true? And if it is, then we can't use independence to ignore u_t in the last equation.

  • @donaldslowik5009 · 4 years ago

    Yes, that's the approximation/simplification which he says may or may not be true, at ~13:30. So yes, u_t might reasonably depend on x_{t-1}, so knowledge of u_t would tell us something about x_{t-1}. But since u_t is informed about x_{t-1} only through z_{1:t-1} and u_{1:t-1}, it can't provide any more info about x_{t-1}.
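
    Written out, the simplification discussed in this thread (the step at ~13:42 in the lecture) is the assumption

        p(x_{t-1} \mid z_{1:t-1}, u_{1:t}) \;=\; p(x_{t-1} \mid z_{1:t-1}, u_{1:t-1})

    i.e. the future command u_t is assumed to carry no extra information about the past state x_{t-1} beyond what the earlier observations and commands already provide.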

  • @OttoFazzl · 6 years ago

    For autonomous cars we probably should not use the rotation-translation-rotation model, because cars cannot rotate in place. Therefore, the circular motion model as described at 35:18 should be more appropriate.

  • @ilanaizelman3993 · 5 years ago

    That's wrong. My autonomous car rotates in place.

  • @kevinfarrell3003 · 8 years ago

    Hi all, I have a silly question. Around 27 minutes in, when we are talking about the odometry model, why do we measure the translation as the Euclidean distance between the two poses? While that does make sense, I thought the odometry model meant measuring the rotation of the robot wheels, so I was expecting some formula that included RPM and the wheel radius. I am sure I am missing something silly, however. Great lecture btw. Looking forward to watching the rest :)

  • @CyrillStachniss · 8 years ago

    +Kevin Farrell Most robot control systems provide the pose in a robot coordinate frame, so for the prediction step, you need to compute the rigid body transformation between two poses and use a noise model for it. The Rotate-Translate-Rotate model is just one possible choice; you can take others as well.
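
    As a concrete illustration of computing that relative transformation, here is a minimal sketch of the rotate-translate-rotate decomposition between two consecutive odometry poses (x, y, theta); a sketch under the standard definition, not the course reference code:

        import numpy as np

        def odometry_params(pose_prev, pose_curr):
            """Decompose the relative motion into (rot1, trans, rot2)."""
            x0, y0, th0 = pose_prev
            x1, y1, th1 = pose_curr
            # Translation: Euclidean distance between the two reported poses
            trans = np.hypot(x1 - x0, y1 - y0)
            # First rotation: turn from the old heading towards the new position
            rot1 = np.arctan2(y1 - y0, x1 - x0) - th0
            # Second rotation: remaining heading change after the translation
            rot2 = th1 - th0 - rot1
            return rot1, trans, rot2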

  • @NitinDhiman · 9 years ago

    I have a doubt regarding the Bayes expansion of bel(x_t) on slide no. 4. As per my derivation, the denominator should have a term P(Z_t | Z_{1:t-1}, u_{1:t}). I am not able to understand how this term is subsumed into the constant, since Z_t depends on u_t.

  • @CyrillStachniss · 9 years ago

    The whole denominator sits in the normalization constant.

  • @NitinDhiman · 9 years ago

    Thanks for the reply. I am not able to comprehend it. P(Z_t | Z_{1:t-1}, u_{1:t}) is not constant, as it depends on u_t.

  • @Superslimjimmy · 9 years ago

    Nitin Dhiman The original expression is bel(x_t), which is only a function of x_t. It states that bel(x_t) = p(x_t | z_{1:t}, u_{1:t}), which indicates that z_t and u_t are given (i.e. known), so P(Z_t | Z_{1:t-1}, u_{1:t}) would be a constant.
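
    Put into formulas (using the lecture's notation; the integral identity is standard and not stated explicitly in this thread): because z_t and u_{1:t} are fixed, observed values, the denominator does not depend on x_t and can be absorbed into the normalizer

        \eta = \frac{1}{p(z_t \mid z_{1:t-1}, u_{1:t})}
             = \left( \int p(z_t \mid x_t)\, \overline{bel}(x_t)\, dx_t \right)^{-1}

    which in practice is recovered at the end by normalizing the updated belief so that it integrates (or sums) to one.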

  • @GCOMRacquet · 10 years ago

    Could someone explain to me what kind of information the odometry measurements report to us (talking about the measured ones)? I mean, does the robot have an internal coordinate system, and what relation does it have to the global coordinates? Or do we simply measure each time from point x_{t-1} = 0 to x_t = 2.5 meters (for example) and use this information to get the rotations and the translation? Just what kind of information are x_{t-1} and x_t? Since in the example we need x, y, and orientation to calculate the 3 steps, the robot must have some kind of coordinate system, or am I totally wrong ^^

  • @CyrillStachniss · 10 years ago

    It depends on the platform. Most systems (e.g., a Pioneer) have an internal coordinate system and integrate the motion commands within that local frame (which drifts). The pose in this frame is reported to the outside world. Thus in most cases, one uses the internal coordinate frame to compute the relative motion, which is used as the odometry in the methods presented here.

  • @deepduke4188 · 3 years ago

    Could anyone tell me what's the difference between the beam-endpoint model and the ray-casting model?

  • @mohammadjavadalipourahmadc9424 · 2 years ago

    Thanks a lot for the lecture. How can we find the lecture slides of the whole course?

  • @CyrillStachniss · a year ago

    Send me an email

  • @devyanivarshney1100 · 2 years ago

    I am a bit confused by p(x(t) | x(t-1), u(t)). In a motion model for prediction, at (t-1) we need u(t-1) and x(t-1) to predict x(t). u(t) is a future move at time (t). I guess it should be p(x(t) | x(t-1), u(t-1)). Kindly let me know where I am going wrong.

  • @CyrillStachniss · 2 years ago

    It depends on how you define u_t. I used the notation from the Probabilistic Robotics book, where u_t leads from x_{t-1} to x_t. I guess you mean the right thing but are probably used to the other notation.

  • @TeoZarkopafilis · 5 years ago

    Why is omega in the denominator at around 32:56?

  • @hairynutsack9704 · 5 years ago

    v = omega × r, i.e. the velocity is the cross product of the angular velocity and the distance from the rotation axis, so r = v/omega. Initially (when the orientation is theta), the position term is (v/omega) sin(theta), and after delta t it is (v/omega) sin(theta + omega*delta t). The final robot pose is (x', y', theta') = (x, y, theta) + displacement over delta t = (x, y, theta) + (r_final - r_initial).
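
    The same relations written out cleanly (the standard velocity motion model from the lecture; omega ends up in the denominator because the radius of the driven arc is r = v/omega):

        r = \frac{v}{\omega}, \qquad
        \begin{pmatrix} x' \\ y' \\ \theta' \end{pmatrix}
        =
        \begin{pmatrix} x \\ y \\ \theta \end{pmatrix}
        +
        \begin{pmatrix}
          -\frac{v}{\omega}\sin\theta + \frac{v}{\omega}\sin(\theta + \omega\,\Delta t) \\
          \phantom{-}\frac{v}{\omega}\cos\theta - \frac{v}{\omega}\cos(\theta + \omega\,\Delta t) \\
          \omega\,\Delta t
        \end{pmatrix}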

  • @niravc10 · 7 years ago

    Will you make your assignments public?

  • @CyrillStachniss · 7 years ago

    Yes, the assignments are public; see the course website for WS 13/14, taught by myself at Freiburg University. The solutions, however, are not public.

  • @UrbanPretzle · 3 years ago

    Solutions are not public, but here are my solutions if you want to take a look or whatnot. Let me know if you spot anything wrong with them: github.com/conorhennessy/SLAM-Course-Solutions

  • @sau002 · 3 years ago

    Quite unlike the other stellar videos that you have published, this is far too abstract! I have no clue what problem we are attempting to solve. You lost me.
