CppCon 2015: John Farrier “Demystifying Floating Point”
Comments: 31
@soulstudiosmusic 8 years ago
Perplexing subject for a good talk.
@skilz8098 6 years ago
The comment about stack overflow is fairly on point.
@CharIie83 6 years ago
great talk
@georganatoly6646 4 years ago
Very good talk. It wouldn't have occurred to me to group terms by like exponents.
@acmdz 8 years ago
Oh, that's really helpful!
@victornoagbodji 8 years ago
great talk : )
@richardcavell 6 years ago
Kudos for the Guru Meditation at 6:15
@Silvertestrun 7 months ago
Thank you!
@dascandy 8 years ago
There are the same number of floats between 0.25 and 0.5, between 0.125 and 0.25, between 0.0625 and 0.125, and so on, so if you count the floats between 0 and 1, that's far more than between 1 and 2. In fact, there are 255 "groups" of numbers, each spanning N to 2*N, and each containing 8388608 (2^23) values.
@justcurious1940 1 year ago
You're totally correct, but I just want to add that we only have 254 groups for normal values, plus 1 exponent value for infinities/NaNs and 1 for subnormals.
@enhex 8 years ago
Why does the range 0.0 to 0.1 have more precision? Is it because every floating-point number is unique, and there's a lot of overlap up to 2^23?
@alexloktionoff6833 1 year ago
Can anybody provide a link to those 50 equations /*test-cases???*/ that must work the same on all IEEE 754 machines?
@janasandeep 8 years ago
At 11:08 (slide 24), the binary representation of 1.0e-37 shown here is different from what I got using Visual Studio: EA1C0802 (little-endian). Why is that? The latter does not seem to be a denormalized number.
@soulstudiosmusic 8 years ago
+sandeep jana See the section where he mentions the three different types of floating-point numbers available in VS.
@hanyouchu4661 7 years ago
The difference between float and double for the Kahan version of triangle area is due to input conversion from decimal to binary. If you cast the float input to double, the results are almost identical.
@alexloktionoff6833 1 year ago
Oh yeah, and don't forget to put the f suffix on float constants in the code to avoid double rounding!
@ehsanamini8501 2 years ago
@4:31 and @8:10 why does -1^0 compute to zero? Shouldn't it be 1? Is it a typo or am I missing something?
@ehsanamini8501 2 years ago
Oh, I get it now: a sign bit of 1 represents a negative number, 0 a positive one.
@RajeshKumar28sep 2 years ago
5:22, for 64 bits it should be 11 bits of exponent.
@MaherBaba 8 years ago
What do you mean?
@Calm_Energy 5 years ago
Units in the last place = ULPs. That's a measurement I'd never seen before. Some compilers can also control how rounding works via options.
@alexloktionoff6833 1 year ago
But not all hardware follows bit-exact rounding; ±0.5 ULP is all we can rely on...
@Courserasrikanthdrk 7 years ago
OK, the talk has covered some very important maths topics :-)
@Xeverous 5 years ago
-ffast-math not mentioned?
@MaherBaba 8 years ago
You see everything, don't you?
@dascandy 8 years ago
Your 64-bit floats occupy 65 bits.
@pranavjain3905 6 years ago
The exponent field should have 11 bits.
@andik70 8 years ago
16:56 'on the CPU math is done exactly and then rounded to give it back to you' did you really say that? Do you really mean that?
@richardcavell 5 years ago
Theoretically, that’s what happens. Think conceptually.
@UpstreamNL 5 years ago
Confusing talk. This guy is all over the place; no slide connects to the previous one.