Fixed-point math is better than floating point (sometimes)

In this video, we're learning about fixed-point: A different method for doing non-integer arithmetic without floats!
Floating point is a ubiquitous standard that works everywhere, but it needs specialised hardware to have any chance of running fast. This can make calculations on low-powered microcontrollers extremely expensive. Fixed point solves this, giving us a method for cheaply computing fractional results with flexible precision!
00:00:00 Intro
00:03:46 Floating point vs fixed point
00:10:07 Fixed point bit representation
00:17:10 Code: Fixed point defines
00:21:30 Getting to the integer and fractional parts
00:24:24 Sign function and representing ints in fixed point
00:32:14 Converting to and from floating point
00:36:23 Addition and subtraction
00:38:27 Multiplication
00:42:20 Division
00:48:52 Rounding operations
00:50:14 Absolute value
00:54:50 Floor
00:58:57 Getting the fractional part
01:02:38 Ceiling
01:05:37 Round
01:14:14 Motivating example: Analog to digital converter readings
01:30:10 Next time: Sines and cosines
=[ 🔗 Links 🔗 ]=
⭐️ Become a patron and get bonus videos! / lowleveljavascript
🗣 Discord: / discord
💻 Github Repo: github.com/lowbyteproductions...

Comments: 107

  • @personguy731 (24 days ago)

    I think your implementation of floor is wrong for negative numbers: it always rounds towards 0, whereas floor should round towards -inf. So, for example, floor(-18.2) should be -19, not the -18 you corrected it to. This is also what happens in Python, and it's what is shown on the Wikipedia page for IEEE 754.

  • @LowByteProductions (24 days ago)

    I looked it up, and you're right. I actually implemented truncate - which, ironically, is the thing I said I would implement before deciding to call it floor instead (thinking they were interchangeable). Thanks for setting me straight, and for proving that rounding is always more complex than you think :D
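
    A minimal sketch of the distinction, assuming a signed Q16.16 type (the fp_t / FP_FRAC_BITS names are illustrative, not necessarily the video's): in two's complement, clearing the fractional bits already rounds towards -inf, so it is truncation towards zero that needs the extra step for negatives.

        #include <stdint.h>

        typedef int32_t fp_t;                     /* assumed Q16.16 signed fixed point */
        #define FP_FRAC_BITS 16
        #define FP_ONE       ((fp_t)1 << FP_FRAC_BITS)
        #define FP_FRAC_MASK (FP_ONE - 1)

        /* floor: round towards -inf. Clearing the fractional bits of a two's
           complement value already does this for negative inputs. */
        static fp_t fp_floor(fp_t a) {
            return a & ~FP_FRAC_MASK;
        }

        /* truncate: round towards zero. Negative values with a fractional part
           have to be nudged up by one whole unit after the mask. */
        static fp_t fp_trunc(fp_t a) {
            fp_t r = a & ~FP_FRAC_MASK;
            if (a < 0 && r != a)
                r += FP_ONE;
            return r;
        }

    With this, fp_floor(-18.2) gives -19.0 and fp_trunc(-18.2) gives -18.0, matching the correction above.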

  • @Omnicypher001 (22 days ago)

    @LowByteProductions you don't need fixed point, you can just do all the math with integers and print a '.' wherever you want when you render the number on the screen.

  • @LowByteProductions (22 days ago)

    @Omnicypher001 you're describing base-10 fixed point. This video talks about base-2 (binary fixed point), which makes better use of the representation space, and is able to perform operations cheaply by taking advantage of the way computers work.

  • @warvinn (22 days ago)

    @Omnicypher001 You'd think that would work, but it falls apart as soon as you encounter e.g. multiplication. Let's say you have your number 1000 that you print as 1.000, but now when you do 1.000*1.000 you get 1000*1000=1000000, which you would print as 1000.000. You could use a tuple to keep track of where the period needs to go, but at that point you're probably better off doing it like the video instead.

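    That rescaling step is exactly what a binary fixed-point multiply adds on top of "integers with an implied point": the product of two scaled values carries the scale factor twice, so it has to be widened and shifted back down. A rough sketch, assuming Q16.16 (the names are mine, not the video's):

        #include <stdint.h>

        typedef int32_t fp_t;       /* assumed Q16.16                        */
        typedef int64_t fp_xl_t;    /* wide intermediate so the product fits */
        #define FP_FRAC_BITS 16

        /* (a * 2^16) * (b * 2^16) = (a*b) * 2^32, so shift right by 16 bits
           to get back to a single 2^16 scale factor. */
        static fp_t fp_mul(fp_t a, fp_t b) {
            return (fp_t)(((fp_xl_t)a * (fp_xl_t)b) >> FP_FRAC_BITS);
        }

    In the decimal scheme from the comment, the equivalent fix is dividing the raw product by 1000, which is why 1.000 * 1.000 comes out as 1.000 rather than 1000.000.
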
  • @amalirfan (21 days ago)

    @warvinn yeah, it is hard to abstract. It still works for smaller-scale uses, for example getting percentages: you could do (x * p) / 100. You have to do the conversions manually, but it is a nice option.

  • @typedeaf (25 days ago)

    All your content is top rate. Love the low level stuff that we don't need to know, but can't sleep without knowing.

  • @Casilios (22 days ago)

    What timing: yesterday I decided to look into fixed point numbers because I was having some problems with my floating point rasterizer. This video is immensely helpful for getting a better understanding of fixed point numbers. I'm looking forward to learning about trig functions for this stuff.

  • @caruccio (25 days ago)

    Really entertaining video. Thanks!

  • @mattymerr701 (25 days ago)

    The most annoying thing to me is that IEEE 754 support in languages usually only covers the binary formats, but the decimal floating-point formats, also covered by IEEE 754, are so much more useful even if they are slow. Things suck.

  • @LowByteProductions (25 days ago)

    It probably would have been more successful as its own standard

  • @charlieking7600 (25 days ago)

    The worst part of floating point computation is that C and C++ don't provide exactly the same result on different hardware. It's crucial for scientific computation to have a constant error margin, and any mistake can accumulate.

  • @mattymerr701 (25 days ago)

    @@LowByteProductions I think you're very right

  • @mattymerr701 (25 days ago)

    @@charlieking7600 afaik that is inherent to binary floating points and is governed by the machine epsilon which isn't consistent. That's why decimal floating points are so useful, they don't have the same issues with error

  • @angeldude101 (23 days ago)

    Decimal is a terrible base to work with, and it's a shame that it's what most of the world uses. If you're going to argue that 1/(2*5) + 2/(2*5) ≠ 3/(2*5), then I can just say that 1/3 + 1/3 ≠ 2/3, because 1/3 = 0.3333, and 0.3333 + 0.3333 = 0.6666 ≠ 0.6667 = 2/3. If we hadn't standardized on base ten, we would still care a decent amount about thirds, but no one would care about tenths. Computers can't afford to use anything but the objectively simplest possible base, which is two. Inconsistent results come from IEEE writing the spec too loosely and implementers not bothering to make sure everything was accurate, instead calling what they got "good enough". This has nothing to do with floating point using binary.

  • @edgeeffect (19 days ago)

    This video is so good... taking high level concepts that we often think of as a simple, almost atomic, operation and breaking them down to the next lower level. I like to play with assembly language for very similar reasons.

  • @LowByteProductions (17 days ago)

    Exactly!

  • @j.r.8176
    @j.r.817618 күн бұрын

    Instantly subscribed!

  • @beyondcatastrophe_ (25 days ago)

    I think what would have been nice to mention is that floating point is essentially scientific notation, i.e. 12.34 is 1.234e1, just that floats use 2^n instead of 10^n for the exponent, which is where the scaling you mention comes from

  • @LowByteProductions (25 days ago)

    Certainly - this is probably a lot clearer in the video I made about floating point a few years back. Though of course, part of what makes floats complex is the edge cases where that doesn't apply as smoothly: sub-normals/denormals, infinities, NaNs, etc

  • @aleksikuikka6271 (25 days ago)

    That's quite an important intuition. If you said that you calculated something in scientific notation with a fixed number of significant digits, nobody would think there's anything weird or arbitrary about it. There's also probably an argument to be made about the expected error in measurements of natural processes following a normal distribution, where the error is likely proportional to the scale of the mean. If you're measuring a big number, you probably expect the error to be similarly 'big'. The alternative hypothesis would be that the deviation gets smaller the bigger the scales you work with, so you'd expect the distribution to get thinner and shorter-tailed, which doesn't immediately seem like a natural assumption to me.

    Software-engineering-wise, if your hardware has a floating-point unit, I don't think there's any unanimous argument for switching away from using your hardware to the fullest. If you don't know what you're doing with your fixed point numbers, you probably shouldn't be using them; in the best case you're just adding unnecessary complexity (e.g., working with strange engineering units, adding the logic and possibly extra variables to do the calculations, etc.), assuming you don't outright lose precision or performance due to the implementation. Whereas if you do know what you're doing, and you have specific requirements where fixed point just works better at satisfying them, then by definition you probably should be using it.

  • @Burgo361 (19 days ago)

    This was really interesting, I might actually try implementing it myself for a bit of fun.

  • @luczeiler2317 (14 days ago)

    Awesome. Subscription well earned!

  • @rogo7330 (24 days ago)

    struct timespec is a great example of a fixed-point integer number. You have tv_sec, which is just the signed integer type time_t, and tv_nsec, a signed long whose only purpose is to represent values from 0 to a billion minus 1 (999,999,999) inclusive. With some helper functions you can do very robust and easy math if you treat tv_nsec as an accumulator that adds 1 to tv_sec when it overflows and subtracts 1 from tv_sec when it underflows. Easy, quick, no floats needed. Not all systems even have that kind of precision for timestamps, so nanosecond precision is good enough.
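
    A small sketch of the carry/borrow idea described above (not from the video): add the fields, then fold any nanosecond overflow or underflow back into the seconds.

        #include <time.h>

        #define NSEC_PER_SEC 1000000000L

        /* Add two timespecs whose tv_nsec fields are in [0, 1e9), then
           normalise the result back into that range. */
        static struct timespec timespec_add(struct timespec a, struct timespec b) {
            struct timespec r;
            r.tv_sec  = a.tv_sec  + b.tv_sec;
            r.tv_nsec = a.tv_nsec + b.tv_nsec;
            if (r.tv_nsec >= NSEC_PER_SEC) {   /* carry into the seconds  */
                r.tv_nsec -= NSEC_PER_SEC;
                r.tv_sec  += 1;
            } else if (r.tv_nsec < 0) {        /* borrow from the seconds */
                r.tv_nsec += NSEC_PER_SEC;
                r.tv_sec  -= 1;
            }
            return r;
        }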

  • @JobvanderZwan (22 days ago)

    You know what's also a surprisingly useful algorithm when dealing with fractions if all you have is integers? Bresenham's line algorithm! The whole "drawing a line" thing is a bit of a distraction from the true genius kernel of the algorithm: how to do error-free repeated addition of fractions, and only trigger an action every time you "cross" a whole-number boundary (in the canonical case: drawing a pixel). And all you need is three integers (an accumulator, a numerator and a denominator), integer addition, and an if-statement. Even the lowest-power hardware can do that!
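
    Stripped of the line drawing, the kernel being described might look like this (hypothetical names): keep adding the numerator, and fire an action every time the running total crosses the denominator.

        /* Repeatedly add the fraction num/den (0 <= num, 0 < den) using only
           integers, calling step() each time a whole-number boundary is crossed. */
        static void accumulate(int num, int den, int iterations, void (*step)(void)) {
            int acc = 0;
            for (int i = 0; i < iterations; i++) {
                acc += num;
                while (acc >= den) {   /* may cross more than one boundary */
                    acc -= den;
                    step();
                }
            }
        }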

  • @LowByteProductions (21 days ago)

    Ah yes, I've come across it before when building procedural generation for a roguelike!

  • @ArneChristianRosenfeldt (19 days ago)

    I have a hard time accepting that Bresenham is not just calculating with fractions as we learned in school. Probably because we did not learn to manually calculate with floats.

  • @Optimus6128 (14 days ago)

    Also, nowadays you can easily do a non-Bresenham style with fixed point adds that performs as well if not slightly better. I was suspicious of those conditional jumps in the Bresenham on modern CPUs relying on branch prediction, and my fixed point implementation was easier to think around, so I used that instead. I would like to do a Bresenham again though, to compare the performance between the two at some point.

  • @ArneChristianRosenfeldt (14 days ago)

    @@Optimus6128 I am stuck in the past. GBA or Jaguar. I don’t get why Jaguar hardware uses fixed points for lines, while the later PS1 seems to use Bresenham for edges.

  • @Optimus6128 (14 days ago)

    @ArneChristianRosenfeldt Bresenham could be good for some old hardware. Then there is the earlier thing everyone calls DDA, but there are good and bad implementations that all get called DDA, so I don't know. What I did, even on ARM hardware at the time (the GP32), was something I think people called DDA, but my version did one division at the beginning of the line, which I precalculated with a reciprocal fixed point multiply. The difference was that later, as I traversed each pixel, I was just doing an ADD and a SHIFT and nothing else. So through pixel traversal it seemed to do less than Bresenham, but not beforehand.
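
    Something like the per-pixel add-and-shift being described, as a hedged sketch: one divide up front to get a 16.16 step, then an add and a shift per scanline. The names, and the assumptions that y1 > y0, x1 >= x0 and coordinates are non-negative, are mine.

        #include <stdint.h>

        #define FRAC_BITS 16

        /* Walk a line from (x0, y0) to (x1, y1), assuming y1 > y0 and x1 >= x0. */
        static void dda_line(int x0, int y0, int x1, int y1,
                             void (*plot)(int x, int y)) {
            int32_t slope = ((int32_t)(x1 - x0) << FRAC_BITS) / (y1 - y0);
            int32_t x     = (int32_t)x0 << FRAC_BITS;
            for (int y = y0; y <= y1; y++) {
                plot((int)(x >> FRAC_BITS), y);   /* shift back to a pixel x */
                x += slope;                       /* one add per pixel       */
            }
        }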

  • @0x1EGEN (18 days ago)

    Personally I loved how easy it is to do fixed point maths using integers. Float is a complicated format that either needs a lot of code to emulate in software or a lot of silicon to do in hardware. But for fixed point, all you need is an ALU :)

  • @edgeeffect (24 days ago)

    Nice that you did this in 32-bit... I've been looking for a "nice" 32-bit fixed-point implementation for a long time... I have this idea of building a synthesizer on a network of PIC32s... and floating point, ain't nobody got time for that! ... I had in mind to do this in Zig, because then I could use `comptime` to turn my human readable constants into my chosen fixed-point format. But this is entirely an armchair theoretical project at the moment.

  • @LowByteProductions (24 days ago)

    Do it! It sounds like an awesome project. (And I love Zig by the way - I have to find a way to get it into the channel soon)

  • @edgeeffect (19 days ago)

    @LowByteProductions I'm thinking, though, that in the end I may have to stick to C++ just so that I can have operator overloading, to be able to write my expressions in a "nicer" format.

  • @JamesPerkins (24 days ago)

    One nice thing is that fixed point arithmetic gives you exactly the same result on every computer architecture, but floating point often does not... because floating point implementations make different choices with the least significant bits of number representation... not so much during simple arithmetic operations but definitely for reciprocal, trig, exponent and log. Sometimes close is not enough and exact identical results are more useful. Also, sometimes the generality of floating point requires more CPU cycles than equivalent fixed point operations....

  • @ArneChristianRosenfeldt (19 days ago)

    This is not true anymore, because all modern CPUs expect you to use 64-bit float vectors following IEEE 754 or so. Only legacy code on the vintage 8087 uses 80 bits. Even MAC has been defined to all bits since 2001 or so. And why would transcendental functions on fixed point not be implemented using Taylor series?

  • @JamesPerkins (19 days ago)

    @ArneChristianRosenfeldt Just saying, Ingenic X2100 MIPS, ARM Cortex-A53 and Intel Xeon give slightly different floating point behavior for 32-bit floating point. I do SIMD computer vision algorithm acceleration and those floating point units do not compute exactly the same results under all circumstances.

  • @ArneChristianRosenfeldt (19 days ago)

    @JamesPerkins and this is not due to the compiler? Though it should not reorder floating point instructions using algebra. Java used to save floats to memory on the 8087 to force compliant rounding. If this does not achieve the result, why is there even this option? Isn't it generally accepted that source code needs to compile bit-precise to find hacking attempts, and calculations need to also run bit-precise to allow game replays in cross-platform games (and client side calculations which match the server side, unless someone cheated)? Do those processors claim to do IEEE floats? The spec on rounding is already so long. It not only considers reproduction between CPUs, but even best possible results if something stores intermediate results as decimals.

  • @JamesPerkins (19 days ago)

    @ArneChristianRosenfeldt These are all IEEE 754 32-bit floating implementations. There are two ISAs I write to... the scalar floating point register ISA (traditional) and the vector SIMD. There are small differences in the least significant bits on certain operations. For the scalars, there are also some optional instructions implemented in more exact/slower and less exact/faster forms. Not all rounding modes are available on all architectures (esp. in the embedded architectures, replicating everything Intel does is a huge amount of additional gates). As long as I stick to the most exact and slower scalar instructions and common rounding methods, I'm usually within a least significant bit or two of exactly the same results on all architectures. When you go into the SIMD ISAs (SSE2, NEON, MSA) floating point acts generally similar, but the integer to float and back conversions, rounding mode limitations and incomplete (but faster and less gate) implementations creep in and start to make the results diverge more significantly. Which brings me back to my point... if you write code using fixed point arithmetic and standard integer operations, it's quite easy to write code which creates bit for bit identical results out to the smallest bit, as the integer operations are more consistently defined across the architectures. But it's also a lot more work, requires more careful optimization, and some operations are significantly slower. SSE is scary fast ( clock for clock). Intel must throw a huge amount of gates at that general floating point hardware that MIPS and ARM can't afford. It's quite a luxury.

  • @ArneChristianRosenfeldt (18 days ago)

    @@JamesPerkins Oh, that long video about rounding. Ah yeah, the argument was about a final conversion to decimal, but the rounding itself had to happen on every division(?) float to float. Ah, no it does not. I guess I have to read that up. I thought that floating point units do this round up even numbers and round down odd numbers for the mantissa. Ah, this may be difficult for division because I think that one algorithm goes from significant bits down to less significant and then back up. But still, we only need one more bit for rounding. For integer we just truncate. Would be nice to have this mode for all float units. I thought that floats give up normalization for small numbers to not have to do too much special operations.

  • @kilwo (23 days ago)

    Also, fp_round for positive numbers is just fp_floor(a + half), and for negative numbers fp_floor(a - half).
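
    Assuming fp_floor here means the truncate-towards-zero version the video originally implemented, the trick above gives round-half-away-from-zero. A sketch under that assumption (the names are illustrative):

        #include <stdint.h>

        typedef int32_t fp_t;                     /* assumed Q16.16 */
        #define FP_FRAC_BITS 16
        #define FP_ONE  ((fp_t)1 << FP_FRAC_BITS)
        #define FP_HALF (FP_ONE >> 1)

        /* Truncate towards zero (the behaviour discussed earlier in the thread). */
        static fp_t fp_trunc(fp_t a) {
            fp_t r = a & ~(FP_ONE - 1);
            if (a < 0 && r != a)
                r += FP_ONE;
            return r;
        }

        /* Round half away from zero: add or subtract one half, then truncate. */
        static fp_t fp_round(fp_t a) {
            return a >= 0 ? fp_trunc(a + FP_HALF) : fp_trunc(a - FP_HALF);
        }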

  • @kilwo (23 days ago)

    In fp_ceil, why use the fp_frac function? Wouldn't it be quicker to just AND with the frac mask and check whether the result is non-zero? Given that we don't actually use the value, the presence of any set bit is enough to know it has a fractional part.
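
    The shortcut being suggested, sketched out: only the presence of a fractional bit matters, so an AND against the fraction mask is enough (illustrative names, assuming Q16.16).

        #include <stdint.h>

        typedef int32_t fp_t;                     /* assumed Q16.16 */
        #define FP_FRAC_BITS 16
        #define FP_ONE       ((fp_t)1 << FP_FRAC_BITS)
        #define FP_FRAC_MASK (FP_ONE - 1)

        /* Ceiling: clear the fractional bits (which rounds towards -inf),
           then add one whole unit if any fractional bit was set at all. */
        static fp_t fp_ceil(fp_t a) {
            fp_t floored = a & ~FP_FRAC_MASK;
            return (a & FP_FRAC_MASK) ? floored + FP_ONE : floored;
        }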

  • @DMWatchesYoutube (23 days ago)

    Any thoughts on posits?

  • @ligius3 (20 days ago)

    You can do sin/cos with your library, but you already know this, just being a bit pedantic. It's the Taylor expansion but it's quite compute-heavy. You can do it without division by using some precomputed polynomials. And there's the preferred way, which you will probably present next. Hopefully it's not lookup tables :)

  • @LowByteProductions (17 days ago)

    Yep, Taylor works well in a lot of cases, though because of the factorial divisors, you end up having to deal with either really big or really small numbers. In a 32 bit integer, to get at least 4 terms, you need to dedicate 19 fractional bits. That's fine in many cases, but if your bit split is more middle of the road, a 1KiB quarter wave lookup table with linear interpolation can get you better results with less computation. The method I'm covering next is CORDIC, which is less used in the micro world these days because memory and multiplies are relatively cheap and available, but it works on just adds and shifts and has great precision.
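
    For reference, a quarter-wave table with linear interpolation along those lines might look like this: angles as 16-bit "turns" (0..65535 covering 0..2*pi), outputs in Q15, with a smaller table here for brevity rather than the full 1 KiB. All names and sizes are illustrative, not the video's.

        #include <stdint.h>
        #include <math.h>

        #define TABLE_BITS 8                        /* 256 steps per quarter wave   */
        #define TABLE_SIZE (1 << TABLE_BITS)
        #define STEP_BITS  (14 - TABLE_BITS)        /* leftover bits inside a step  */

        static int16_t quarter_sine[TABLE_SIZE + 1];

        static void sine_table_init(void) {
            const double half_pi = 1.5707963267948966;
            for (int i = 0; i <= TABLE_SIZE; i++)
                quarter_sine[i] =
                    (int16_t)lrint(sin(half_pi * i / TABLE_SIZE) * 32767.0);
        }

        /* angle: 0..65535 maps to 0..2*pi. Returns sine in Q15 (about -1..+1). */
        static int16_t fp_sin(uint16_t angle) {
            unsigned quadrant = angle >> 14;        /* which quarter of the wave      */
            unsigned pos      = angle & 0x3FFF;     /* 14-bit position inside it      */

            if (quadrant & 1)                       /* quarters 1 and 3 run backwards */
                pos = 0x3FFF - pos;

            unsigned idx  = pos >> STEP_BITS;
            unsigned frac = pos & ((1u << STEP_BITS) - 1);

            /* Linear interpolation between neighbouring table entries. */
            int32_t a = quarter_sine[idx];
            int32_t b = quarter_sine[idx + 1];
            int32_t v = a + (((b - a) * (int32_t)frac) >> STEP_BITS);

            return (int16_t)((quadrant & 2) ? -v : v);  /* second half is negative */
        }

    The table initialisation leans on libm once at startup; the per-call path is adds, shifts and two table reads.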

  • @markrosenthal9108 (16 days ago)

    Yes, decimal arithmetic is essential for exact arithmetic. But... instead of the extra code for scaled integers or decimal data types in custom or provided libraries, you can just do this:

        01 WS-ORDER-TOTAL PIC 9(4)V99 VALUE 40.50.
        ADD 1.50 TO WS-ORDER-TOTAL

    Still used in critical systems today, and introduced in 1960. So understandable that even an auditor can check it. :-)

  • @LowByteProductions (16 days ago)

    Awesome! How do I implement digital signal processing on top of this 😁

  • @markrosenthal9108 (16 days ago)

    @LowByteProductions Assuming that floating point is "good enough" for signal processing:

        01 WS-FREQUENCY-AVERAGE-CHANGE VALUE 40.50 COMP-2.

    🙂

  • @Blubb3rbub (23 days ago)

    Would it be worth it to make those functions and macros branch free? Or does the compiler do it already? Is it even possible? Or not worth it?

  • @LowByteProductions (21 days ago)

    It certainly could be! It depends on the intensity of the workload, and the environment you're running on. Many micros don't have sophisticated branch prediction, so you wouldn't expect to lose too much perf to speculative execution. And of course the branching code is not in vastly different regions, and would likely be in cache either way - so no expected latency there. But the key is always to measure! Intuition is often wrong about these kinds of things.

  • @argbatargbat8645 (14 days ago)

    What about a video on tips/tricks on how to avoid the floating point issues when doing calculations?

  • @LowByteProductions (14 days ago)

    Besides the obvious ones (being careful with things like division by zero, passing invalid out-of-range values to functions like asinf, etc), I'd say the main thing is being aware of, and careful with, the fact that the gap between representable values changes as you move through the range of floating point numbers. For very large numbers, there are relatively few representable values between each integer. Adding a very tiny number to a very large one can result in no change at all. Edit: just noticed you asked for a video. Maybe one day!
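
    A tiny demonstration of that effect with 32-bit floats: around 1e8 the gap between adjacent values is 8, so adding 1 changes nothing.

        #include <stdio.h>

        int main(void) {
            float big  = 100000000.0f;   /* adjacent floats here are 8 apart */
            float tiny = 1.0f;
            float sum  = big + tiny;     /* rounds straight back to 1e8      */

            printf("%s\n", sum == big ? "unchanged" : "changed");
            return 0;
        }

    This prints "unchanged".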

  • @davidjohnston4240 (19 days ago)

    I've implemented plenty of fixed point arithmetic in DSP data paths in wireless communication chips.

  • @LowByteProductions (17 days ago)

    I'd love to hear more! Was this on custom ASICs?

  • @davidjohnston4240 (17 days ago)

    @LowByteProductions Yes. Usually wireless modems for bluetooth and wifi, and arcana like HIPERLAN. The modem used that arithmetic for things like MLSE algorithms. Given a range of inputs from the DACs, you can compute the number of bits of precision needed to represent all the information to the end of the computation. Make the fixed point integer and fractional parts that big, and you can do the computation with no loss. That was in the past. I've moved on to cryptography, which mostly deals with finite field arithmetic so doesn't use fixed point. The implementations use integers (representing powers of polynomials in extension fields of GF(2)), but the security analysis uses huge floating point values (e.g. 4096 digits) in order to measure tiny biases in bit probabilities. Fixed point, floating point, GF, rationals or integers - use what the application is calling for.

  • @faust-cr3jk (16 days ago)

    When you use fixed point, usually your main objective is keeping your resolution as small as possible, so dedicating a large number of bits to the integer part seems wrong to me. What I usually do is dedicate one bit to the sign (if any), one bit to the integer part, and all remaining bits to the fractional part. To do so, you need to normalise all values first. Furthermore, I found that 16 bits for the fractional part is more than enough. This is why fixed point in FPGAs typically uses 18 bits.

  • @fresnik (19 days ago)

    Not that there's an error in the code, but at 1:05:00 it looks like you accidentally replaced the fp_ceil function, so the test case for fp_ceil for whole numbers is actually never calling fp_ceil(), just converting a float to fp and back again.

  • @LowByteProductions (17 days ago)

    🤦‍♂️

  • @skilz8098 (25 days ago)

    This is a really nice demonstration by example, and it does have great utility. However, there is one vital part of any mathematical or arithmetic library, especially one evaluated within the integer domain, and that is integer division with respect to its remainder, as opposed to just the division itself. No such library is complete without the ability to perform the modulus operation. Not all, but many languages use % to represent it. It would be nice to see a follow up video extending this library to include such a common operation. Even though the modulus operator itself is fairly elementary, its implementation is complex enough that it would almost warrant its own separate video.

    Why do I mention this? It's quite simple. If one wants to use this as an underlying math library and extend it into other domains (trigonometric functions such as sine, cosine and tangent; exponential functions such as e^n; logarithmic functions; or other number systems such as various vector spaces, particularly but not limited to the complex numbers), having the modulus operator already well defined between two operands is vital for implementing most of those other types. In simple terms, the modulus operator (%) is just as significant as operators like +, -, *, /, ^ and root (exp, rad). And that's just the arithmetic half; there is still the logical half of the operators. Other than that, great video!
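
    On the modulo point: for two values in the same Q format, the remainder can ride directly on the integer % operator, because both raw values share one scale factor. A minimal sketch (names assumed), with the same sign convention as C's truncated division:

        #include <stdint.h>

        typedef int32_t fp_t;   /* assumed Q16.16; both operands in the same format */

        /* (a*S) mod (b*S) == (a mod b) * S for a shared scale S, so the raw
           integer remainder is already the fixed-point remainder. */
        static fp_t fp_mod(fp_t a, fp_t b) {
            return a % b;
        }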

  • @_bxffour (25 days ago)

    🎉

  • @terohannula30 (17 days ago)

    Haven't watched the whole video yet, but at 43:30, shouldn't argument "a" be converted to the xl type first, and then shifted? Edit: ah good, it got fixed pretty soon in the video 😄

  • @johncochran8497 (23 days ago)

    The issues with floating point vs fixed point are quite simple. Floating point: why the hell are you looking at those digits, you ought to damn well know that format doesn't support that many significant digits. Fixed point: why the hell are you looking at those digits, you ought to damn well know that your data doesn't justify that many significant digits.

    To illustrate, the vast majority of numbers you manipulate on a computer are actually approximations of some other non-representable exact value. Fixed point suffers from what's called "false precision". I'll calculate the circumference of a circle with a diameter of 123, twice: once with a fixed point decimal format with 5 integer digits and 5 fractional digits, and again with a floating point format with 8 mantissa digits and an exponent from -49 to 49. So we have PI * 123. Let's see what happens:

        Fixed point:    123 * 3.14159   = 386.41557
        Floating point: 123 * 3.1415927 = 386.41590
        Actual value to 10 digits       = 386.4158964

    The thing to notice is that the fixed point value's last 2 digits are WRONG. They are wrong even though the multiplication occurred with no rounding or overflow. The reason for the error is, as I said earlier, that most numbers manipulated by computers are approximations of some other non-representable exact value. In this case, the approximation of pi only had 6 significant figures, so you can't expect more than 6 figures of the result to be correct. For the floating point case, the approximation of pi had 8 significant figures, and so its result is correct to 8 places.

    False precision is a definite problem with fixed point math, and it's a rather insidious one, since the actual mathematical operations are frequently done with no overflow or rounding. But you can't trust your results to any more digits than the smallest number of digits used in your inputs or any intermediate results. With floating point, the number of significant digits remains relatively constant.

  • @ashelkby (25 days ago)

    Actually, 10011100 is -100 in two's complement representation.

  • @LowByteProductions (25 days ago)

    Ah you're right, not sure what happened there

  • @misterkite (23 days ago)

    The quickest way I use to explain fixed point is instead of $4.20, you have 420 cents.. it's obvious those are the same even though 4.2 != 420

  • @LowByteProductions (23 days ago)

    Yes, base 10 fixed point is really intuitive!

  • @rolandzfolyfe8360 (22 days ago)

    1:27:20 been there, done that

  • @doodocina (19 days ago)

    1:21:26 the compiler does this automatically, lol...

  • @notnullnotvoid (21 days ago)

    Surprisingly, integer multiplication and division are generally slower than floating point multiplication and division on modern x86/x64 CPUs! I have no idea why as I'm not a hardware guy, I just spend too much time reading instruction tables.

  • @ethandavis7310 (18 days ago)

    Fewer bits to multiply in a float: the mantissa is what gets multiplied, while the exponents are just added.

  • @LowByteProductions (17 days ago)

    Not sure I'd be able to say why either, but it could have something to do with there being quite a lot more floating point arithmetic stages in the CPU pipeline of a modern processor than there are integer ops 🤔

  • @Optimus6128 (14 days ago)

    Casey Muratori was recently asked about this in a Q&A. Someone asked why, even if the bit widths are the same between, let's say, a 32-bit integer and a float, there are differences in cycles. Casey replied that he is not a hardware expert so he doesn't know for sure, but he said it could be that different CPUs dedicate more or less silicon to the integer or floating point side, like it's a business decision where they decide what to cut and where to dedicate more circuitry.

  • @user-vi3it8sy2d (25 days ago)

    😀

  • @weicco (25 days ago)

    Old trick. Don't use decimals; multiply the value so you get rid of the decimals. Works for weight and money calculations at least.

  • @LowByteProductions (25 days ago)

    Absolutely - way older than floating point and much more deterministic

  • @weicco (25 days ago)

    Of course there is a downside to it. In bookkeeping and banking software we want to use at least 6 decimals, so a 32-bit number gets small quite fast. Luckily almost everyone has 64-bit machines these days, so this is not an issue anymore.

  • @Girugi (25 days ago)

    That trick only works as long as you don't do any real math, so it's not really a solid solution for anything but very simple stuff. 0.001 * 0.5 = 0.0005, but (10*500)/10000 = 5 != 0.0005

  • @LowByteProductions (25 days ago)

    All the complex systems built on DSPs or FPGAs would beg to differ (radar, rockets, phased arrays, etc)

  • @Girugi (25 days ago)

    @LowByteProductions well, true, you just need to apply the division by the decimal offset of one of the factors after every multiplication. But if you divide by a value like this, you then have to multiply by the decimal offset to keep it in sync... Not sure if that would hold up in all cases, and it would be very easy to run out of bits.

  • @MrMadzina (25 days ago)

    For fp_abs, why not just return abs of a? return abs(a); seems to work fine. In C#:

        public FixedPoint Abs() { return new FixedPoint(Math.Abs(Value)); }

  • @LowByteProductions (25 days ago)

    Nice! The reason I didn't use it in the video is that this implementation allows the user to provide the integer type. The C library absolute value functions are type dependent, so using them would go against that aspect.
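
    A type-agnostic alternative that sidesteps abs/labs/llabs entirely, at the cost of being a macro (hypothetical name, and with the usual caveat that negating the most negative value overflows, just as abs() would):

        /* Works for whichever signed integer type the user plugs in for fp_t. */
        #define FP_ABS(a) ((a) < 0 ? -(a) : (a))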

  • @MrMadzina (25 days ago)

    Also in C# Math.Floor(-18.2f) returns -19

  • @StarryNightSky587 (25 days ago)

    *IEEE 754 entered the chat*

  • @redoktopus3047 (18 days ago)

    One day we'll get hardware support for posits and then all of this will be solved

  • @LowByteProductions (17 days ago)

    I'm definitely no expert on posits, but from a hardware point of view, I think they'd be at least as complex as floats. I could be totally off base though

  • @redoktopus3047 (17 days ago)

    @LowByteProductions They would be complicated for sure, but I think they'd be slightly simpler than floats. Their use in programming is where I think their potential is. Right now they can only be simulated in software, so they are slow. Floats are something I hope we move past in the next 10 years.

  • @Matt2010 (18 days ago)

    For FFT, floating point is way better; be prepared to wait a lot longer with fixed point.

  • @LowByteProductions (17 days ago)

    What about the FFT algorithm would make floating point intrinsically faster?

  • @flameofthephoenix8395 (11 days ago)

    0:05 Sometimes? What is that supposed to mean? Fixed point is always better 100% of the time.

  • @Antagon666 (22 days ago)

    True heroes use fractions and/or binary coded decimal 😅

  • @LowByteProductions (21 days ago)

    🫡

  • @sjswitzer1 (18 days ago)

    Slide rules

  • @LowByteProductions (18 days ago)

    I love playgrounds as much as the next guy, but what does it have to do with fixed point math?

  • @sjswitzer1 (18 days ago)

    @LowByteProductions With slide rules you maintain the decimal point implicitly (in your mind), much as the binary point is implicit in fixed-point math.

  • @LowByteProductions (18 days ago)

    I know, I was just messing with you 😄

  • @eadwacer524 (24 days ago)

    From the first time I read about fixed point for the 286 in an old DOS book, I've always liked it more than floats. I think it's going to make a comeback after decades of FPUs!

  • @LowByteProductions (24 days ago)

    In some spheres, it never went away!