Low Byte Productions

Low Byte Productions is a YouTube channel that goes deep into the heart of low level programming - messing with ones and zeros.

patreon: www.patreon.com/lowleveljavascript
mailing list: tinyletter.com/lowleveljavascript
twitter: @lowleveljs
reddit: reddit.com/r/lowleveljavascript
discord: discord.gg/FPWaVgk

Turning Pixels Into Waves

Comments

  • @aymaneeljahrani2280 (14 minutes ago)

    I'm keen on your videos!

  • @c2vi_dev (9 hours ago)

    Plz continue this series!!! It was an excellent learning resource for getting into kernel inner workings so far, and I think for me most of the things clicked.

  • @luczeiler2317 (2 days ago)

    Awesome. Subscription well earned!

  • @argbatargbat8645 (2 days ago)

    What about a video on tips/tricks on how to avoid the floating point issues when doing calculations?

  • @LowByteProductions (2 days ago)

    Besides the obvious ones (being careful with things like division by zero, passing invalid out-of-range values to functions like asinf, etc.), I'd say the main thing is being aware of, and careful with, the fact that the smallest possible step changes as you move through the range of floating point numbers. For very large numbers, there are relatively few representable values between each integer. Adding a very tiny number to a very large one can result in no change at all. Edit: just noticed you asked for a video. Maybe one day!
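
    A minimal C demonstration of that spacing effect (standard IEEE 754 single precision assumed):

        #include <stdio.h>

        int main(void) {
            /* float has a 24-bit significand, so above 2^24 the gap between
               adjacent representable values is larger than 1.0f */
            float big = 100000000.0f;   /* 1e8 */
            float sum = big + 1.0f;     /* 1.0f is smaller than the gap here */
            printf("%d\n", sum == big); /* prints 1: adding 1.0f changed nothing */
            return 0;
        }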

  • @manuelsuarez7521 (2 days ago)

    Amazing! Thanks

  • @dineshram2156 (3 days ago)

    Please share the source code for programming using the inbuilt ST-LINK.

  • @LowByteProductions (3 days ago)

    It's in the repo

  • @faust-cr3jk (4 days ago)

    When you use fixed point, usually your main objective is keeping your resolution as small as possible. Therefore dedicating a large number of bits to the integer part seems wrong to me. What I usually do is dedicate one bit to the sign (if any), one bit to the integer part, and all remaining bits to the fractional part. To do so, you need to normalise all values first. Furthermore, I've found that 16 bits for the fractional part is more than enough. This is why fixed point in FPGAs typically uses 18 bits.
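
    As a sketch of that normalized layout, here is a Q15-style multiply in C (one sign bit and 15 fractional bits, so values live in [-1, 1); close to, though not exactly, the split described above):

        #include <stdint.h>

        typedef int16_t q15_t;  /* 1 sign bit, 15 fractional bits: [-1, 1) */

        /* Multiply two normalized Q15 values: widen to 32 bits, then shift
           the doubled fractional part (15 + 15 = 30 bits) back down to 15. */
        static inline q15_t q15_mul(q15_t a, q15_t b) {
            return (q15_t)(((int32_t)a * (int32_t)b) >> 15);
        }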

  • @markrosenthal9108 (4 days ago)

    Yes, decimal arithmetic is essential for exact arithmetic. But... instead of the extra code for scaled integers or decimal data types in custom or provided libraries, you can just do this:

        01 WS-ORDER-TOTAL PIC 9(4)V99 VALUE 40.50.
        ADD 1.50 TO WS-ORDER-TOTAL

    Still used in critical systems today, and introduced in 1960. So understandable that even an auditor can check it. :-)

  • @LowByteProductions (4 days ago)

    Awesome! How do I implement digital signal processing on top of this 😁

  • @markrosenthal9108 (4 days ago)

    @@LowByteProductions Assuming that floating point is "good enough" for signal processing:

        01 WS-FREQUENCY-AVERAGE-CHANGE VALUE 40.50 COMP-2.

    🙂

  • @terohannula30 (5 days ago)

    Haven't watched the whole video yet, but at 43:30, shouldn't argument "a" be converted to the xl type first, and then shifted? Edit: ah good, it got fixed pretty soon in the video 😄
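
    For context, the pattern under discussion looks roughly like this (a sketch with hypothetical fp/xl typedefs in the spirit of the video's, not its exact code):

        #include <stdint.h>

        typedef int32_t fp;  /* fixed point value, e.g. Q16.16 */
        typedef int64_t xl;  /* wider intermediate type */

        #define FRAC_BITS 16

        /* Convert to the wide type BEFORE multiplying and shifting,
           so the high bits of the intermediate product aren't lost. */
        static inline fp fp_mul(fp a, fp b) {
            return (fp)(((xl)a * (xl)b) >> FRAC_BITS);
        }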

  • @svenvandevelde1 (5 days ago)

    You're either Dutch or Belgian :). I think Dutch. Nederlands?

  • @LowByteProductions (5 days ago)

    No, I'm British. But I've lived in the Netherlands for a very long time - maybe you can hear that in my accent 😉

  • @Matt2010 (6 days ago)

    For FFT, floating point is way better; be prepared to wait a longer time with fixed point.

  • @LowByteProductions (5 days ago)

    What about the FFT algorithm would make floating point intrinsically faster?

  • @0x1EGEN (6 days ago)

    Personally I love how easy it is to do fixed point maths using integers. Floats are a complicated format that either needs a lot of code to emulate in software or a lot of silicon to do in hardware. But for fixed point, all you need is an ALU :)

  • @j.r.8176 (6 days ago)

    Instantly subscribed!

  • @redoktopus3047 (6 days ago)

    One day we'll get hardware support for posits and then all of this will be solved

  • @LowByteProductions (5 days ago)

    I'm definitely no expert on posits, but from a hardware point of view, I think they'd be at least as complex as floats. I could be totally off base though

  • @redoktopus3047 (5 days ago)

    @@LowByteProductions They would be complicated for sure, but I think they'd be slightly simpler than floats. Their use for programming is where I think their potential is, though. Right now they can only be simulated in software, so they are slow. Floats are something I hope we move past in the next 10 years.

  • @sjswitzer1 (6 days ago)

    Slide rules

  • @LowByteProductions (6 days ago)

    I love playgrounds as much as the next guy, but what does it have to do with fixed point math?

  • @sjswitzer1 (6 days ago)

    @@LowByteProductions With slide rules you maintain the decimal point implicitly (in your mind), much as the binary point is implicit in fixed-point math.

  • @LowByteProductions (6 days ago)

    I know, I was just messing with you 😄

  • @Burgo361 (6 days ago)

    This was really interesting. I might actually try implementing it myself for a bit of fun.

  • @davidjohnston4240 (7 days ago)

    I've implemented plenty of fixed point arithmetic in DSP data paths in wireless communication chips.

  • @LowByteProductions (5 days ago)

    I'd love to hear more! Was this on custom ASICs?

  • @davidjohnston4240 (5 days ago)

    @@LowByteProductions Yes. Usually wireless modems for Bluetooth and WiFi, and arcana like HIPERLAN. The modem used fixed point arithmetic for things like MLSE algorithms. Given a range of inputs from the DACs, you can compute the number of bits of precision needed to represent all the information to the end of the computation. Make the fixed point integer and fractional parts that big and you can do the computation with no loss. That was in the past. I've moved on to cryptography, which mostly deals with finite field arithmetic, so it doesn't use fixed point. The implementations use integers (representing powers of polynomials in extension fields of GF(2)), but the security analysis uses huge floating point values (e.g. 4096 digits) in order to measure tiny biases in bit probabilities. Fixed point, floating point, GF, rationals or integers - use what the application calls for.

  • @BigA1 (7 days ago)

    Feel like I want to say a lot - but will just say 'Well Done'

  • @fresnik (7 days ago)

    Not that there's an error in the code, but at 1:05:00 it looks like you accidentally replaced the fp_ceil function, so the test case for fp_ceil for whole numbers is actually never calling fp_ceil(), just converting a float to fp and back again.

  • @LowByteProductions (5 days ago)

    🤦‍♂️

  • @doodocina (7 days ago)

    1:21:26 the compiler does this automatically, lol...

  • @edgeeffect (7 days ago)

    This video is so good... taking high level concepts that we often think of as a single, almost atomic, operation and breaking them down to the next lower level. I like to play with assembly language for very similar reasons.

  • @LowByteProductions (5 days ago)

    Exactly!

  • @ligius3 (8 days ago)

    You can do sin/cos with your library, but you already know this - just being a bit pedantic. The Taylor expansion works but it's quite compute-heavy. You can do it without division by using some precomputed polynomials. And there's the preferred way, which you will probably present next. Hopefully it's not lookup tables :)

  • @LowByteProductions (5 days ago)

    Yep, Taylor works well in a lot of cases, though because of the factorial divisors, you end up having to deal with either really big or really small numbers. In a 32 bit integer, to get at least 4 terms, you need to dedicate 19 fractional bits. That's fine in many cases, but if your bit division is more middle of the road, a 1KiB quarter wave lookup table with linear interpolation can get you better results with less computation. The method I'm covering next is CORDIC, which is less used in the micro world these days because memory and multiplies are relatively cheap and available, but it works on just adds and shifts and has great precision.
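
    A sketch of the quarter-wave table approach (assumed conventions: a 32-bit "binary angle" where the full uint32_t range maps to [0, 2π), Q16.16 output, a 257-entry table of about 1KiB; hypothetical names, not the video's actual code):

        #include <stdint.h>
        #include <math.h>

        #define LUT_SIZE 256  /* quarter wave; (256 + 1) * 4 bytes ≈ 1KiB */
        static uint32_t sin_lut[LUT_SIZE + 1];

        /* Fill the table once at startup: sin over [0, pi/2], scaled to Q16.16. */
        void sin_lut_init(void) {
            for (int i = 0; i <= LUT_SIZE; i++)
                sin_lut[i] = (uint32_t)lround(sin(i * 1.5707963267948966 / LUT_SIZE) * 65536.0);
        }

        /* angle: the full uint32_t range maps to [0, 2*pi) */
        int32_t fp_sin(uint32_t angle) {
            uint32_t quadrant = angle >> 30;        /* top 2 bits pick the quadrant */
            uint32_t pos  = (angle >> 22) & 0xFF;   /* next 8 bits index the table */
            uint32_t frac = (angle >> 6) & 0xFFFF;  /* 16 bits for interpolation */

            if (quadrant & 1) {                     /* quadrants 1 and 3 run backwards */
                pos  = LUT_SIZE - 1 - pos;
                frac = 0xFFFF - frac;
            }

            uint32_t a = sin_lut[pos];
            uint32_t b = sin_lut[pos + 1];          /* interpolate between neighbours */
            int32_t  s = (int32_t)(a + (((uint64_t)(b - a) * frac) >> 16));

            return (quadrant & 2) ? -s : s;         /* second half of the wave is negative */
        }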

  • @8BitRetroJournal (9 days ago)

    So I literally just spent the past two weeks implementing base-10 fixed-point math in an emulator I wrote. The emulator was something I created during the pandemic from an old interpreter I had written in the late 80s on an early-80s 32-bit machine. I guess emulator is a bit misleading, as this is a ROM emulator (i.e. it doesn't emulate the hardware of the machine, but rather the software). You can find the integer-only version by searching for ZXSimulator on the web (the ROM emulator is for the ZX81, itself implemented on a 32-bit 80s system called the Sinclair QL - yes, nesting of old computers, but hey, it's a hobby).

    The original interpreter was integer-only to keep it fast (it was meant as a scripting language). Also, the 80s 32-bit machine it was implemented on had a limited C compiler with no floating point support (i.e. Small C). It does provide an add-on floating point library, but it's a bit wonky, as it uses function calls and a 3D 16-bit int array data structure (so 48 bits) for holding the floating point values... so it's going to be very slow. The scale factor is 100, and you multiply and divide by it, same as you did above with left and right shifts - it's just a bit slower since those are math and not bit operations.

    Btw, since I don't have the XL type (32 bits is all), there is a way to fix the overflow issue for multiplication (I haven't figured out what to do about division though). You can break up the whole number and fraction parts (for me, using / and %) and then this is your multiplication formula: a*b1 + ((a2*b2)/100) + (a2*b1); where a represents the entire number (upscaled by 100), b1 represents only the whole number part of b (gotten by / 100), and a2 & b2 represent the fraction parts (gotten by % 100). This will increase the range of numbers (i.e. size) you can multiply (although at a cost of speed).
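
    For reference, here is one correct way to write a split multiply at scale 100 using only 32-bit arithmetic (a sketch with hypothetical names; truncating, like C's integer division):

        #include <stdint.h>

        /* Values are stored upscaled by 100: 4.20 is stored as 420. */
        typedef int32_t fix100;

        /* Split each operand into whole and fractional parts so no single
           product needs more than 32 bits (until the true result itself
           approaches the type's limits). */
        fix100 fix100_mul(fix100 a, fix100 b) {
            fix100 a1 = a / 100, a2 = a % 100;  /* whole / fraction of a */
            fix100 b1 = b / 100, b2 = b % 100;  /* whole / fraction of b */

            /* (a1 + a2/100) * (b1 + b2/100), rescaled by 100: */
            return (a1 * b1) * 100   /* whole * whole */
                 + a1 * b2           /* whole * fraction */
                 + a2 * b1           /* fraction * whole */
                 + (a2 * b2) / 100;  /* fraction * fraction, scaled back down */
        }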

  • @notnullnotvoid (9 days ago)

    Surprisingly, integer multiplication and division are generally slower than floating point multiplication and division on modern x86/x64 CPUs! I have no idea why, as I'm not a hardware guy; I just spend too much time reading instruction tables.

  • @ethandavis7310 (6 days ago)

    Fewer bits to multiply in a float; the rest are just added.

  • @LowByteProductions (5 days ago)

    Not sure I'd be able to say why either, but it could have something to do with there being quite a lot more floating point arithmetic stages in the CPU pipeline of a modern processor than there are integer ops 🤔

  • @Optimus6128 (2 days ago)

    Casey Muratori was recently asked about this in a Q&A. Someone asked why, even if the bit count is the same between, say, a 32-bit integer and a float, there are differences in cycles. Casey replied that he is not a hardware expert so he doesn't know for sure, but he said it could be that different CPUs dedicate more or less wafer space to the integer or floating point units - like a business decision where they decide what to cut and where to dedicate more circuitry.

  • @aymaneeljahrani2280 (9 days ago)

    My fingers are aching to code haha. Keep it up!

  • @sandiguha (9 days ago)

    Can you please tell me what extension you are using for C? Yours is showing code hints; mine doesn't. I am using the standard C/C++ extension from MS.

  • @LowByteProductions (5 days ago)

    I'm using that extension, but you usually need to set up a configuration file for the project that points it to the compiler and compiler type. If your code depends on defines, then you can also specify those in that file to get proper completion.

  • @rolandzfolyfe8360 (10 days ago)

    1:27:20 been there, done that

  • @JobvanderZwan (10 days ago)

    You know what's also a surprisingly useful algorithm when dealing with fractions if all you have is integers? Bresenham's line algorithm! The whole "drawing a line" thing is a bit of a diversion from the true genius kernel of that algorithm: how to do error-free repeated addition of fractions, and only trigger an action every time you "cross" a whole-number boundary (in the canonical case: drawing a pixel). And all you need is three integers (an accumulator, a numerator and a denominator), integer addition, and an if-statement. Even the lowest-power hardware can do that!
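
    A minimal sketch of that kernel in C (just the accumulator trick, not the full line algorithm):

        #include <stdio.h>

        /* Repeatedly add num/den using only integers, and fire an action
           each time the running total crosses a whole-number boundary. */
        int main(void) {
            int num = 3, den = 8;  /* add 3/8 per step */
            int acc = 0;           /* error accumulator, always in [0, den) */

            for (int step = 1; step <= 16; step++) {
                acc += num;
                if (acc >= den) {  /* crossed a whole-number boundary */
                    acc -= den;
                    printf("step %2d: crossed a boundary\n", step);
                }
            }
            return 0;
        }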

  • @LowByteProductions (9 days ago)

    Ah yes, I've come across it before when building procedural generation for a roguelike!

  • @ArneChristianRosenfeldt (7 days ago)

    I have a hard time accepting that Bresenham is not just calculating with fractions as we learned in school. Probably because we did not learn to manually calculate with floats.

  • @Optimus6128 (2 days ago)

    Also, nowadays you can easily do a non-Bresenham style with fixed point adds that performs as well, if not slightly better. I was suspicious of those conditional jumps in the Bresenham on modern CPUs relying on branch prediction, and my fixed point implementation was easier to think around, so I used that instead. I would like to do a Bresenham again though, to compare performance between the two at some point.

  • @ArneChristianRosenfeldt (2 days ago)

    @@Optimus6128 I am stuck in the past. GBA or Jaguar. I don't get why the Jaguar hardware uses fixed point for lines, while the later PS1 seems to use Bresenham for edges.

  • @Optimus6128 (2 days ago)

    @@ArneChristianRosenfeldt Bresenham could be good for some old hardware. Then there is the other thing everyone calls DDA, but there are good and bad implementations that all get called DDA, so I don't know. What I did, even on ARM hardware at the time (the GP32), was something I think people called DDA, but my version would do one division at the beginning of the line, which I precalculated with a reciprocal fixed point multiply. The difference was that later, as I traversed each pixel, I was just doing an ADD and a SHIFT and nothing else. So the per-pixel traversal seemed to do less than Bresenham, just not beforehand.

  • @Antagon666 (10 days ago)

    True heroes use fractions and/or binary coded decimal 😅

  • @LowByteProductions (9 days ago)

    🫡

  • @Casilios (10 days ago)

    What timing: yesterday I decided to look into fixed point numbers because I was having some problems with my floating point rasterizer. This video is immensely helpful for getting a better understanding of fixed point numbers. I'm looking forward to learning about trig functions for this stuff.

  • @johncochran8497 (11 days ago)

    The issue with floating point vs fixed point is quite simple. Floating point: why the hell are you looking at those digits? You ought to damn well know that format doesn't support that many significant digits. Fixed point: why the hell are you looking at those digits? You ought to damn well know that your data doesn't justify that many significant digits.

    To illustrate: the vast majority of numbers you manipulate on a computer are actually approximations of some other non-representable exact value, and fixed point suffers from what's called "false precision". I'll calculate the circumference of a circle with a diameter of 123, twice. Once with a fixed point decimal format with 5 integer digits and 5 fractional digits; again with a floating point format with 8 mantissa digits and an exponent from -49 to 49. So we have PI * 123. Let's see what happens:

        Fixed point:    123 * 3.14159   = 386.41557
        Floating point: 123 * 3.1415927 = 386.41590
        Actual value (10 digits):         386.4158964

    The thing to notice is that the fixed point value's last 2 digits are WRONG, even though the multiplication occurred with no rounding or overflow. The reason for the error is, as I said earlier, that most numbers manipulated by computers are approximations of some other non-representable exact value. In this case, the approximation for pi only had 6 significant figures, and as such you can't expect more than 6 figures of the result to be correct. For the floating point case, the approximation for pi had 8 significant figures, and as such its result is correct to 8 places.

    False precision is a definite problem with fixed point math. And it's a rather insidious problem, since the actual mathematical operations are frequently done with no overflow or rounding. You can't trust your results for any more digits than the smallest number of significant digits in your inputs or in any intermediate results. But with floating point, the number of significant digits remains relatively constant.

  • @DMWatchesYoutube (11 days ago)

    Any thoughts on posits?

  • @Blubb3rbub (11 days ago)

    Would it be worth it to make those functions and macros branch-free? Or does the compiler do it already? Is it even possible? Or not worth it?

  • @LowByteProductions (9 days ago)

    It certainly could be! It depends on the intensity of the workload, and the environment you're running on. Many micros don't have sophisticated branch prediction, so you wouldn't expect to lose too much perf to speculative execution. And of course the branching code is not in vastly different regions, and would likely be in cache either way - so no expected latency there. But the key is always to measure! Intuition is often wrong about these kinds of things.

  • @kilwo (11 days ago)

    Also, rounding for positive numbers is just fp_floor(a + half), and for negative numbers fp_floor(a - half).

  • @kilwo (11 days ago)

    In fp_ceil, why use the fp_frac function? Wouldn't it be quicker to just AND with the frac mask and check whether the value is greater than 0? Given that we don't actually use the value, the presence of any set bit would be enough to know it has a fractional part.
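
    Sketched in C (hypothetical Q16.16 names, not the video's exact code), that mask version would look something like:

        #include <stdint.h>

        typedef int32_t fp;          /* Q16.16 */
        #define FRAC_MASK 0xFFFF     /* low 16 bits hold the fractional part */
        #define FP_ONE    (1 << 16)  /* 1.0 in Q16.16 */

        /* If any fractional bit is set, clear the fraction and step up to the
           next whole value. In two's complement, masking off the fraction
           rounds toward -infinity, so the +1 is correct for negatives too. */
        static inline fp fp_ceil_mask(fp a) {
            return (a & FRAC_MASK) ? (fp)((a & ~FRAC_MASK) + FP_ONE) : a;
        }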

  • @misterkite (11 days ago)

    The quickest way I use to explain fixed point is: instead of $4.20, you have 420 cents. It's obvious those are the same, even though 4.2 != 420.

  • @LowByteProductions (11 days ago)

    Yes, base 10 fixed point is really intuitive!

  • @JamesPerkins (11 days ago)

    One nice thing is that fixed point arithmetic gives you exactly the same result on every computer architecture, but floating point often does not... because floating point implementations make different choices with the least significant bits of the number representation - not so much during simple arithmetic operations, but definitely for reciprocal, trig, exponent and log. Sometimes close is not enough and exactly identical results are more useful. Also, sometimes the generality of floating point requires more CPU cycles than the equivalent fixed point operations.

  • @ArneChristianRosenfeldt (7 days ago)

    This is not true anymore, because all modern CPUs expect you to use 64-bit float vectors following IEEE 754. Only legacy code on the vintage 8087 uses 80 bits. Even MAC is defined to all bits since 2001 or so. And why would transcendental functions on fixed point not be implemented using Taylor series?

  • @JamesPerkins (7 days ago)

    @ArneChristianRosenfeldt Just saying, the Ingenic X2100 (MIPS), ARM Cortex-A53 and Intel Xeon give slightly different floating point behavior for 32-bit floating point. I do SIMD computer vision algorithm acceleration, and those floating point units do not compute exactly the same results under all circumstances.

  • @ArneChristianRosenfeldt (7 days ago)

    @@JamesPerkins And this is not due to the compiler? It shouldn't reorder floating point instructions using algebra, though. Java used to save floats to memory on the 8087 to force compliant rounding. If that doesn't achieve the result, why is there even this option? Isn't it generally accepted that source code needs to compile bit-precise to detect hacking attempts, and calculations need to run bit-precise to allow game replays in cross-platform games (and client-side calculations which match the server side, unless someone cheated)? Do those processors claim to do IEEE floats? The spec on rounding is already so long. It not only considers reproduction between CPUs, but even best possible results if someone stores intermediate results as decimals.

  • @JamesPerkins (7 days ago)

    @ArneChristianRosenfeldt These are all IEEE 754 32-bit floating point implementations. There are two ISAs I write to: the scalar floating point register ISA (traditional) and the vector SIMD. There are small differences in the least significant bits on certain operations. For the scalars, there are also some optional instructions implemented in more exact/slower and less exact/faster forms. Not all rounding modes are available on all architectures (especially the embedded ones - replicating everything Intel does is a huge number of additional gates). As long as I stick to the most exact (and slower) scalar instructions and common rounding methods, I'm usually within a least significant bit or two of exactly the same results on all architectures.

    When you go into the SIMD ISAs (SSE2, NEON, MSA), floating point acts generally similar, but the integer-to-float and back conversions, rounding mode limitations, and incomplete (but faster, fewer-gate) implementations creep in and start to make the results diverge more significantly. Which brings me back to my point: if you write code using fixed point arithmetic and standard integer operations, it's quite easy to write code which creates bit-for-bit identical results down to the smallest bit, as the integer operations are more consistently defined across architectures. But it's also a lot more work, requires more careful optimization, and some operations are significantly slower. SSE is scary fast (clock for clock). Intel must throw a huge number of gates at that general floating point hardware that MIPS and ARM can't afford. It's quite a luxury.

  • @ArneChristianRosenfeldt (6 days ago)

    @@JamesPerkins Oh, that long video about rounding. Ah yeah, the argument was about a final conversion to decimal, but the rounding itself had to happen on every float-to-float division(?). Ah, no it does not. I guess I have to read up on that. I thought that floating point units round ties to even in the mantissa. That may be difficult for division, because I think one algorithm goes from significant bits down to less significant ones and then back up. But still, we only need one more bit for rounding. For integers we just truncate - it would be nice to have this mode for all float units. I thought that floats give up normalization for small numbers to avoid having to do too many special operations.

  • @edgeeffect (12 days ago)

    Nice that you did this in 32-bit... I've been looking for a "nice" 32-bit fixed-point implementation for a long time. I have this idea of building a synthesizer on a network of PIC32s... and floating point? Ain't nobody got time for that! I had in mind to do this in Zig, because then I could use `comptime` to turn my human-readable constants into my chosen fixed-point format. But this is entirely an armchair theoretical project at the moment.

  • @LowByteProductions (12 days ago)

    Do it! It sounds like an awesome project. (And I love Zig by the way - I have to find a way to get it into the channel soon)

  • @edgeeffect (7 days ago)

    @@LowByteProductions I'm thinking, though, that in the end I may have to stick to C++ just so that I can have operator overloading... to be able to write my expressions in a "nicer" format.

  • @personguy731 (12 days ago)

    I think your implementation of floor is wrong for negative numbers: it always rounds towards 0, whereas floor should round towards -inf. So, for example, floor(-18.2) should be -19, and not -18 as you corrected it to. This is also what happens in Python, and what is shown on the Wikipedia page for IEEE 754.

  • @LowByteProductions (12 days ago)

    I looked it up, and you're right. I actually implemented truncate - which ironically is the thing I said I would implement, before deciding to call it floor instead (thinking they were interchangeable). Thanks for setting me straight, and proving that rounding is always more complex than you think :D
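
    For reference, a sketch of the difference (hypothetical Q16.16 names): in two's complement, simply clearing the fractional bits already rounds toward -infinity, so floor is the cheap one and truncate is the one needing sign handling:

        #include <stdint.h>

        typedef int32_t fp;  /* Q16.16 */
        #define FRAC_BITS 16
        #define FRAC_MASK ((fp)((1 << FRAC_BITS) - 1))

        /* Floor: round toward -infinity. Clearing the fraction bits of a
           two's complement value does this directly: floor(-18.2) == -19. */
        static inline fp fp_floor(fp a) {
            return a & ~FRAC_MASK;
        }

        /* Truncate: round toward zero: trunc(-18.2) == -18. */
        static inline fp fp_trunc(fp a) {
            return (a < 0) ? -((-a) & ~FRAC_MASK) : (a & ~FRAC_MASK);
        }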

  • @Omnicypher001 (10 days ago)

    @@LowByteProductions You don't need fixed point; you can just do all the math with integers and print a "." wherever you want when you render the number on the screen.

  • @LowByteProductions (10 days ago)

    @Omnicypher001 You're describing base-10 fixed point. This video talks about base-2 (binary) fixed point, which makes better use of the representation space, and is able to perform operations cheaply by taking advantage of the way computers work.

  • @warvinn (10 days ago)

    @@Omnicypher001 You'd think that would work, but it falls apart as soon as you encounter e.g. multiplication. Say you have the number 1000 that you print as 1.000; when you do 1.000 * 1.000 you get 1000 * 1000 = 1000000, which you would print as 1000.000. You could use a tuple to keep track of where the period needs to go, but at that point you're probably better off doing it like the video instead.

  • @amalirfan (9 days ago)

    @@warvinn Yeah, it is hard to abstract. It still works for smaller-scale uses, for example getting percentages: you could do (x * p) / 100. You have to do the conversions manually, but it is a nice option.

  • @rogo7330 (12 days ago)

    struct timespec is a great example of a fixed-point number. You have tv_sec, which is just the signed integer type time_t, and tv_nsec, which is a signed long whose only purpose is to represent values from 0 to a billion minus 1 (999,999,999) inclusive. With some helper functions you can do very robust and easy math if you treat tv_nsec as an accumulator that adds 1 to tv_sec when it overflows and subtracts 1 from tv_sec when it underflows. Easy, quick, no floats needed. Not all systems even have that kind of precision for timestamps, so nsec precision is good enough.
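
    A minimal helper in that spirit (a sketch; assumes both inputs are already normalized, with tv_nsec in [0, 999999999]):

        #include <time.h>

        #define NSEC_PER_SEC 1000000000L

        /* Add two timespecs, carrying any tv_nsec overflow into tv_sec. */
        struct timespec timespec_add(struct timespec a, struct timespec b) {
            struct timespec r;
            r.tv_sec  = a.tv_sec + b.tv_sec;
            r.tv_nsec = a.tv_nsec + b.tv_nsec;
            if (r.tv_nsec >= NSEC_PER_SEC) {  /* carry into the seconds field */
                r.tv_nsec -= NSEC_PER_SEC;
                r.tv_sec  += 1;
            }
            return r;
        }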

  • @eadwacer524 (12 days ago)

    From the first time I read about fixed point for the 286 in an old DOS book, I've liked it more than floats. I think it's going to make a comeback after decades of the FPU!

  • @LowByteProductions (12 days ago)

    In some spheres, it never went away!

  • @Gurpreegill1962 (12 days ago)

    Can you please link White Quark's USB Twitch stream?

  • @typedeaf (13 days ago)

    All your content is top rate. Love the low level stuff that we don't need to know, but can't sleep without knowing.

  • @user-vi3it8sy2d (13 days ago)

    😀

  • @skilz8098 (13 days ago)

    This is a really nice demonstration by example, and it has great utility. However, there is one vital part of any arithmetic library, especially one evaluated within the integer domain, and that is integer division with respect to its remainder, as opposed to just the division itself. No such library is complete without the ability to perform the modulus operation. Many (though not all) languages use % to represent it. It would be nice to see a follow-up video extending this library to include it. Even though the modulus operator is fairly elementary, its implementation is complex enough that it would almost warrant its own video.

    Why do I mention this? It's quite simple. If one wants to use this as an underlying math library and extend it into other domains - trigonometric functions such as sine, cosine and tangent, exponential functions such as e^n, logarithmic functions, or other number systems such as various vector spaces, particularly (but not limited to) the complex numbers - then having a well-defined, working modulus operator between two operands is vital for implementing most other complex types. In simple terms, the modulus operator (%) is just as significant as operators such as +, -, *, / and ^/root (exp, rad). And this is just the arithmetic half; there is still the logical half of the operators. Other than that, great video!
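
    For what it's worth, modulus is one of the cheap ones in this representation. A sketch (hypothetical Q16.16 names): both operands carry the same scale factor S, so the scale cancels and C's integer % already yields a correctly scaled remainder, since (A*S) mod (B*S) == (A mod B) * S:

        #include <stdint.h>

        typedef int32_t fp;  /* Q16.16 */

        /* Fixed point modulus: plain integer remainder, truncating toward
           zero for negative operands exactly as C's % does. */
        static inline fp fp_mod(fp a, fp b) {
            return a % b;
        }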

  • @MrMadzina (13 days ago)

    For fp_abs, why not just return abs of a? return abs(a); seems to work fine. In C#:

        public FixedPoint Abs() {
            return new FixedPoint(Math.Abs(Value));
        }

  • @LowByteProductions (13 days ago)

    Nice! The reason I didn't use it in the video is that this implementation allows the user to provide the integer type. The C library absolute value functions are type-dependent, so that would go against this aspect.
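
    One type-independent alternative is a macro, sketched here (note it evaluates its argument twice, so side effects are unsafe):

        /* Works for whichever signed integer type the user chose for fp. */
        #define FP_ABS(a) ((a) < 0 ? -(a) : (a))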

  • @MrMadzina (13 days ago)

    Also, in C#, Math.Floor(-18.2f) returns -19.

  • @beyondcatastrophe_ (13 days ago)

    I think it would have been nice to mention that floating point is essentially scientific notation, i.e. 12.34 is 1.234e1, except that floats use 2^n instead of 10^n for the exponent, which is where the scaling you mention comes from.

  • @LowByteProductions (13 days ago)

    Certainly - this is probably a lot clearer in the video I made about floating point a few years back. Though of course, part of what makes floats complex is the edge cases where that doesn't apply as smoothly: subnormals/denormals, infinities, NaNs, etc.

  • @aleksikuikka6271 (13 days ago)

    That's quite an important intuition. If you said that you calculated something in scientific notation with a fixed number of significant digits, nobody would think there's anything weird or arbitrary about it.

    There's also probably some argument to be made about the expected error in measurements of natural processes following a normal distribution, where the error is likely proportional to the scale of the mean. If you're measuring a big number, you probably expect the error to be similarly 'big'. The alternate hypothesis would be that the deviation gets smaller the bigger the scales you work with, so you'd expect the distribution to get thinner and shorter-tailed, which doesn't immediately seem like a natural assumption to me.

    Software engineering wise, if your hardware has a floating-point unit, I don't think there's any unanimous argument for switching away from using your hardware to the fullest. If you don't know what you're doing with your fixed point numbers, you probably shouldn't be using them; in the best case you're just adding unnecessary complexity (e.g. working with strange engineering units, adding the logic and possible extra variables to do the calculations, etc.), assuming you don't outright lose precision or performance due to the implementation. Whereas if you do know what you're doing, and you have specific requirements where fixed point just works better, then by definition you probably should be using it.

  • @StarryNightSky587 (13 days ago)

    *IEEE 754 entered the chat*

  • @ashelkby (13 days ago)

    Actually, 10011100 is -100 in two's complement representation.

  • @LowByteProductions (13 days ago)

    Ah you're right, not sure what happened there

  • @caruccio (13 days ago)

    Really entertaining video. Thanks!