Bug in Binary Search - Computerphile

Mike talks through a binary search bug that was undiscovered for years!
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: bit.ly/nottscomputer
Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharanblog.com
Thank you to Jane Street for their support of this channel. Learn more: www.janestreet.com

Comments: 891

  • @HouseExpertify · 6 months ago

    Just a random little fact: You can use underscores to make numbers more readable in java (for example: 1_000_000.)

  • @Qbe_Root · 6 months ago

    It's also a thing in Python 3.6+, JS, Rust, C++14 (though with an apostrophe instead of an underscore), and probably a bunch of others

  • @wboumans · 6 months ago

    @@Qbe_Root and C#

  • @AterNyctos · 6 months ago

    Niice. That will come in handy for a project I'm working on. Many thanks! :D

  • @isotoxin · 6 months ago

    🖤

  • @sly1024 · 6 months ago

    In pretty much any language! Rust, python, C#, etc.

  • @dexter9313 · 6 months ago

    It's funny how so many comments point out the performance cost of adding one arithmetic operation. They overlook the fact that arithmetic operation between two already loaded registers is almost instant vs the cache miss monster which is accessing the large array at random positions. You won't measure any significant difference.

  • @axel77killer · 6 months ago

    Yep. That might have been a valid concern 40/50 years ago, but not today

  • @Herio7 · 6 months ago

    I'm more baffled that those people preferred speed over correctness in logN algorithm...

  • @dexter9313 · 6 months ago

    @@Herio7 Speed may be a reasonable choice sometimes if it's significant and you can assert that your use case won't be problematic regarding correctness. But even then, speed won't even change by a significant amount here.

  • @lethern2 · 6 months ago

    True that; it's the people who never measured performance themselves and rely on their (very) limited knowledge of the insanely sophisticated CPU

  • @collin4555 · 6 months ago

    The people who will spend an hour and a megabyte to save a microsecond and part of a byte

  • @LarkyLuna · 6 months ago

    The error is funnier in languages that have unsigned types. The sum/2 will end somewhere inside the array and not throw any errors, just search weird places

  • @rahul9704 · 6 months ago

    Takes longer to discover the bug, but it's more fun I promise!

  • @DoSeOst · 6 months ago

    That's exactly the comment, I was going to write. 🤝🖖

  • @mytech6779 · 6 months ago

    The amd64 ISA has hardware overflow flags on registers. So there is saturating arithmetic in some languages that just returns maximum value of the type in the event of a rollover.

  • @FrankHarwald · 6 months ago

    YES! It's even sneakier if using unsigned types because the values won't even become negative but wrap around to smaller but still wrong values so that often you don't get page faults but simply wrong answers.

  • @th3hutch · 6 months ago

    Note that in C++ it's signed integer overflow that is undefined behaviour; unsigned overflow is defined to wrap around.

  • @B3Band · 6 months ago

    There is a LeetCode binary search problem specifically designed to teach you about this. left + (right - left) / 2 is algebraically equivalent to (left + right) / 2 but avoids overflow. It's a handy identity to memorize for coding interviews.
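
    For reference, a minimal sketch of that overflow-safe midpoint inside a plain Java binary search (illustrative class and variable names; not the JDK implementation):

        class SafeMidpointSearch {
            // Returns the index of target in the sorted array a, or -1 if absent.
            static int binarySearch(int[] a, int target) {
                int l = 0, r = a.length - 1;
                while (l <= r) {
                    // (l + r) / 2 can overflow int; (r - l) / 2 cannot, since 0 <= l <= r
                    int m = l + (r - l) / 2;
                    if (a[m] == target) return m;
                    if (a[m] < target) l = m + 1;
                    else r = m - 1;
                }
                return -1;
            }
        }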

  • @asagiai4965 · 6 months ago

    Doing this before leetcodes

  • @feliksporeba5851 · 6 months ago

    Now try (left & right) + ((left ^ right) >> 1)

  • @__Just_This_Guy__ · 6 months ago

    Or better yet: (left>>1) + (right>>1)

  • @U20E0 · 6 months ago

    note that it can still overflow in general. A/2 + B/2 however can't ( i think )

  • @AxelStrem · 6 months ago

    @@__Just_This_Guy__ have you tested it on two odd numbers

  • @Blackread · 5 months ago

    Fun fact: l + (r-l)/n is a general formula for equal division where n is the number of subsections. (l+r)/2 is just a special case that happens to work for binary search, but when you move up to ternary search (two division points), (l+r)/3 no longer cuts it and you need the general formula.

  • @scottbeard9603 · 5 months ago

    Someone please explain this with an example. It looks so obvious but I don’t know what it means 😂

  • @thijsyo · 5 months ago

    @@scottbeard9603 Let's say L=10 and R=40, and you want to divide into 3 equal pieces instead of 2. The formula (L+R)/3 would tell you to split at (40+10)/3 = 50/3 = 16 (and 33 if you do 2(50/3)). The general formula L + (R-L)/N will tell you to split at 10 + 30/3 = 20 (and 30 if you do 10 + 2(30/3)), giving you a split into 3 equal parts.

  • @SharienGaming · 5 months ago

    @@scottbeard9603 Since someone else already replied with an example, here is the working-out of why it doesn't work for 3 but does work for 2. Let's have a look at the case of n = 2:

        l + (r-l)/2 = (2l)/2 + (r-l)/2 = (2l + r - l)/2 = (l + r)/2

    Now let's look at what happens when you try the same with n = 3:

        l + (r-l)/3 = (3l)/3 + (r-l)/3 = (3l + r - l)/3 = (2l + r)/3 != (l + r)/3

    In general:

        l + (r-l)/n = (nl)/n + (r-l)/n = (nl + r - l)/n = ((n-1)l + r)/n

    So technically that way of writing it down does work, as long as you don't omit the (n-1). Mind you, having the computer work out the division point that way only gets worse as you add more subdivisions - better to keep the numbers small.

  • @firstname4337 · 5 months ago

    "Fun fact" -- you obviously don't know the meaning of the word "fun"

  • @kirillvourlakidis6796 · 2 months ago

    Well, I had fun.

  • @TheFinagle · 5 months ago

    I love that remark about not having a bug in HIS code because Python protects you from it. But also recognizing this can be a real problem sometimes and teaching us how to avoid it.

  • @gtsiam · 5 months ago

    In java in particular, you could just do: (l+r) >>> 1. This should give the correct answer even if l+r overflows, by treating the resulting sum as unsigned during the "division by 2" step.

  • @MichaelFJ1969 · 5 months ago

    I think you're wrong: If l and r are both 32 bits in size, then l+r will be 33 bits. Where do you store this intermediate upper bit?

  • @gtsiam · 5 months ago

    @@MichaelFJ1969 That's the thing: They're not 32 bits - they are 31 bits. Java does not have unsigned integers. And because of the properties of the twos complement bit representation used in signed numbers, l+r will have the same representation as if l and r were unsigned - but division does not have this property. Luckily we can do a bitshift to emulate it. I also know that llvm can be weird about unsigned pointer offsets (or so the rust docs say), so this trick will probably (?) also work in C/C++ - but I'd have to dig into the docs to make sure of that.

  • @roge0 · 5 months ago

    That's actually what OpenJDK's (the reference Java implementation) binary search does too. If anyone's curious what >>> does in Java, it's the unsigned right shift. Right shifting by 1 is the same as dividing by two, but a typical right shift (>>) or division by two will leave the leading bit intact, so if the number was negative, it will stay negative. An unsigned right shift fills the leading bits with zeroes, so when the value is interpreted as a signed value, it's always positive.

  • @williamdrum9899 · 5 months ago

    Java doesn't have unsigned integers? Wow that is terrible.
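
    For reference, a small demonstration of the unsigned-shift trick discussed in this thread (plain Java; the values and class name are illustrative):

        class UnsignedShiftDemo {
            public static void main(String[] args) {
                int l = 2_000_000_000, r = 2_100_000_000;  // both valid non-negative ints
                System.out.println((l + r) / 2);           // wrong: the sum wraps negative, prints -97483648
                System.out.println((l + r) >>> 1);         // right: prints 2050000000, the true midpoint
            }
        }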

  • @simon7719 · 6 months ago

    Let's try to offer an alternative that does what so many seem to be expecting: division before the addition:

        m = l/2 + r/2

    This breaks when l and r are both odd, as there are two individual roundings (for example 3/2 + 5/2 = 3 in integer arithmetic), which prevents the algorithm from making progress beyond that point. What you could do is add the last bit back with some bitwise operations, which boils down to "if both l and r are odd, then add 1 to the result":

        m = l/2 + r/2 + (l & r & 1)

    Or just do it as in the video. The /2 will almost certainly be optimized into >>1 by the compiler if it is advantageous on the target CPU.
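
    A tiny check of those variants (plain Java, illustrative values):

        class HalvedSumDemo {
            public static void main(String[] args) {
                int l = 3, r = 5;
                System.out.println(l / 2 + r / 2);                // 3: both halves round down, the midpoint is wrong
                System.out.println(l / 2 + r / 2 + (l & r & 1));  // 4: add the lost bit back when both are odd
                System.out.println(l + (r - l) / 2);              // 4: the form used in the video
            }
        }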

  • @Greenmarty · 6 months ago

    I'm sure most of us plebs would use the remainder operator to add the remainders back, if we for some reason wanted to complicate the code by making 4 divisions instead of only one.

  • @rabinadk1 · 5 months ago

    I planned to use a floating point to remove doing the bitwise operation and later cast it back to int.

  • @simon7719 · 5 months ago

    @@rabinadk1 Sounds like a huge extra can of worms and also likely a bit slower (although might not be noticeable compared to the cost of memory accesses).

  • @macavitymacavity126 · 4 days ago

    My first idea🤣 Thx for highlighting the problem it would have and for the solution ;0)

  • @IARRCSim · 6 months ago

    6:13 "when your integer becomes 33 bits" He probably means when it requires 32 bits unsigned or more than 32 bits signed. The 2.1 billion he mentions before is roughly 2^31 so he's mostly talking about the range limits of signed 32-bit integers. Unsigned 32-bit integers go up to over 4 billion.

  • @willd4686 · 6 months ago

    Our Prof had a rubber shark he called Bruce. It was supposed to remind us of trouble with integers lurking in the deep. I can't remember the exact lesson. Prof Bill Pulling. Great guy.

  • @ProjSHiNKiROU · 6 months ago

    Rust has a function for "midpoint between two numbers" for this exact situation somehow.

  • @Originalimoc · 6 months ago

    😂 what

  • @VioletGiraffe · 6 months ago

    C++ just recently added std::midpoint function as well.

  • @0LoneTech · 6 months ago

    This is called hadd or rhadd (h=half, r=right) in OpenCL, and there's mix() for arbitrary sections of floating point values. It's an ancient issue; compare e.g. Forth's */ word, which conceptually does a widening multiply before a narrowing divide.

  • @dojelnotmyrealname4018 · 6 months ago

    It's almost as if this particular operation is remarkably common actually.

  • @VioletGiraffe · 6 months ago

    @@dojelnotmyrealname4018, it is very common! It's arithmetic average of two values, and any codebase is bound to have more than a few of those. What's not common at all is operating with values that risk overflowing. Especially in 64 bits; with 32 it's much more of a concern.

  • @schoktra · 6 months ago

    Integer overflow is how the infamous infinite-lives bug in the original Super Mario Bros. for NES works. Lives are stored as a signed value, so if you go over what fits you get weird symbols for your number of lives, and it becomes impossible to lose: the way the math is set up, you can't subtract from a full negative number and wrap back into the positives, but you can add to a full positive and wrap into the negatives. Since it only checks whether you're equal to 0, not lower than it, you end up with infinite lives. But it overflows into other important memory and causes other bugs as well.

  • @williamdrum9899 · 5 months ago

    That's weird that it corrupts other memory. I would have expected lives to be a single unsigned 8 bit variable. Especially since once you get more than 99 the game's print routine indexes out of bounds and starts showing adjacent graphics instead of digits. So obviously the game designers figured 'eh, nobody will get that many extra lives' and didn't bother to check.

  • @gregmark1688 · 6 months ago

    TBF, when Java was written in the 90s, there weren't too many cases where arrays with 2^30 elements were being used. It's not really too surprising it went undetected for so long.

  • @0LoneTech · 6 months ago

    At the time, Java's demand for 32-bit processing was remarkable and often wasteful. Today, waste has been normalized to the point people get offended if you remark a 100GB patch is excessive, and Java has been awkwardly forced into smartcards.

  • @der.Schtefan · 6 months ago

    In the 90s, engineers would have never done l+r halved and floor bs, they would have done r-l, shift, and indexed address, because any Intel 8086 can do this in the address generator unit almost twice as fast, in fact some processors would even fuse this instruction sequence. Java, Python. Estrogen for Programs! Nothing else!

  • @diamondsmasher · 6 months ago

    Programmers should know to check their indexes before arbitrarily throwing them into an array though, that was a problem way back even in the days of C, it’s not a Java-specific bug

  • @gregmark1688 · 6 months ago

    ​@@0LoneTech I always assumed they needed the 32 bit thing because they wanted to do a lot of linked lists or something, which are pretty useless in a 16-bit address space. They sure didn't do it for speed (I hope)!

  • @joshuascholar3220 · 6 months ago

    Before 64 bit operating systems, you couldn't have an array that big anyway.

  • @AnindoSarker · 5 months ago

    I wish I had teachers like you guys on my university. Thank you for making such great quality videos.

  • @nio804 · 5 months ago

    I was wondering at first why just r/2 + l/2 wouldn't work, but with integers, the parts would get floored separately and that would give wrong answers when both r and l are odd.

  • @chingfool-no-matter · 5 months ago

    wouldn't r

  • @nimcompoo · 5 months ago

    i think it should be (r >> 1) + (l >> 1)

  • @dualunitfold5304 · 5 months ago

    Plus, division is expensive compared to addition and subtraction. I'm not sure how much difference it would make in practice, but it makes sense to do it only once instead of twice if you can

  • @nimcompoo · 5 months ago

    @@dualunitfold5304 that is true, but integer division by 2 can be done by a single bitshift?

  • @dualunitfold5304 · 5 months ago

    @@nimcompoo Yeah you're right, I didn't think about that :D

  • @abhishekparmar4983 · 6 months ago

    I am convinced the best teachers are really good communicators

  • @Stratelier · 5 months ago

    I remember coding a binary-search function by hand once (and it was probably susceptible to this edge case). I specifically wanted it to search for a specific value and return its index, OR if the value was ultimately not found, return the index of the first greater-than value (for use as an insertion point). Nothing too complicated technically, but DANG was it frustrating to debug.

  • @happywednesday6741 · 5 months ago

    Sounds like you should've used a hash table

  • @tiagobecerrapaolini3812 · 6 months ago

    This scenario reminds me about calculating the average using floating point values. The first instinct would be just to sum everything then divide by the amount of values, but floats get more imprecise the further from zero they go. So the average might be off if the intermediate sum is too big. A better approach might be going by partial average, since the intermediate values are smaller. But there are other techniques too that go over my head. I remember one day finding a paper with dozens of pages just detailing how to average floating point numbers, it's one of those problems that at first appear to be simple but are anything but that.

  • @mina86 · 6 months ago

    Kahan sum is your friend.

  • @MichaelFJ1969 · 5 months ago

    Yep. It's really a matter of "pick your poison".
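
    For reference, a minimal sketch of the Kahan (compensated) summation mentioned above, in plain Java (illustrative class name and test values):

        class KahanSum {
            // Compensated summation: carries the rounding error of each addition into the next one
            static double sum(double[] xs) {
                double total = 0.0, c = 0.0;    // c holds the running compensation
                for (double x : xs) {
                    double y = x - c;           // apply the correction from the previous step
                    double t = total + y;       // big + small: low-order bits of y may be lost here
                    c = (t - total) - y;        // recover exactly what was lost (algebraically zero)
                    total = t;
                }
                return total;
            }

            public static void main(String[] args) {
                double[] xs = new double[10_000_000];
                java.util.Arrays.fill(xs, 0.1);
                System.out.println(sum(xs));    // much closer to 1000000 than a naive running total
            }
        }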

  • @l33794m3r · 5 months ago

    5:23 the graphic is wrong. it'd result in a negative sum as r>l. it should say r-l.

  • @JMcMillen · 6 months ago

    There is a little bit of significance to the number 17. If you ask people to pick a number from one to twenty, apparently 17 is the most popular choice. Plus, if you have a unbalanced twenty sided die that rolls 17 all the time, people are less likely to realize somethings up as it's not as likely to be noticed vs a die that keeps rolling 20. Especially since it will get obfuscated by different modifiers that get added to the roll so it's usually never just a 17. Also, it's prime.

  • @asynchronousongs · 6 months ago

    wait what why? source?

  • @cataclystp · 6 months ago

    @@asynchronousongs Why would someone go on the internet and confidently spread misinformation 🙃

  • @Phlarx · 6 months ago

    @@asynchronousongs I have no sources, but it does make sense that 17 is the most "random-looking" number from the group. Evens are roundish, so they're out. Same with multiples of 5. Single digit numbers are too simple. 13 has a reputation for being either lucky or unlucky. 19 is too close to the maximum. 11's two digits match. The only number left is 17. Would be interested to see if someone can find an actual source though.

  • @mxMik · 6 months ago

    17 is known to be "the least random number", the "Jargon File" says.

  • @Uerdue · 5 months ago

    I was expecting him to choose 42. 😢

  • @JohnSmith-qc4ye · 6 months ago

    Happened to me a decade ago in my home-built embedded application: an EEPROM with 65536 bytes, using unsigned index variables only, no signed index at all!! The overflow wrap-around is sufficient to cause that bug. It would have caused an endless loop at startup in the EEPROM search. Luckily I spotted it in a code review before the EEPROM was more than half full/used. Thanks for that video.

  • @zimriel · 5 months ago

    ho li fuk that takes me back. i remember eprom's from my TSR days in the middle 1980s

  • @morwar_ · 5 months ago

    The way of explaining this was really good.

  • @theantipope4354 · 6 months ago

    4:31 The problem is even worse if your language *doesn't* do overflow or bounds-checking, (more common than you might think!) in which case your code will be looking at memory outside your array & Very Bad Things will happen. The way to prevent this in your code is to subtract your current position (CP) from the size of your array, (integer) divide that by 2, add it to CP, giving your next CP. This will work for any size of array that is no larger than your largest possible positive integer. This, of course, is how you handle a task like this in assembler.

  • @jnawk83 · 6 months ago

    this comment is the whole video summed up.

  • @Uerdue · 5 months ago

    In assembler (well, in x86 at least), you can just add the numbers anyway, then do the division with a `rotate right with carry` instruction and are done. :D

  • @Yotanido · 6 months ago

    "That's a Python comment, not a Java comment" THE PAIN! Every goddamn time!

  • @AnttiBrax · 6 months ago

    Appropriate punishment for using end-of-line comment. 😂

  • @michelromero7671 · 6 months ago

    Binary search can be used for more complex tasks, like finding the answer to a problem where you know the range of the possible answers and the problem boils down to solving a monotonic function. I recently stumbled upon a bug with the (l + r) / 2 when l and r can be negative numbers. I was using C++, and in C++ integer division rounds in the direction of 0; so, for example, 3 / 2 = 1 and -3 / 2 = -1. But I was expecting -3 / 2 = -2, as that is the nearest integer less than the actual result. In Python that is the behaviour of integer division: -3 // 2 = -2.
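
    Java behaves like C++ here (integer / truncates toward zero), and Math.floorDiv gives Python-style flooring; a quick illustration (plain Java, illustrative values):

        class DivisionRounding {
            public static void main(String[] args) {
                System.out.println(-3 / 2);                 // -1: truncates toward zero
                System.out.println(Math.floorDiv(-3, 2));   // -2: floors, like Python's -3 // 2
                int l = -3, r = 0;
                System.out.println((l + r) / 2);            // -1: rounds toward zero
                System.out.println(l + (r - l) / 2);        // -2: still rounds downward, since r - l >= 0
            }
        }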

  • @michelromero7671 · 6 months ago

    @@GeorgeValkov yep, I learned about it after I found the bug.

  • @Shubham_Chaudhary · 6 months ago

    These are insidious bugs, I ran into one due to different rounding modes for floating points (towards zero, nearest, round up, round down). I guess the integer division is rounding towards zero.

  • @yeetmaster6986 · 6 months ago

    @@GeorgeValkov What's that? I'm new to C++, so I don't really understand what that means

  • @lodykas · 6 months ago

    That's bit shifting. It exists in all languages, but it's a "low level" operation. Anyway, since each bit in a binary number is a power of two, shifting the representation is the same as doubling/dividing by 2 depending on endianness. Most languages implement their bitshift operators independently of endianness, though

  • @lodykas · 6 months ago

    In this case of right shifting, the unit bit is discarded, so it's integer division with the remainder discarded. Note that the unit digit being the same as the potential remainder only works because the divisor is the same as the base (like dividing 42/10: remainder 2, the last digit).

  • @lborate3543 · 6 months ago

    Great job with the lighting.

  • @MrHaggyy · 5 months ago

    Quite interesting that this was a hidden problem in Java for so long. In microarchitectures this has been a well-known problem for decades, as the numbers don't need to be absolutely big; it's enough for both the left and the right side number to be greater than half of the maximum number representation. There are also patterns where you divide both l and r by 2 with bitshifting every iteration and add or subtract them together according to the control flow.

  • @CubicSpline7713 · 6 months ago

    Nicely explained.

  • @Nellak2011 · 6 months ago

    Whenever he said "l+r" I already knew from experience to write some kind of code to prevent an overflow. Has no one else had to endure a C++ class where everything is trying to make your program break?

  • @worabnag · 6 months ago

    C++ programmers have a different mindset 😂

  • @Takyodor2 · 6 months ago

    @@worabnag _C++_ has a different mindset, it will try to break _you_

  • @3rdalbum · 5 months ago

    People who only know high-level languages will not likely be thinking of overflows. If they got a crazy error message about an array being indexed at negative one million or whatever, they might eventually realise what is going on, but hand on my heart I'm sure it would take me a while. Hours or days. I wouldn't have anticipated it ahead of time.

  • @Nellak2011 · 5 months ago

    @@3rdalbum I primarily use Javascript and that language is so poorly designed that it has me writing custom code to verify an integer instead of it having that built-in type. I think because I am so used to having to fight against the language constantly and being forced to adopt an extremely defensive style of coding, I was more aware of such an error, despite javascript being a higher level language.

  • @AndreuPinel · 6 months ago

    I remember a kind-of similar bug in Intersystems Caché: 4 * 3 / 6 returned 2, but 4 / 6 * 3 returned 2.00001 (and this little butterfly ended up destroying the city of New York). These commutative/grouping operations that are the basics of arithmetic can sometimes create a lot of mess in our code when released into production environments. I think it is important to add comments in the code so the newer generations understand why we make these changes. E.g.:

        int m = l + (r - l) / 2; // =====> Do NOT change to m = (l + r) / 2 =====> it can potentially lead to a positive int overflow

  • @Elesario · 6 months ago

    Looks like your first example is just an example of floating point coercion (assuming you expected integers), along with the fact that floating point numbers are an approximation of a value, so due to the underlying math behind them you sometimes get tiny weird rounding errors.

  • @tatoute1 · 6 months ago

    One has to be a fool to think computers can support ℝ numbers. They do not; it is absolutely not possible. As such they do not support associativity or commutativity, or many other properties, even the more obvious ones. 1+ε-1 may not be ε, etc... Even integer support is partial, at best. Newbies think they can solve the problem by using "fuzzy" rounding or other tricks. Nerds know they have to prove the code they wrote.

  • @px2059 · 5 months ago

    No need to add a comment. Add a unit test with a big number. It will fail when someone changes it.

  • @landsgevaer · 6 months ago

    For demonstration purposes, could also have defined l and r as byte or short integers...

  • @ats10802b · 6 months ago

    The array index is always an int

  • @landsgevaer · 6 months ago

    @@ats10802b Not a big Java user here, but can't you index with a fewer-bit integer in Java? I thought it would be implicitly cast. The point is. you could define the l and r variables as 1-byte or 2-byte ints, then you don't need the billion-element array to recreate the bug.

  • @tylerbird9301 · 6 months ago

    i don't think you can have anything other than int for indices

  • @akompanas · 6 months ago

    IIRC Java does arithmetic in int and long only, so shorts and bytes won't actually have this problem. Also, this bug didn't get spotted for so long because nobody had enough RAM to hold arrays of such sizes.

  • @phiefer3 · 6 months ago

    @@tylerbird9301 I think what he's getting at is that the overflow doesn't actually happen at the indexing part of the code, but at the addition part of it. So if L and R are defined at a smaller datatype, then when you added them they'd still overflow resulting in a negative number when it gets used as the array index.

  • @tolkienfan1972 · 6 months ago

    I like the related ternary search used to find a minimum in a convex array/function.

  • @MarkStoddard · 5 months ago

    I haven't written binary search in a long time, but I wonder if I used to write it the "right" way. It does make more sense to me that I want to find the midpoint of this leftover chunk then index it by adding it to the end of the left bit.

  • @dkickelbick · 6 months ago

    Nice. I thought the solution would be m = l/2 + r/2, but maybe you get in trouble when l and r are both odd.

  • @B3Band · 6 months ago

    Integer division can't result in a decimal. In Java, 5/2 == 2, not 2.5 So for (l, r) = (1, 3), you get (1/2) + (3/2) = 0 + 1 = 1.

  • @thomasbrotherton4556 · 6 months ago

    I thought the same at first, but addition is a simpler operation than division, which is why they did it this way. You could also do r - (r - l) / 2.

  • @SaHaRaSquad · 6 months ago

    @@thomasbrotherton4556 Division by 2 is a simple bit shift

  • @SomeNerdOutThere · 6 months ago

    This was my first thought, though I was thinking with a bit shift as that should be faster: m = (l >> 1) + (r >> 1);

  • @DFPercush · 6 months ago

    @@SomeNerdOutThere ... + (l & r & 1) . odd numbers fixed.

  • @martincohen8991 · 6 months ago

    Are there any situations when (l+r)/2 and l+(r-l)/2 give different values when (r-l)/2 is truncated?

  • @LarkyLuna · 6 months ago

    L+R and R-L should have the same remainder mod 2 and will truncate the same way, I believe. L + (R/2 - L/2) maybe would be different than (L + R)/2? Testing an example with L=1, R=6:

        (L + R)/2 = 3.5 → 3
        L + (R/2 - L/2) = 1 + 3 - 0 = 4

    Yup

  • @timothylawrence2789 · 6 months ago

    ok so some may have already posted this but couldn't you just do (R*0.5)+(L*0.5) to get the mid point? or am i missing something ?

  • @pvandewyngaerde · 6 months ago

    I can see a similar overflow problem happening when summing up for an 'average'

  • @warlockpaladin2261 · 6 months ago

    😬

  • @pvandewyngaerde · 6 months ago

    Divide by how much if you dont know the number of items yet ?

  • @DanStoza · 6 months ago

    @@pvandewyngaerde You just have to keep track of the number you're currently on. For example, if you have two numbers, the average is (a + b) / 2. Let's call this A1. If you add a third number 'c', it's (a + b + c) / 3, which you can rewrite as (a + b) / 3 + c / 3. We can then rewrite the first term as A1 * (2 / 3), giving us A1 * (2 / 3) + c / 3, allowing us to divide before adding. Just to continue the example, if we call our last result A2, then when we add a fourth number 'd', we can compute A2 * (3 / 4) + d / 4.
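
    A minimal sketch of that running average in Java (illustrative values and variable names):

        class RunningAverage {
            public static void main(String[] args) {
                double[] xs = {2.0e9, 2.1e9, 1.9e9, 2.05e9};
                double avg = 0.0;
                int n = 0;
                for (double x : xs) {
                    n++;
                    // avg_n = avg_(n-1) * (n-1)/n + x/n : divide before adding,
                    // so no full intermediate sum is ever formed
                    avg = avg * ((n - 1) / (double) n) + x / n;
                }
                System.out.println(avg);  // ≈ 2.0125e9
            }
        }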

  • @SaHaRaSquad · 6 months ago

    @@DanStoza That would lead to less accurate results though, computers are bad with accuracy in divisions.

  • @alexaneals8194 · 6 months ago

    If you are dealing with super large numbers, just use a 64-bit integer. If you can max out 9 quintillion in the addition, then use BCD (binary coded decimal); it's guaranteed to max out your memory before you can max it out. Also, it will be a performance hog.

  • @mausmalone · 5 months ago

    A topic I would love to see Computerphile cover: relocatable code. How can a binary be loaded in an arbitrary memory location and still have valid addresses for load/store/branch? How did that work in the old days vs now? What is real mode and what is protected mode? Are there different strategies on different platforms?

  • @MichaelFJ1969 · 5 months ago

    Yes! I support your request!

  • @agasthyakasturi6236 · 5 months ago

    The binary is loaded at a different address each time (I'm assuming PIE is set when compiling), and instructions within the binary are always at a constant offset from the base address at which the ELF is loaded. As long as the ELF knows its load address, the instructions are always at a constant offset.

  • @volodumurkalunyak4651 · 5 months ago

    Real mode is an outdated operating mode that Intel is removing in X86S. Modern UEFI BIOSes make it really hard to enter real mode (aka 16-bit mode) outside of SMM. Pushing UEFI boot with secure boot forward does just that. For now SMM (system management mode) starts in real mode, and the very first thing a CPU does after entering SMM is switch into long protected mode (64-bit mode) inside SMM.

  • @volodumurkalunyak4651 · 5 months ago

    One more thing about different modes: modern ARM cores are mostly 64-bit only (they don't support 32-bit, neither for the whole OS, nor for just applications, nor for ARM TrustZone, the ARM variant of SMM mode on x86).

  • @lambdaprog · 6 months ago

    More of this please.

  • @bitman6043 · 4 months ago

    also you can do (r + l) >> 1. shifting right will effectively divide by two regardless of overflow

  • @xJetbrains · 5 months ago

    The logical right shift >>> will also fix it, because it'll return from negative to positive if necessary: (l+r) >>> 1.

  • @beaconofwierd1883 · 6 months ago

    Would it not be more efficient to just divide both by 2 first and then add them? Then you just have (r>>1) + (l>>1). Though you might have to add the remainder if both are odd, so (r>>1) + (l>>1) + (1 & r & l). Pretty much the same number of operations, but you're not limited to using signed ints.

  • @rb1471 · 6 months ago

    Well, why not l + ((r-l) >> 1) and cut out the comparison? The addition/subtraction is nothing compared to division

  • @beaconofwierd1883 · 6 months ago

    @@rb1471 right, I was thinking l could be bigger than r, but that can never happen :)

  • @jmodified · 6 months ago

    (l + r) >>> 1 works as long as they are signed (limited to max positive int value). >>> is unsigned shift.

  • @kbsanders · 6 months ago

    IntelliJ/JetBrains IDEs are awesome.

  • @TehPwnerer · 6 months ago

    I wouldn't have thought to use pointers that way, obviously you subtract l from r and add that offset/2 to l. No overflows possible if you use pointer arithmetic correctly

  • @berndeckenfels · 6 months ago

    You can also interpret the sum of two signed integers unsigned (has one more bit) and divide it by 2 to get it back into range

  • @Gokuroro · 6 months ago

    If the type of the length was an unsigned int, could it just become a loop without ever letting the user know? (example: if the new m for the user's luck becomes the first average, it would loop again until it got back to the back sum and go back to the initial average) Edit: fix the type of L, R and M should be unsigned int, actually (and length as well, I suppose?)

  • @prepe5 · 5 months ago

    Funny, I had that problem a few months ago while implementing a binary search on a microcontroller. I had to use a word as the index, so I only had 65535 as the max index, and I noticed that the overflow was not handled correctly in the basic binary search. It didn't even occur to me that such an obvious problem was unknown for a long time.

  • @ZipplyZane · 6 months ago

    A crossover with Numberphile would be nice here. You could have them show why (r+L)/2 = L+(r-L)/2. It's not too hard to show here, though. So I'll try:

        (r+L)/2 = r/2 + L/2
                = r/2 + L/2 + (L/2 - L/2)
                = r/2 - L/2 + (L/2 + L/2)
                = (r-L)/2 + L
                = L + (r-L)/2

    Bonus question: why not just use the second step (r/2 + L/2)? Two reasons:
    1. Division is generally the slowest arithmetic operation, so you want to do as few of them as possible.
    2. The fastest math uses integers. Integer division will mean the .5 part gets dropped. So, if both r and L are odd, the midpoint will be off by 1.

    At least, those are my answers.

  • @mb-3faze · 6 months ago

    Would have thought that L/2 + R/2 would have been better than L + (R - L)/2. L/2 is just a bit shift right, so pretty fast. Handling the case where both L and R are odd is not too difficult (just add one).

  • @Andersmithy · 5 months ago

    So you’re performing the same number of operations, but swapping a subtraction for division. But also you’re branching to add 1/4 of the time?

  • @mb-3faze · 5 months ago

    @@Andersmithy I suspect Mike's implementation is more reliable across architectures and compilers. Dividing by 2 is just a bit shift, so (L >> 1) + (R >> 1), which has got to be pretty quick. The issue is you have to add (L & 1) && (R & 1) to the result to account for both being odd numbers. But there are no branches in the code. So:

        ans = (L >> 1) + (R >> 1) + ((L & 1) && (R & 1))

    (The compiler *could* have a branch in the logical bit - after all, if L & 1 is zero it doesn't have to do the R & 1 part.) However, Mike's code is just L + (R - L) / 2, so, yeah - I suspect subtraction is pretty much implemented in hardware. The thing is, L >> 1 and R >> 1 could be pre-computed and stored (as two equally long arrays); then maybe my solution would be a femtosecond faster :)

  • @unkn0vvnmystery · 6 months ago

    5:11 You can also do ((x/2) + (y/2)). Some people may find this easier.

  • @dfs-comedy · 6 months ago

    That has its own problems. As someone else pointed out, if l=3 and r=5 and your language's integer division operator rounds down, 3/2 + 5/2 gives you 3 and the algorithm loops forever.

  • @hololightful · 6 months ago

    I wish you would link the relevant video(s) in the description... At the start he refers to another video I haven't seen and need to go look for.

  • @hololightful · 6 months ago

    I guess not that big a deal... Only 3 videos back...

  • @karoshi2 · 3 months ago

    I remember hitting that bug and investigating a bit on it. But it happened so rarely, that we decided not to put any effort into it. Don't remember exactly, would assume we didn't have test cases for that because you don't test standard libraries. "Millions of people use that every single day, and you think _you_ found a bug in that?!?" Still holds an appropriate amount of humility. But _sometimes_ ...

  • @Yupppi · 6 months ago

    Funnily enough I just last week watched some C++ convention talk, might've been Kevlin Henney or someone else, mentioning this exact issue where the integers were so big they overflowed before getting the average. Might've actually been about satellite arrays. Perhaps it was Billy Hollis after all, but someone anyway. I was thinking maybe you'd just halve them first, but then again it's possibly two float arithmetic operations which isn't as lovely. Although, you could probably get away with a >> 1 type of trick and get almost free out of jail. Anyway Pound's method is pretty obviously better when it's just one halving and addition/subtraction.

  • @IAMDonk · 5 months ago

    I'm not sure if I like 'r' indexing the last element as opposed to one past the last element, similar to a sentinel index you might get in C/C++ - or is it just me? Is Java different?

  • @MilanFlower-dk5cm · 2 months ago

    It makes the code somewhat simpler as you can test for l==r rather than l+1>=r. If I remember correctly from the time I played with binary search...

  • @transcendtient · 6 months ago

    Why does Java use signed integers for an array structure that doesn't allow negative integers?

  • @antoniogarest7516 · 6 months ago

    Java primitives are signed. Also, operating with signed numbers can be less error-prone than with unsigned numbers. For example, in C/C++ when you take an unsigned number and subtract another unsigned number greater than it, you'll get the wrong result. For example, doing the following operation with unsigned 8-bit numbers: 1-3 won't be -2, it will be 254.

  • @0LoneTech · 6 months ago

    Firstly, the problem remains with unsigned integers; the incorrectly calculated index just might access defined memory and lead to more confusing misbehaviour, such as an infinite loop or incorrect answer. Secondly, it can be useful to apply an offset to an index, such as the (r-l)/2 value here, and in other algorithms it wouldn't be odd for such a step to be negative. Thirdly, Java doesn't know the number is an index until it's used to index, and the algorithm isn't array specific. There do exist languages which can restrict index types to match, like Ada or Clash. In Python negative indices index from the right.

  • @dan00b8 · 6 months ago

    The even funnier part is that if they were unsigned, the bug might have been harder to spot: the overflow still happens, but this time it starts from 0, so it gives a valid index inside the array and no error is thrown. The result would still be incorrect, just harder to notice, since no exception is thrown that would make it obvious something was fishy

  • @rafagd · 6 months ago

    Java doesn't do unsigned. The creators never liked the idea, and it's just the way the language is.

  • @0LoneTech · 6 months ago

    @@antoniogarest7516 That just shifts the over/underflow boundaries around, though. -100-100 isn't 56 either. Java was designed with a 32 bits is enough attitude, Python switches to arbitrary precision, and Zig allows you to specify (u160 and i3 are equally valid types). Ada is also quite happy to have a type going from 256 to 511.

  • @blr-Oliver · 5 months ago

    Java has 'unsigned bit shift right' operator '>>>' which works perfectly for division by powers of 2. (l + r) >>> 1 will work just fine. Intermediate result, the sum of two integers can at most overflow by a single bit which occupies the sign bit. No information is lost, it's just treated as negative number. So, when shifted back with single zero it produces correct positive number.

  • @svenbb4937 · 5 months ago

    In Java and C# the array length is an int. You can easily solve the problem by casting to long first. If you need larger arrays or matrices, they are usually sparse arrays or matrices, which need a more sophisticated datatype implementation anyway.

  • @ChrisM541 · 5 months ago

    "You can easily solve the problem by casting to long first."...until the boundaries of long are breached, then we are back to square one. Rule #1: if you are writing a general-purpose routine, always, always write in a 'safe' way. Never, ever assume a limit when none has been implemented.

  • @svenbb4937 · 4 months ago

    @@ChrisM541 As I said, the array length is guaranteed to be an integer in Java and C#. The ranges of int and long are exactly specified. It doesn't make sense to program 'safer' than the language spec. In fact, the current OpenJDK version even takes advantage of the fact that int is signed: int mid = (low + high) >>> 1;

  • @maheshwarankirupa5965 · 5 months ago

    I don't know if its just me, but what about (ceil(l/2) + floor(r/2)), this should be correct right??

  • @Nellak2011 · 6 months ago

    One other thing I would add is a defensive early return that checks that left is not greater than right, as we assume, because if it is greater than right then it will lead to an underflow.

        if (l > r) {
            return new Error("Left pointer is greater than Right pointer. It is a programming bug.");
        }
        int midpoint = l + (r - l) / 2;

  • @dealloc · 6 months ago

    That won't ever be the case as long as we're searching the _index_ of an array, which can only be a non-negative integer. First, the expression L + (R - L) / 2 will never underflow as long as L >= 0 and R >= 0, because L is added back into the result of (R - L) / 2; even if (R - L) / 2 were negative, adding back L corrects for it. Secondly, it would be a compiler/runtime bug, because the while loop counts from L until R, which are bounded by 0...length of the array and only shrunk towards the midpoint, resulting in a positive integer. In case we can have negative indices, then we'd need to conditionally check the bound and use (R + L) / 2 instead, otherwise fall back to the previous equation.

  • @rogo7330 · 6 months ago

    If you use signed integers, it would never happen in C, because signed integer overflow is undefined. Basically, the compiler CAN assume that `l` will never be less than `r` because you never change them in that way. So, be careful with assumptions like that.

  • @MrMikeCool · 5 months ago

    How do you handle finding multiple matches in a binary search? I assume when you find a match you have to start a linear search to the left and right of it and continue until you find something that doesn't match?

  • @0LoneTech · 5 months ago

    You can keep going with binary search, though many implementations only return the midpoint upon match, not the high and low range. Searching for any match, left or right edge are distinct goals, often expressed as looking for the first or last index a value could be inserted while maintaining order.

  • @oliverdowning1543 · 5 months ago

    Why not just divide both numbers by 2 then add them? (I know normally subtraction is faster than division but since it's dividing by 2 just bit shifting should work, plus if you use bit shifting of an integer instead of division then you can do away with flooring the result as well).

  • @colinmaharaj · 6 months ago

    As a C dev, are the stdlib qsort and bsearch ok to use?

  • @rich1051414 · 5 months ago

    l + (r - l) * 0.5 That is a standard linear interpolation function. Basically, you walk the value from 'l' to 'r', with the given progress value, which is 0.5 with the example above.

  • @SaddCat · 5 months ago

    At 5:23 is it supposed to be R-L instead of L-R? Maybe it works both ways I don’t know.

  • @longbranch4493 · 5 months ago

    Yeah, it should have been R-L. It won't work both ways since (L-R)/2 will be a negative value.

  • @philipoakley5498 · 5 months ago

    Is there also any issue with any optimisers which could well 'simplify' the equation back to its original, or variants there of?

  • @pwhqngl0evzeg7z37 · 5 months ago

    Depends on how you mean "any." Any respectable compiler should not have this bug.

  • @philipoakley5498 · 5 months ago

    Respectable compilers follow the standards, and therein lies the problem. You are fighting the misunderstandings about unspecified, undefined and implementation-defined behaviour. It's the YMMV problem. Expectations and reality often crash into each other.

  • @pwhqngl0evzeg7z37 · 5 months ago

    @@philipoakley5498 This is a very impressionistic comment. Are you suggesting that it would sometimes be correct that an optimization changed behavior if it was someone's expectation?

  • @philipoakley5498 · 5 months ago

    @@pwhqngl0evzeg7z37 it can definitely happen. If it's allowed then the compiler can do what it likes, especially when it's a marginal case (less people complaining... ;-). It's even worse when when you try to debug because (IDE dependent) the optimisation gets switched off to allow line by line debug.

  • @pwhqngl0evzeg7z37 · 5 months ago

    @@philipoakley5498 Sure, it could happen hypothetically, but this would be a bug, hence not a respectable compiler.

  • @5-meo-dmt299 · 5 months ago

    I like to just implement binary search using bitwise operations. So, for the index that you are looking at, just go through the bits one by one (starting at the most significant bit and starting with index zero), set them to 1, and then reset them to 0 if the index is too high. Just make sure to check whether you are within bounds. This way, you don't need math and therefore can't run into integer overflows.

  • @grivza · 4 months ago

    That sounds so wasteful and also weird. Say have an index of 12, everything up to 16 is too big (that's like at least 27 comparisons), then you reach 8, 8 is okay, 8 + 4 is too big, so we turn 4 off again (right?), then 8+2 is okay, and 8+2+1 is okay. So we end up with 11? Or do we stop at 8, which again is not correct. What reason do we ever have to turn off the 8?
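
    For reference, one concrete reading of that bit-by-bit idea, sketched in Java as a lower-bound search over a sorted array (an interpretation with illustrative names, not the commenter's exact scheme):

        class BitwiseLowerBound {
            // Returns the index of the first element >= target (a.length if there is none).
            static int lowerBound(int[] a, int target) {
                int pos = 0;                                            // elements known to be < target
                int step = Integer.highestOneBit(Math.max(a.length, 1));
                for (; step > 0; step >>= 1) {                          // try each power of two, largest first
                    if (pos + step <= a.length && a[pos + step - 1] < target) {
                        pos += step;                                    // keep this "bit" of the answer
                    }                                                   // otherwise drop it ("too high")
                }
                return pos;
            }

            public static void main(String[] args) {
                int[] a = {1, 3, 5, 7};
                System.out.println(lowerBound(a, 5));  // 2
                System.out.println(lowerBound(a, 8));  // 4 (past the end: not found)
            }
        }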

  • @geraldakorli · 6 months ago

    Could you just do l/2 and r/2 then add both answers. Is that efficient?

  • @Vaaaaadim · 6 months ago

    If l and r are integers, and we do integer division, then l/2 + r/2 may not necessarily be the midpoint of l and r. In Java for example, 5/2 + 9/2 = 2 + 4 = 6, but the midpoint of 5 and 9 is supposed to be 7.

  • @miamor_un · 5 months ago

    Hey, it was a really great explanation, appreciate the effort. One small doubt: what if we do L/2 + R/2? This should also work, as both L and R are in the range, so L/2 and R/2 are in the range too.

  • @someaccount3438 · 5 months ago

    I was thinking the same, but l + (r - l) / 2 is only 1 division, so maybe it is more efficient.

  • @alstuart · 5 months ago

    L/2 + R/2 gives the incorrect result when L and R are both odd numbers. Can you think of why?

  • @fox_the_apprentice · 5 months ago

    @alstuart @miamore_un **Assuming Java:** It gives the incorrect result for both even and odd numbers, because L and R aren't defined variables. That's easy to fix by correcting the variable names to l and r. (Java variable names are case-sensitive.) It also gives the incorrect result for odd numbers due to integer division, but that's also easy to fix by changing it to l/2.0+r/2.0 . Regardless, their intent is correct. Who knows, maybe they were writing code in a language that doesn't do integer division like that, and which has case-insensitive variable names!

  • @fox_the_apprentice · 5 months ago

    @@alstuart Not sure my other comment notified you correctly. Sorry if this is a double-ping!

  • @slipperynickels · 6 months ago

    needing a second to think of the number right before 1.2B is super relatable, lol

  • @Merthalophor · 5 months ago

    As usual, Rust is safe with no overhead. Just a fun fact about this language: it won't let you implement this algorithm without specifying explicitly what's going to happen in the case of overflow. It literally won't compile. You have to use methods like `overflowing_add` and similar to do what you want to do. Sounds like a pain, but it's actually less of an issue than you'd expect, because in fact the only place where you're really making integer operations like that is when you're working with data where an overflow would be fatal (like e.g. with array indices). In the case of float types, you don't have to specify this (but you can't compare floats with `==` ad hoc, you have to know what you're doing).

  • @nonnufan · 5 months ago

    Bit captivated by the lighting in this ep. Made me feel like Dr Mike was giving me a Voight-Kampff test.

  • @Gunbudder · 5 months ago

    this is more of a standard practice of embedded software engineering than a "bug" in the binary search. you always consider intermediate overflows when doing data unit conversions or working with fixed point decimal numbers.

  • @raydaypinball · 5 months ago

    What would have happened if the code had used unsigned integer as the type? Just incorrect results, right ? Is that why they chose to use signed integer so if something goes wrong, at least you know about it with an exception?

  • @dickybannister5192 · 5 months ago

    Haha, nice. As an aside, could I request a topic? I've been reading a lot about the progress in SAT solvers over the last decade or so. I tried to watch one of Marijn Heule's videos (think it was on the Simons Institute) but got a bit lost (too many acronyms!), though it sounded really, really useful, aside from the big numbers that make the popular science news. The trade-off between fast and clever intrigued me. So did the idea of looking for a solution vs showing there isn't one.

  • @darkhorse1200 · 6 months ago

    Just wondering, would using a long data type not work for this?

  • @lokedhs · 6 months ago

    It would. It's slightly slower though. Especially on 32-bit architectures.

  • @skeetskeet9403 · 6 months ago

    @@lokedhs and that's why modern languages have a specific pointer-width unsigned integer type which is exactly enough to address any object in memory on the target platform, the fastest option that provides correctness See: Rust's usize, C/C++'s size_t

  • @0LoneTech · 6 months ago

    Nothing in the binary search algorithm requires your search space to be something physically stored in memory. Consider e.g. doing a binary search on a function to calculate a root.

  • @MatthisDayer · 6 months ago

    @@lokedhs if you're on a 32-bit platform and dealing with billions of records, int overflow is just one of your concerns

  • @phiefer3 · 6 months ago

    Using a larger data type doesn't fix the problem though, all it does is move it somewhere else. Maybe it moves it far enough away that nobody encounters it, or maybe other restrictions in your specific use case can't encounter it. But in terms of a general implementation of a binary search, the issue of possible overflow will still exist. You either need to be using something like Java that will sidestep the overflow issue, or you need to implement something like in the video so that overflows simply cannot happen.

  • @SojournerDidimus · 5 months ago

    Does Java allow for a shift right without sign extension? Because then the eventual result will be valid again.

  • @nutsnproud6932 · 6 months ago

    I learned r-l in college on a PDP11 running COBOL as we had to keep the numbers small on a database for theatre ticket sales.

  • @Takyodor2 · 6 months ago

    I'm really surprised that this program takes noticeable time to run, is filling the array really that slow? (I expected that the CPU would be able to assign one index per cycle at least, maybe it is allocation that's slow?)

  • @turdwarbler · 5 months ago

    When you create a large array, the memory is not actually allocated in physical memory at that time. It's only when it's indexed that a physical page (usually 4K) is allocated into the virtual address space. Create a loop when you initialise a very large array, then go back and do it a second time. The second time will normally be faster, as all the memory has been allocated and assigned to physical memory

  • @AlberioOrion · 5 months ago

    Why not just use the normal maths answer of r/2+l/2? Is it just an attempt to save on a slightly costly division operand or am I missing something else?

  • @seapearl3175 · 1 month ago

    What about doing 2 divisions and adding the two halves?

  • @asagiai4965 · 6 months ago

    I'm used to, and mostly use, the minus approach to find the middle. Idk, that thing was counterintuitive back then.

  • @dhruvagole7651 · 5 months ago

    I didn't understand: coming from a C programming background, how is array size limited by a language? Isn't that an addressing restriction depending on the architecture?

  • @-aexc- · 5 months ago

    some languages do a lot of data management in the background with fancy data structures. some languages use dynamic arrays by default

  • @conorstewart2214 · 5 months ago

    Is there not a better way than "l + (r - l)/2"? Couldn't you instead just do "l/2 + r/2"? Since dividing by 2 is just a bit shift, it would only require two bit shifts and an addition, rather than one addition, one subtraction and a divide/bit shift. Also, since the divide/bit shift happens before the addition, it would ensure that the integer doesn't overflow, just like the formula used does.

  • @ZenithWest169 · 5 months ago

    Couldn't you do (L >> 1) + (R >> 1) (bit shift L and R over by one and add them together)? Effectively (L+R)/2 is also equal to L/2 + R/2. Normally for efficiency you want to reduce ifs and division unless the divisor is a power of 2.

  • @fox_the_apprentice · 5 months ago

    Dividing by two via this method is not recommended. The Java compiler already optimizes divisions, and l/2+r/2 is much more human-readable. Remember, the person updating your code 5 years from now might be an intern still in college; write it with them in mind. This is also true for C++. I'm assuming it's also true for most other languages. There is a Stack Overflow question for "which-is-better-option-to-use-for-dividing-an-integer-number-by-2" which goes into more detail about which is more appropriate for the situation. I'll post a URL directly to the question in my next comment, as YouTube comments containing URLs are held for review.

  • @jeromethiel4323 · 6 months ago

    Would not a simpler fix be to just divide the two original numbers (L and R in this case) by two, and then add the result?

  • @xtifr · 6 months ago

    That's still 3 operations (two divisions and an addition), so no, not really.

  • @jeromethiel4323 · 6 months ago

    @@xtifr Still a way to solve the problem by making sure the numbers do not overflow. May not be computationally superior, but it *IS* a solution to the problem. I'll admit that doing an integer subtraction then a division, then an addition is computationally more efficient. I was looking at it from a pure mathematics perspective.

  • @simon7719 · 6 months ago

    @@jeromethiel4323 Consider what happens if you do this with numbers 1 and 3. In integer math the result is not 2.

  • @jimr7987 · 5 months ago

    This only changes the place where a bug appears. If an array is defined indexed from -1,000,000,000 to 1,000,000,000 (minus a billion to plus a billion) then then very first step in a binary search will overflow with the new formula. But the original formula will not. To fully fail safe the code would require an if statement to choose the expression that does not overflow...

  • @ecjb1969 · 6 months ago

    Why not L

  • @germanpaulolustosatorres5285 · 5 months ago

    Couldn't you simply have done r/2+l/2 ? Or there is also an issue with that which I'm missing?

  • @laujimmy9282 · 5 months ago

    Yes you can, just confirmed it via Leetcode 278.

  • @GregorKappler · 6 months ago

    nice edge case. I wonder if the speed penalty introduced by the extra arithmetic could be reduced by computing the truncated average with right shifts and bitwise AND? like (l >> 1) + (r >> 1) + ( l & 1 & r) (Is this even correct? I did not doubt this enough, maybe.)

  • @jmodified · 6 months ago

    It is correct and slightly faster than even (l + r) / 2, but we can do better than that. Integer.MAX_VALUE + Integer.MAX_VALUE does not overflow if we view the bit patterns as representing unsigned ints. So we can simply do: (l + r) >>> 1 Summing all l in: 0 So (l + r) / 2 had a speed penalty to begin with.

  • @GregorKappler · 6 months ago

    @@jmodified beautifully simple. Thanks for making the effort to benchmark! I congratulate you for your enthusiasm!

  • @wybren · 5 months ago

    @@jmodified I was looking for this answer.

  • @danielschmider5069 · 5 months ago

    Why not use l/2 + r/2 ? is it only because two separate divisions are computationally more intensive?

  • @MilanFlower-dk5cm · 2 months ago

    Because if l=1 and r=3, you get m=1 then r=0 and everything crashes since r

  • @labor4 · 6 months ago

    what if the input itself is too large? (is this a different topic?)

  • @Krzmbrzl · 6 months ago

    It can't ever be. If it was, it had already overflown and thus the bug would be somewhere else in the code.

  • @kiwy1257 · 5 months ago

    wouldn’t l/2 + r/2 also work? it is easier to remember, but is it more costly to compute?

  • @volodumurkalunyak4651 · 5 months ago

    It needs to be at least l/2 + r/2 + (l & r & 1)

  • @brantwedel · 6 months ago

    I wonder if any languages have a compiler optimization that tries to simplify mathematical operations: so it would take "l + (r - l) / 2" and turn it back into "( l + r) / 2" 🤔

  • @0LoneTech · 6 months ago

    That style of optimization exists, e.g. in gcc's -ffast-math, which enables some for floating point processing. However, the buggy code has undefined behaviour which the corrected code does not, so this erroneous change should not be produced by an optimization pass.

  • @asagiai4965 · 6 months ago

    I think you made the code more complicated and expensive by doing that.

  • @antonliakhovitch8306 · 6 months ago

    Generally speaking the answer is no. Compiler optimizations shouldn't change the behavior of the code, so something like this would be considered a bug. Optimizations WILL do things such as replacing multiplication with bitshifting when possible, or reordering math when it doesn't make a difference (for example, multiple consecutive addition operations)

  • @JarkkoHietaniemi · 6 months ago

    @@antonliakhovitch8306 That optimization would be correct only if the numbers are ideal numbers, behaving exactly like math says. The computer language integers do not do that, they operate in modulo arithmetics.

  • @antonliakhovitch8306 · 6 months ago

    @@JarkkoHietaniemi Multiple consecutive additions are fine to reorder, even with overflow

  • @anon_y_mousse · 6 months ago

    I guess I didn't notice you writing it that way in the first video, but I've known about this weird way of getting half the distance between two indices for a long time and specifically avoided it because of this known issue. I generally use C and adding a single extra instruction that takes one clock is not a big deal for a compiled language. Even at a billion entries it's more than fast enough to not warrant worrying about it. This is one of those edge cases that can ruin your year and is probably one of the reasons that Knuth came up with that stupid phrase. I guess I should've paid more attention in that first video and chided you for it, but I was fighting my Python installation that didn't want to use numpy. Bad Mike!

  • @johnniefujita · 6 months ago

    I remember this error.... probably the most famous bug of all time

  • @KaiKunstmann · 4 months ago

    If your machine model uses two's complement to represent negative numbers (i.e. left bit for the sign, like almost every computer; Java even requires it on a language level), then another solution would be, to replace the division-by-2 by an unsigned shift-right operation. This works, because an addition of non-negative numbers can at most overflow by one bit, i.e. into the sign-bit, but never beyond that, e.g. 01111111+01111111=11111110. An unsigned shift-right operation by 1 always shifts-in a zero on the left, thereby undoing the overflow while being mathematically equivalent to a truncated division-by-2 (exactly what we need), e.g. 11111110>>>1=01111111.

  • @gabedavis3294 · 5 months ago

    Would l/2 + r/2 also work in this case?

  • @K2dawilla · 2 months ago

    Why the run around, isn't this simply (L/2 + R/2)?

  • @Vincent-kl9jy · 5 months ago

    why not just change the order of operations? (R/2 + L/2)

  • @sasho_b. · 1 month ago

    Rounding. But yes.

  • @MartinBarker · 5 months ago

    It's actually not a more complicated process; it's actually less computationally complex. Divides are computationally complex, so the smaller your divide is the better, and (L+R)/2 will always be larger than (R-L)/2. Division in a computer is done with a number of additions, so the new method, ((R-L)/2)+L, is actually much simpler for a computer to perform