Would You Play this Bayesian Betting Game? | Quant Interview Questions

After drawing 6 black balls and 4 white balls from an urn with an unknown number of black and white balls, you are offered a bet. If the next ball is black, you get a payout. If the next ball is white, you lose everything. Would you take this bet? Are things as simple as they seem?
Quant interviews are full of interesting mathematical brain teasers and puzzles. Learn about probability, statistics, and computer science by giving these problems a go!
If you enjoyed this video, smash that like button, hit share, and subscribe! You can also follow us on social media with the following links:
Instagram: / thecuriositytheorem
Twitter: / curiositytheorm
Reddit: / curiositytheorem
#Quant #Interview #Internships #Research #Trading
Endcard Music from #Uppbeat (free for Creators!):
uppbeat.io/t/pryces/music-is
License code: 6OHNXTYFEAI29CMY
Music by Vincent Rubinetti
Download the music on Bandcamp:
vincerubinetti.bandcamp.com/a...
Stream the music on Spotify:
open.spotify.com/playlist/3zN...

Comments: 11

  • @CuriosityTheorem · 24 days ago

    Did you get this problem right? How do you think it would change with more draws -- at what point do the frequentist and Bayesian approaches become close enough that you no longer need to worry about sample size? Let me know in the comments below!
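
The question above can be made concrete with a small sketch. Assuming a uniform Beta(1, 1) prior (the assumption debated in the comments below), the Bayesian point estimate is the rule-of-succession mean (k+1)/(n+2), while the frequentist estimate is k/n; the gap between them shrinks roughly like 1/n:

```python
# Compare the frequentist estimate k/n with the Bayesian posterior mean
# (k+1)/(n+2) under a uniform Beta(1, 1) prior, as the sample grows.
from fractions import Fraction

def frequentist(k, n):
    return Fraction(k, n)

def bayesian(k, n):
    # Posterior after k black draws in n total is Beta(k+1, n-k+1);
    # its mean is (k+1)/(n+2) (Laplace's rule of succession).
    return Fraction(k + 1, n + 2)

for n, k in [(10, 6), (100, 60), (1000, 600)]:
    gap = abs(frequentist(k, n) - bayesian(k, n))
    print(n, float(frequentist(k, n)), float(bayesian(k, n)), float(gap))
```

With 6 black out of 10 the two estimates differ by about 0.017; by n = 1000 the difference is already negligible for practical purposes.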

  • @CheckmateSurvivor · 23 days ago

    I just became your 666th subscriber. Are you going to do anything about that?

  • @jeremiahreilly9739 · 20 days ago

    Fascinating, challenging.

  • @dansheppard2965 · 23 days ago

    I agree that this is the Bayesian answer, but as almost always with this approach, the choice of prior is very dodgy. It is easy to create models satisfying the question where the true distribution is far from the assumption of equal likelihood. Certainly the numbers are both "unknown", and it would seem that assuming something about black and white balls to bias the prior would count as "knowing" something about them, but assuming equal likelihood in the prior is also "knowing" something about the balls, or about the general setup of this universe -- there is never any justification for assuming initial equiprobability.

  • @here_4_beer · 23 days ago

    Well, sometimes it makes sense to have a biased prior over a flat one: spam filters, for example, are more conservative and assume that ~70% of mails are a priori spam. The thing is, if you don't use Bayes you still use it, just with a flat prior, i.e. total ignorance; with an informative prior you instead start from some knowledge. In both cases you can update your prior over time.
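
The biased-vs-flat prior point above can be sketched in a few lines. This is a hypothetical illustration: the Beta(7, 3) prior (mean 0.7) stands in for the ~70% spam figure, and the data of 2 spam out of 10 mails is made up:

```python
# Posterior mean for "probability a mail is spam" under a flat Beta(1, 1)
# prior vs. a biased Beta(7, 3) prior (mean 0.7, per the ~70% figure).
# The data -- 2 spam out of 10 mails -- is purely illustrative.
from fractions import Fraction

def posterior_mean(a, b, spam, ham):
    # Beta(a, b) prior + binomial data -> Beta(a + spam, b + ham) posterior.
    return Fraction(a + spam, a + b + spam + ham)

spam, ham = 2, 8
flat   = posterior_mean(1, 1, spam, ham)  # total ignorance: 3/12 = 0.25
biased = posterior_mean(7, 3, spam, ham)  # starts at 70% spam: 9/20 = 0.45
print(float(flat), float(biased))
```

The same ten mails leave the biased filter estimating 45% spam where the flat prior says 25% -- with more data, the two posteriors converge.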

  • @typha · 23 days ago

    Agreed, the standard assumption that all is equal comes from the philosophers of science and has no place in pure mathematics. To go further, the problem actually says "An urn contains an unknown number of black balls and an unknown number of white balls" -- in other words, the urn does contain a nonzero number of both*, so p=0 and p=1 should not be options at least.

    And in practice we could probably place some sort of bound on the number of balls in the urn. I feel like here we have snuck in the assumption that the urn could be more massive than the Earth. There could probably not be more than a trillion balls before the 'balls' would have been called 'sand' and the 'urn' a 'small earthen building' -- though of course seeing it would give us a better idea of how many balls would be possible.

    *Assuming 'contains a number of' could not be referring to the number 0, in which case we would have known this without being told, and we should also consider the unknown number of yellow balls.

  • @here_4_beer · 22 days ago

    @@typha OK, to clarify: there is a lot not mentioned in this video, including that it is idealized and a model for education. For instance, the standard error, which in Poissonian statistics is 1/sqrt(n): if you pull from the urn 100 times, your error is 1/sqrt(100) = 1/10, i.e. 10%, which affects any statistical quantity sampled (in this case p). He has drawn what, 10 times? Anyway, the larger the dataset gets, the smaller the standard error becomes. If you look at the "German tank problem", for example, you will find a derivation with error bars provided -- and there the model is prone to fail with too few samples. However, the derivations can become lengthy and irritating. Hope this helps a bit.
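
The 1/sqrt(n) scaling mentioned above is easy to check by simulation. A hypothetical sketch: for a binomial proportion the exact standard error is sqrt(p(1-p)/n) rather than 1/sqrt(n), but the scaling with n is the same, and p = 0.6 is an illustrative choice matching the draws in the problem:

```python
# Simulate repeated experiments to watch the standard error of the
# estimated proportion p-hat shrink like 1/sqrt(n).
import math
import random

random.seed(0)

def empirical_se(p, n, trials=2000):
    # Std. deviation of p-hat = (#black draws) / n across `trials` experiments.
    estimates = [sum(random.random() < p for _ in range(n)) / n
                 for _ in range(trials)]
    mean = sum(estimates) / trials
    return math.sqrt(sum((e - mean) ** 2 for e in estimates) / trials)

p = 0.6  # illustrative "true" fraction of black balls
for n in (10, 100, 1000):
    theory = math.sqrt(p * (1 - p) / n)  # binomial SE; still ~ 1/sqrt(n)
    print(n, round(empirical_se(p, n), 4), round(theory, 4))
```

Each tenfold increase in n cuts the standard error by about a factor of sqrt(10) ≈ 3.2, in both the simulation and the formula.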

  • 24 days ago

    After watching only the problem statement: Whether the game is fair depends on the knowledge that John has and/or whether he always offers to play this game. If John knows that there are many more white balls, and only offers the game when he gets lucky in drawing more black balls, you should not play. In general, if someone _offers_ you a game (or a trade), you should assume that they only do that when they think they get an advantage out of it. If it's a win-win situation, that can be fine. But otherwise, be careful.

  • @gernottiefenbrunner172 · 18 days ago

    Even if John knows nothing about the actual distribution, he can always make you bet on whatever color was more common in his _tiny_ sample, and ever so slightly win on average (with shrinking margin for more skewed actual distributions).

  • 18 days ago

    @@gernottiefenbrunner172 No. If the distribution is extremely skewed, his margin doesn't just diminish -- it quickly turns negative. E.g., to give a really trivial case, suppose that with 100% probability the distribution is 9 to 1 (but John doesn't know this).

  • @alonamaloh · 22 days ago

    In principle you would have to integrate the function 2x-3(1-x) = 5x-3 with a weight given by the Beta(7,5) distribution. The answer you get is the same (-1/12) as using the expected value of the beta distribution, as you did in the video. But this only works because the function we are integrating is a degree-1 polynomial.
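
The comment's claim is quick to verify: by linearity of expectation, E[5X - 3] = 5·E[X] - 3, and the mean of Beta(7, 5) is 7/12, giving -1/12 exactly. A sketch in Python (the payoffs behind 2x - 3(1-x) are taken from the comment itself):

```python
# Verify: E[5X - 3] for X ~ Beta(7, 5) equals -1/12.
import random
from fractions import Fraction

# Exact, by linearity of expectation: E[5X - 3] = 5 * E[X] - 3,
# and the mean of Beta(7, 5) is 7 / (7 + 5) = 7/12.
exact = 5 * Fraction(7, 12) - 3
print(exact)  # -1/12

# Monte Carlo cross-check with the stdlib Beta sampler.
random.seed(42)
n = 200_000
mc = sum(5 * random.betavariate(7, 5) - 3 for _ in range(n)) / n
print(round(mc, 3))  # approximately -0.083
```

As the comment notes, plugging the posterior mean into the payoff only agrees with the full integral because 5x - 3 is linear in x; a nonlinear payoff would require integrating against the whole Beta(7, 5) density.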