7: Design a Rate Limiter | Systems Design Interview Questions With Ex-Google SWE

Science & Technology

Discussion Question: Is edging a form of rate limiting?

Comments: 74

  • @isaacneale8421 · 4 days ago

    Good discussions. Very helpful in preparing for upcoming interviews. A few things I don’t understand about the solution:

    1) With a distributed rate limiter sharded on userId/IP address like you’ve proposed here, I can’t see the need for read replicas. Every operation is probably a write operation. That’s under the assumption that the vast majority of requests don’t get throttled and thus put a timestamp in a queue. Read replicas would only be helpful when the request is throttled. So I think if we want 100% accuracy, we can’t do leaderless replication. But actually I would argue that there are a lot of cases where we would prefer speed over accuracy. And if our scale is high enough that requests come so often a single machine doesn’t have the I/O to handle them (or the blocking data structures are under too much pressure), then to support higher throughput we would need to compromise accuracy. By allowing writes to go to multiple machines, we can scale much higher in write throughput. The loss is accuracy. We also then need to share that information between nodes: perhaps with a leader, perhaps not. We could also use gossip-style dissemination.

    2) I can’t understand how the cache would work on the LB. This would, I presume, be for throttled requests only. I suppose the rate limiter could return the earliest time that the next request would succeed, and the cache would be valid only until then. Is that the idea?

    Another good thing to talk about would be UDP vs TCP here, which again falls into the accuracy-speed tradeoff. Overall great discussion in the video; maybe some of these points could further someone else’s thinking on the problem space while preparing.

  • @jordanhasnolife5163 · 3 days ago

    1) Fair point - you may not want to read from that replica, but you do still want a replica in case you have to perform failover. Agree with your point regarding a multi-leader/leaderless setup; that can be feasible if our limits are looser. 2) It's just a write-back cache for certain users to do all of the rate limiting logic like normal, but in the load balancer node, so that we can avoid a network call. 3) Yeah, I think introducing UDP in a problem where dropped/unordered data is really not acceptable may end up with us just finding ourselves reimplementing TCP lol, but I think that's a good thing to bring up!

  • @ingenieroriquelmecagardomo4067 · 5 months ago

    Good video! Thinking of applying this "rate limiting" thing to the dudes locked in my basement that keep up my daily leetcode streak.

  • @jordanhasnolife5163 · 5 months ago

    You should consider evicting them from your basement (and replacing them with fresh entries)

  • @ziyinyou938 · 1 month ago

    Big thanks from another Googler 🙏

  • @rahulrollno.-0689 · 5 months ago

    Thanks for this Video too Sir...

  • @gangsterism · 5 months ago

    edging is backpressure management

  • @jordanhasnolife5163 · 5 months ago

    Good point!

  • @mickeyp1291 · 5 months ago

    Re pimples: I had a pizza face growing up, it's an oil issue. Half a lemon, half a grapefruit, half an orange, and a tsp of olive oil (so you don't get an ulcer) in the blender, daily. At night, a dab of toothpaste on each, it'll dry them up; after two weeks, one month max, you'll need a new reason to not get some. Very nice videos, I'm enjoying them immensely.

  • @jordanhasnolife5163 · 5 months ago

    I appreciate it! Yeah I eat a ton of dairy due to lifting (whey protein agh), but what can ya do, I don't really care too much

  • @yrfvnihfcvhjikfjn · 5 months ago

    Just get whey isolate you noob

  • @shameekagarwal4872 · 3 days ago

    Amazing, Jordan! I remember seeing Redis sorted sets used for the sliding window, while you use linked lists. Your solution does have better time complexity, but maybe they use sorted sets because requests can reach the rate limiter "out of order"? Not sure why they would overcomplicate the solution otherwise.

  • @jordanhasnolife5163 · 3 days ago

    Nor am I!
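For anyone curious what the sorted-set approach discussed above might look like, here is a minimal sketch assuming redis-py; the key naming, the 100-requests-per-60-seconds limit, and the allow_request helper are made up for illustration. Scoring members by their timestamp is what lets slightly out-of-order arrivals still land in the right place. Note that the check and the ZADD are not atomic here; a later comment in the thread touches on that.

```python
import time
import uuid
import redis

# Hypothetical limits, for illustration only.
WINDOW_SECONDS = 60
MAX_REQUESTS = 100

r = redis.Redis(host="localhost", port=6379)

def allow_request(user_id: str) -> bool:
    """Sliding-window check using a Redis sorted set keyed by user/IP."""
    key = f"ratelimit:{user_id}"
    now = time.time()

    pipe = r.pipeline()
    # Drop timestamps that have slid out of the window.
    pipe.zremrangebyscore(key, 0, now - WINDOW_SECONDS)
    # Count what remains inside the window.
    pipe.zcard(key)
    _, in_window = pipe.execute()

    if in_window >= MAX_REQUESTS:
        return False

    # Score by timestamp; a unique member tolerates duplicate timestamps
    # and out-of-order arrival.
    r.zadd(key, {f"{now}:{uuid.uuid4()}": now})
    r.expire(key, WINDOW_SECONDS)
    return True
```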

  • @user-cj5ig7sr8s · 3 months ago

    Correction at 14:19: it should be memcache; you typed "memcached", which is a persistent-storage service based on memcache.

  • @Goat_-sx1cy · 5 months ago

    At this point, you should just double down on the memes man, love that stuff!

  • @Goat_-sx1cy · 5 months ago

    also, pp :p

  • @jordanhasnolife5163 · 5 months ago

    ᕦ(ò_óˇ)ᕤ

  • @12akul · 1 month ago

    Hi Jordan. Great video as usual! I had one question: can't we use leaderless replication with partitioning to send a particular userId/IP to the same node every time?

  • @jordanhasnolife5163 · 1 month ago

    Sure but then you really just described single leader replication with partitioning lol
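A tiny sketch of the routing idea in this exchange, assuming a static node list; the node names and the hash-mod scheme are illustrative, and a real deployment would more likely use consistent hashing so that adding or removing a node doesn't reshuffle every user.

```python
import hashlib

# Hypothetical list of rate-limiter nodes; in practice this would come
# from service discovery or configuration.
NODES = ["ratelimit-0:6379", "ratelimit-1:6379", "ratelimit-2:6379"]

def owner_node(user_id: str) -> str:
    """Route every request for a given userId/IP to the same partition."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]
```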

  • @8partak · 5 months ago

    Regarding the need to use locks to sync access to lists in Redis (for the sliding window case): it might be possible to move the list update logic into a Redis function, or to use a Redis transaction, which would make execution atomic. Taking into account that Redis is single-threaded, there would be no need to use locks.

  • @jordanhasnolife5163 · 5 months ago

    Yeah for Redis you're totally right, just wanted to mention this in the general case if someone were to use a multithreaded server.
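To make the point above concrete, here is a hedged sketch of pushing the whole check-and-record into a single server-side Lua script via redis-py's register_script, so no client-side locking is needed even with a multithreaded caller. The key layout, limit, and window values are illustrative assumptions, not anything from the video.

```python
import time
import redis

r = redis.Redis()

# Runs atomically on the Redis server: prune, count, conditionally record.
SLIDING_WINDOW_LUA = """
local key = KEYS[1]
local now = tonumber(ARGV[1])
local window = tonumber(ARGV[2])
local limit = tonumber(ARGV[3])

redis.call('ZREMRANGEBYSCORE', key, 0, now - window)
if redis.call('ZCARD', key) >= limit then
  return 0
end
redis.call('ZADD', key, now, ARGV[4])
redis.call('EXPIRE', key, window)
return 1
"""

check = r.register_script(SLIDING_WINDOW_LUA)

def allow(user_id: str, limit: int = 100, window: int = 60) -> bool:
    now = time.time()
    member = f"{now}:{user_id}"  # unique-enough member for this sketch
    return check(keys=[f"ratelimit:{user_id}"],
                 args=[now, window, limit, member]) == 1
```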

  • @rakeshvarma8091 · 2 months ago

    Nice video, Jordan. One small question though: in the final picture, where are we going to keep the sliding window logic that kicks out the expired counts? Is it inside the LB, or will we create a new rate limiting service which uses Redis?

  • @jordanhasnolife5163 · 2 months ago

    Wherever we're doing the rate limiting - in reality it could very well just be a custom server that we've deployed with our own code and stuff will just live in memory.

  • @ShivangiSingh-wc3gk · 2 days ago

    We don't necessarily need our rate limiter to be available all the time, right? 1) We have the cache at the load balancer. 2) The rate limiting service is not on the load balancer, so if it is down for some time it won't affect the service as a whole.

  • @jordanhasnolife5163 · 2 days ago

    Yeah, but we do want it up as much as possible to avoid spammers

  • @sahilkalamkar5332 · 3 months ago

    Hi, I was wondering if rate limit rules would come into the discussion (for this particular endpoint, this many requests remain for this IP). These configurations need to be stored somewhere, right? Probably a DB? Also, correct me if I am wrong: in Redis, the running counters of the rate limit are stored, right? Like, 5 requests have been exhausted. Also, how are we refreshing the limits here? Say, after a minute has passed, I need to reset the limits, right?

  • @jordanhasnolife5163 · 3 months ago

    If we're doing fixed size windows, our counts get reset on the new (hour, minute, whatever time window). If we're using sliding windows, a background process will expire requests that are outside of our window.
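As a concrete illustration of the fixed-window reset described above, here's a minimal sketch assuming redis-py; embedding the window number in the key means counts start fresh automatically when the next window begins. The key names and limits are made up for illustration.

```python
import time
import redis

r = redis.Redis()

# Hypothetical limit: 100 requests per user per fixed one-minute window.
LIMIT = 100
WINDOW_SECONDS = 60

def allow_fixed_window(user_id: str) -> bool:
    """Fixed window: the key embeds the current window number, so counts
    reset automatically when the next window begins."""
    window_id = int(time.time()) // WINDOW_SECONDS
    key = f"ratelimit:{user_id}:{window_id}"

    count = r.incr(key)
    if count == 1:
        # First hit this window: let Redis clean the key up afterwards.
        r.expire(key, WINDOW_SECONDS * 2)
    return count <= LIMIT
```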

  • @sleekism · 5 months ago

    In the sliding window algorithm, did you say someone could make the design decision of always adding every request to the list EVEN if the request was outside the window and would otherwise be invalid? Could you explain that please, or clarify if that wasn't what you meant?

  • @jordanhasnolife5163 · 5 months ago

    Ah sorry, no, I think what I meant there is when you hit your rate limit with the sliding window you can: 1) not add events over the limit to the linked list (this will mean that you're only bottlenecked by the existing events in the linked list) 2) add them to the linked list (now you have to wait for all of them to expire before you can make more API calls).
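Here is a small in-memory sketch of the two options described in that reply, using a deque as the linked list (expired timestamps are only ever popped from the head, which is also relevant to the binary-search-tree question further down the thread). The class name and the record_rejected flag are purely illustrative.

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """In-memory sliding window over a linked-list-like structure (deque)."""

    def __init__(self, limit: int = 100, window: float = 60.0,
                 record_rejected: bool = False):
        self.limit = limit
        self.window = window
        self.record_rejected = record_rejected  # toggles option 1 vs option 2
        self.events: deque[float] = deque()

    def allow(self) -> bool:
        now = time.time()
        # Expire events that have slid out of the window (head of the list).
        while self.events and self.events[0] <= now - self.window:
            self.events.popleft()

        allowed = len(self.events) < self.limit
        if allowed or self.record_rejected:
            # Option 2 also records rejected attempts, so a caller has to go
            # fully quiet before they are let back in.
            self.events.append(now)
        return allowed
```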

  • @prinzmukka · 4 months ago

    Jordan, thank you for the great content. Could you please share the slides used for all sys design 2.0 videos?

  • @jordanhasnolife5163 · 3 months ago

    Hey! I will upload them in bulk when the current series is done :)

  • @jiananstrackjourney370 · 4 days ago

    Great video! Can the linked list be a binary search tree instead? Inserting an element is slower, but when you try to remove from it, it takes O(log n) instead of O(n); you would have to rebalance the binary search tree once in a while in O(n), but not always.

  • @jordanhasnolife5163 · 4 days ago

    I believe in our case we're only removing from the head of the linked list, which is why I use that

  • @susiebaka3388 · 5 months ago

    Will they ask you to implement algorithms in an interview? I have boilerplate for a sliding window in Redis Lua, and I've used nginx's default, which I think is leaky bucket... the implementation isn't hard, just some CLI stuff. In an interview, how likely do you think they are to ask for details about the algorithm itself?

  • @jordanhasnolife5163 · 5 months ago

    I doubt they'd want you to go *that* in depth, but they may ask for some high level details, I'm not entirely sure.

  • @debarshighosh9059 · 3 months ago

    One doubt: at 7:48 you have put the services behind a load balancer. Can a load balancer distribute load among separate services? Isn’t that the job of API gateways?

  • @jordanhasnolife5163 · 3 months ago

    I guess it depends on the implementation, but sure, I'm fine using the term API gateway.

  • @user-cm3pm4oh9j · 5 months ago

    Hi there, awesome video! I have a doubt: how will the load balancer cache help? The rate limiting data must reset after a specified time frame, so how will the cache be updated unless we let the request pass through the load balancer to the RL servers?

  • @jordanhasnolife5163 · 5 months ago

    What do you mean it must be reset? The load balancing cache can basically just run the same exact code as the rate limiter would. That way it can act as a write back cache and you can avoid an extra network call.

  • @user-cm3pm4oh9j · 5 months ago

    @@jordanhasnolife5163 Okay got it. Thanks for your response.

  • @zen5882 · 3 months ago

    @jordanhasnolife5163 How would that work? Like, a subset of the requests can just be fulfilled by the load balancer, and the rest go on to do the network call?

  • @jasdn93bsad992 · 2 months ago

    Which app do you use for the whiteboarding in your videos?

  • @jordanhasnolife5163 · 2 months ago

    OneNote

  • @utkarshkapil9272 · 15 days ago

    golden

  • @jordanhasnolife5163 · 15 days ago

    Shower

  • @Snehilw · 5 months ago

    Wondering if you have all the iPad notes stored somewhere for quick revision before an interview.

  • @jordanhasnolife5163 · 5 months ago

    I do yeah - apologies for being a bum in terms of uploading these but I'll get to it soon enough, probably when I finish this series so that I can upload them in batch

  • @Snehilw · 5 months ago

    @jordanhasnolife5163 Thanks man, yeah that would be very helpful for sure. Looking forward to these.

  • @GeorgeDicu-hs5yp · 4 months ago

    You don't talk about costs in any of these videos. Cost is a very important aspect which can redefine the entire solution. But good video, you bring new dimensions to my thought process.

  • @jordanhasnolife5163 · 4 months ago

    Agreed on the cost part, and this is certainly true IRL. Though to be fair, I don't think that most systems design interviews are expecting you to have a concrete idea of the costs of your solutions. Though at a high level, I agree that you should probably have an idea when making designs which areas are costly/could potentially be improved upon.

  • @jjlee4883 · 4 months ago

    This is a beginner question, but how does sharding with single leader replication work? Does each shard range of databases have its own leader?

  • @jordanhasnolife5163 · 4 months ago

    Yep!
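A toy illustration of that "yep": each shard of the key space gets its own single-leader replica set, so writes for a given userId hash go to that shard's leader, and its followers exist mainly for failover (per the replication discussion at the top of the thread). The ranges and node names below are made up.

```python
# Hypothetical layout: one leader and its followers per hash-range shard.
SHARDS = {
    "shard-0": {"range": (0x00, 0x55), "leader": "rl-0a", "followers": ["rl-0b"]},
    "shard-1": {"range": (0x56, 0xAA), "leader": "rl-1a", "followers": ["rl-1b"]},
    "shard-2": {"range": (0xAB, 0xFF), "leader": "rl-2a", "followers": ["rl-2b"]},
}
```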

  • @LawZist · 5 months ago

    I'm curious how you would design a real time bidding system, or multiplayer game server

  • @jordanhasnolife5163 · 5 months ago

    I do have a video on the game server! Just have to remake it at some point. For bidding, probably some partitioned redis cache on auctionId, you'll have to use atomic operations to increase the bid.

  • @LawZist · 5 months ago

    @jordanhasnolife5163 I will look for the game server vid! Would you use Kafka and Flink to process the bidding requests? Would you stream process it or batch process it? And how would you update and show users the latest bid if it constantly updates? Thanks in advance 🙏🏻

  • @user-wj1wy6ph5q · 5 months ago

    🙇

  • @suyashsngh250 · 2 months ago

    9K views damn dude you are blowing up

  • @jordanhasnolife5163 · 2 months ago

    My toilet at least for sure

  • @akbarkool · 4 months ago

    I don't understand the point of a cache on top of Redis. Won't every request require us to update the cache to get the latest count from the Redis store? A cache would make sense if it were read heavy, I think.

  • @jordanhasnolife5163 · 4 months ago

    The cache is on the load balancer, so it helps us avoid an additional network call. If it's a write-back cache, that means it is the source of truth, hence we don't have to go to the Redis store.

  • @akbarkool · 4 months ago

    @jordanhasnolife5163 Yes, but doesn't every cache hit require us to update the rate counter by 1?

  • @davidoh0905 · 14 days ago

    @jordanhasnolife5163 Does write-back cache just mean that the lookup logic lives in the LB and the rate limiter service is just responsible for writing the data into Redis? So, separating the responsibility between the LB and the rate limiter?

  • @jordanhasnolife5163 · 12 days ago

    @davidoh0905 It just means that a subset of the rate limiting data will live in the load balancer, as opposed to Redis.
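A hedged sketch of what that might look like: the load balancer owns the limiter state for a pinned subset of users (so their checks never leave the LB), and everything else falls through to the shared Redis-backed limiter. The class, the pinned_users set, and the redis_allow callable are all assumptions for illustration.

```python
import time
from collections import defaultdict, deque
from typing import Callable

class LoadBalancerRateLimitCache:
    """Write-back cache sketch: for users pinned to this LB node the limiter
    state lives here (the LB is the source of truth, no network call);
    everyone else falls through to the shared Redis-backed limiter."""

    def __init__(self, pinned_users: set,
                 redis_allow: Callable[[str], bool],
                 limit: int = 100, window: float = 60.0):
        self.pinned_users = pinned_users
        self.redis_allow = redis_allow      # e.g. the Lua-based allow() above
        self.limit = limit
        self.window = window
        self.local = defaultdict(deque)     # user_id -> timestamps in window

    def allow(self, user_id: str) -> bool:
        if user_id not in self.pinned_users:
            return self.redis_allow(user_id)   # one network hop for the rest

        events = self.local[user_id]
        now = time.time()
        while events and events[0] <= now - self.window:
            events.popleft()                   # expire old events locally
        if len(events) >= self.limit:
            return False
        events.append(now)
        return True
```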

  • @davidoh0905 · 14 days ago

    Shouldn't the rate limiter be part of the API gateway?

  • @jordanhasnolife5163 · 12 days ago

    In practice, probably. Or at least the write back cache part of it should be.

  • @titusandronikus1337 · 3 months ago

    no flink? wtf. who are you and where’s our boy jordan??

  • @jordanhasnolife5163 · 3 months ago

    Rare Flink L, Redis paid me off

  • @rajatahuja6546 · 5 months ago

    Can you share notes if possible?

  • @jordanhasnolife5163 · 5 months ago

    I will get to it eventually! Will post on my channel about it when I do

  • @LawZist · 5 months ago

    Can you do a goatee tutorial? Thanks

  • @jordanhasnolife5163 · 5 months ago

    I don't think you want that from me - how about a talking to no women tutorial I'm pro at that

  • @LawZist · 5 months ago

    @jordanhasnolife5163 🤣
