How Razorpay scaled their notification system
Science & Technology
System Design for SDE-2 and above: arpitbhayani.me/masterclass
System Design for Beginners: arpitbhayani.me/sys-design
Redis Internals: arpitbhayani.me/redis
Build Your Own Redis / DNS / BitTorrent / SQLite - with CodeCrafters.
Sign up and get 40% off - app.codecrafters.io/join?via=...
In the video, I discussed the criticality of notification systems for fintech companies like Razorpay. I delved into Razorpay's notification system architecture, emphasizing the importance of timely notifications for transactional integrity. I highlighted the challenges Razorpay faced in scaling their notification system, such as database bottlenecks and handling peak loads during special events. I detailed the incremental changes Razorpay made to their architecture, including prioritizing incoming loads, implementing separate queues with different priorities, and addressing database scalability by writing asynchronously through Kinesis. Additionally, I underscored the significance of observability in ensuring system health and meeting SLAs. This comprehensive approach to system design and optimization is crucial for delivering a seamless user experience, as demonstrated by Razorpay's notification system overhaul.
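The "separate queues with different priorities" idea above can be sketched in a few lines. This is a hypothetical, simplified in-process model (the real system uses SQS/Kinesis): every priority tier is drained in parallel, and priority is expressed purely as how many workers each tier gets.

```python
import queue
import threading

# Illustrative worker counts per priority tier (numbers are made up).
WORKERS_PER_PRIORITY = {"P0": 4, "P1": 2, "P2": 1}
queues = {p: queue.Queue() for p in WORKERS_PER_PRIORITY}
delivered = []
lock = threading.Lock()

def worker(q):
    while True:
        event = q.get()
        if event is None:  # sentinel: shut this worker down
            q.task_done()
            return
        with lock:
            delivered.append(event)  # stand-in for "send the notification"
        q.task_done()

threads = []
for priority, count in WORKERS_PER_PRIORITY.items():
    for _ in range(count):
        t = threading.Thread(target=worker, args=(queues[priority],), daemon=True)
        t.start()
        threads.append(t)

# Enqueue some events, then shut down cleanly with one sentinel per worker.
for i in range(10):
    queues["P0"].put(f"txn-{i}")
    queues["P2"].put(f"promo-{i}")
for priority, count in WORKERS_PER_PRIORITY.items():
    for _ in range(count):
        queues[priority].put(None)
for t in threads:
    t.join()

print(len(delivered))
```

No tier is starved: P2 still drains, just with a quarter of P0's consumption capacity.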
Recommended videos and playlists
If you liked this video, you will find the following videos and playlists helpful
System Design: • PostgreSQL connection ...
Designing Microservices: • Advantages of adopting...
Database Engineering: • How nested loop, hash,...
Concurrency In-depth: • How to write efficient...
Research paper dissections: • The Google File System...
Outage Dissections: • Dissecting GitHub Outa...
Hash Table Internals: • Internal Structure of ...
Bittorrent Internals: • Introduction to BitTor...
Things you will find amusing
Knowledge Base: arpitbhayani.me/knowledge-base
Bookshelf: arpitbhayani.me/bookshelf
Papershelf: arpitbhayani.me/papershelf
Other socials
I keep writing and sharing my practical experience and learnings every day, so if you resonate then follow along. I keep it no fluff.
LinkedIn: / arpitbhayani
Twitter: / arpit_bhayani
Weekly Newsletter: arpit.substack.com
Thank you for watching and supporting! it means a ton.
I am on a mission to bring out the best engineering stories from around the world and make you all fall in
love with engineering. If you resonate with this then follow along, I always keep it no-fluff.
Comments: 78
We can have SNS in place of the limiter and integrate it with SQS. For ordering, we can use SNS FIFO and SQS FIFO. Since SNS and SQS are fully managed services, we can somewhat avoid the rate-limiter concept. We can apply SNS filtering rules to push events to the respective SQS queues based on the filtering rules. Along with this, we can have an individual DLQ per SQS queue so that a worker (AWS Lambda) can check the DLQ and process the messages. This will help reduce latency and cron-job work.
Razorpay engineer here... one thing that was critical for us is ensuring we consume all the events published by the clients. There are two important things we implemented: first, an Outbox pattern on the publishing side, and second, the API layer doesn't write to the DB, since that can become another bottleneck. You can read about the Outbox pattern in another blog we have written; it is a critical component of scaling a microservices architecture. If you're interested, we can come talk about this on your channel too.
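The Outbox pattern mentioned above can be sketched roughly as follows. This is a hedged illustration (table and column names are invented, with SQLite standing in for the real database): the business write and the event record are committed in one local transaction, so a crash can never lose an event, and a separate relay later pushes unpublished rows to the queue.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (id INTEGER PRIMARY KEY, amount INTEGER)")
conn.execute(
    "CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT, published INTEGER DEFAULT 0)"
)

def capture_payment(amount):
    # Single atomic transaction for BOTH the business row and the event row.
    with conn:
        cur = conn.execute("INSERT INTO payments (amount) VALUES (?)", (amount,))
        conn.execute(
            "INSERT INTO outbox (payload) VALUES (?)",
            (f'{{"event": "payment.captured", "payment_id": {cur.lastrowid}}}',),
        )

def relay(publish):
    # The relay polls unpublished rows, publishes them, then marks them done.
    rows = conn.execute("SELECT id, payload FROM outbox WHERE published = 0").fetchall()
    for row_id, payload in rows:
        publish(payload)  # e.g. send to SQS/Kinesis; retried until it succeeds
        with conn:
            conn.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))

sent = []
capture_payment(500)
capture_payment(900)
relay(sent.append)
print(len(sent))  # 2
```

If the relay crashes after publishing but before marking the row, the event is re-published, which is exactly the at-least-once guarantee discussed in this thread (the consumer then deduplicates).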
@AsliEngineering
A year ago
I would love to host you. Although I have never had a guest on my channel, it would be fun to do a deep dive (so long as Razorpay permits) on the design. Let me know once you are comfortable. You can reach out via LinkedIn or Twitter: twitter.com/arpit_bhayani www.linkedin.com/in/arpitbhayani/
@y5it056
A year ago
@@AsliEngineering we can do it officially. Someone will reach out
@vyshnavramesh9305
8 months ago
Does the outbox pattern suit this video's notification system? I understand it suits communicating transactional domain events across microservices, but I can't see how it fits here.
@y5it056
8 months ago
@@vyshnavramesh9305 for us, webhook delivery to the merchant's system is a critical part of the payment flow. We need to guarantee at least once delivery. Hence, we have to ensure that the payment system's events reach the notification platform. From there the notification platform ensures at least once delivery. You can't believe how many messages get lost over the network at this scale.
Explained in such layman's language… never imagined I could understand such a complex architecture in a span of 15-17 minutes. This content is too good to be free… Kudos to you :)
@StingSting844
A year ago
Stop giving ideas for monetization man😂
@ranganathg
5 months ago
@@StingSting844 yeah ;-)
Great stuff. Thanks for teaching these things in simple terms 🎉
Quality content! The rate limiter to mellow out spikes from a single user was a good learning. Thanks for putting out these videos. Love the passion with which you explain things.
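The per-user rate limiter praised above is commonly built as a token bucket. A rough sketch, with illustrative numbers: each merchant gets `capacity` tokens that refill at `rate` tokens per second, so one merchant's burst is smoothed out without throttling anyone else.

```python
import time

class TokenBucket:
    def __init__(self, capacity, rate):
        self.capacity = capacity  # maximum burst size
        self.rate = rate          # tokens refilled per second
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = {}

def allow_request(merchant_id):
    # One independent bucket per merchant (in practice, a shared store like Redis).
    bucket = buckets.setdefault(merchant_id, TokenBucket(capacity=5, rate=1))
    return bucket.allow()

# A burst of 10 requests from one merchant: only the first 5 pass immediately.
results = [allow_request("merchant-42") for _ in range(10)]
print(results.count(True))  # 5
```

Rejected events aren't dropped in the real system; they would be deferred to a lower-priority queue and retried.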
Man hands down this is just so awesome. And you have nailed it!! Thanks a ton
Good content! A couple of suggestions: 1) Justifying the choice of Kinesis over SQS (again) for async writes would help, since cost is an important criterion in the design. 2) Reusability of the "task prioritization" module across new requests and scheduling failed ones for retries: does it make sense to move it to a separate microservice/API?
What a content man !! Hats off
Man, hats off to you. Please don't stop putting up videos like these. You and Gaurav Sen are legends. Nowhere on KZread did I find content similar to you guys 🙂🙂 Keep bringing more system design videos.
Very well explained, please keep posting such videos ❤
#AsliEngineering is happening here .. no need to go anywhere .. Kudos to your content Man! Thanks!
Thanks Arpit 🙏
Excellent Session
I feel the database at the end of the flow can be removed if the information can be reconstructed from the source DBs. Dead-letter queues can be the best option; the scheduler can act on the DLQs.
Awesome explanation, definitely purchasing your course
@AsliEngineering
A year ago
Thank you. Looking forward to it ✨
Hey Arpit, great video! One question though: at about 7:20, you discuss how read load would increase during peaks, but I don't see how the implemented solution addresses this. The solution addresses write load, since we are making asynchronous writes to the DB, but read load would still be high, since workers/executors might need info from the DB to process the event. Please correct me if I am missing something 🙂 Edit: Is it a secondary effect? By reducing write load via async behaviour, are we freeing up more IOPS bandwidth for reads?
@AsliEngineering
A year ago
Yes. It is a secondary effect. You free up IOPS to do reads while async workers are doing staggered writes.
Really loved it.
Great explanation
Thanks for the awesome video. I think sending mail and recording it in the DB is sort of a distributed transaction (there are patterns like the outbox pattern which can solve this problem), and hence it might play a role in the scaling strategy of the system.
@hc90919
A year ago
Do you have any resources for the outbox pattern?
Great video and thanks for making it! A quick question on choosing the DB: is it required to pick a SQL database here?
Thank you 👏
Instead of using Kinesis, a SQL DB, and a scheduler, can we introduce retry SQS queues which would be picked up by the workers?
USP of this channel is Short, meaningful content - no hour-long videos.
@AsliEngineering
5 months ago
Thanks Vanshika 🙌
I see some concerns/doubts with this design:
1. SQS queues ensure at-least-once delivery, not exactly-once, right? Hence they must ensure the notification system handles duplication, else a customer will get a shock if they receive 2 debit notifications for 1 transaction.
2. If their workers are Lambdas, and a huge number of Lambdas are triggered by a huge number of messages in SQS, then if these Lambdas are doing anything else, like calling some service or reading from a DB, I am sure it will throttle that service. How do they handle that, or is it not the case? Because once the Lambdas spin up, there is no way to know how many others are actively calling a downstream, so some control is needed on the event-source side.
3. Since this entire process is asynchronous, is their API also asynchronous? If so, just curious how they make their public APIs asynchronous: is it pub/sub based, polling based, or a webhook kind of thing? After what time does the client retry if the process fails, or do they ensure 100% delivery?
4. How is this scheduler designed? Is it a cron job that goes over the DB once an hour to check for failures? If so, it introduces a lag in retrying. Why can't they use a NoSQL DB like AWS DynamoDB and utilise DynamoDB stream events, which would immediately trigger a Lambda on failure and send the message for retry? Converting to a trigger-based solution can get rid of the latency.
5. Why a SQL DB for just maintaining event status; why not a NoSQL DB like DynamoDB? Is MySQL serverless? Or are they handling the maintenance themselves, which increases the on-call load?
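For point 1 in the comment above: with at-least-once delivery, the consumer must deduplicate. A common approach is an idempotency key per notification; this minimal sketch uses an in-memory set (in practice, a shared store like Redis with a TTL), and the message shape is invented for illustration.

```python
seen_keys = set()
sent = []

def handle(message):
    key = message["idempotency_key"]
    if key in seen_keys:
        return  # duplicate redelivery: skip the side effect
    seen_keys.add(key)
    sent.append(message["body"])  # stand-in for actually sending the alert

# SQS redelivers the same message twice; the user still gets one alert.
msg = {"idempotency_key": "txn-991", "body": "Rs. 500 debited"}
handle(msg)
handle(msg)
print(len(sent))  # 1
```

The key must be derived from the business event (e.g. a transaction ID), not from the queue message ID, since redeliveries get distinct message IDs.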
At times, I find it hard to keep up with the videos :) Can't even fathom how you manage to read, try, and share so much cool engineering stuff outside work. 👏
@AsliEngineering
A year ago
I am realizing this, and hence starting next week I am chopping the frequency to 2 per week :)
@adianimesh
A year ago
@@AsliEngineering Much appreciated :) Please do not reduce the frequency any further, though. Also, please do a little bit on how to be productive outside work: "A day in the life of a normal curious software engineer", pun intended.
@AsliEngineering
A year ago
@@adianimesh Hahaha :) A lot of people have asked for this, but it is very hard for me to record such a video. I just don't want to put out a narcissistic video :D I stay away from anything that holds the potential to distract me :) If by any chance that video gets big traction, I will be tempted to take that route, and hence I typically avoid it. I hope you understand. But yes, the short one-line answer to this is PASSION. I am extremely passionate about the field and have a huge bias for action.
@adianimesh
A year ago
@@AsliEngineering You are an inspiration for me :) Thanks
I have a question. Why have MySQL in the new solution? Can't Kinesis directly plug into the scheduler? (Or is it that the scheduler is persisting the jobs so that if the server is restarted, it can still reschedule lost events?)
I see we could replace SQS with an event bus like Kafka itself; that way it can be used for persistence as well. I don't see the necessity of storing it in a message queue and again in an event bus. Thoughts?
@AsliEngineering
A year ago
Yes, even SQS persists. For this use case, Kafka/SQS would have given similar performance, but Kafka would be costlier.
It's a very common architecture
Excellent topic! A slightly off-topic question: which tool do you use to record/edit videos? Is it Loom?
@AsliEngineering
A year ago
OBS
How will the read load be mitigated by async calls? If data reaches the DB slowly/asynchronously, won't the systems dependent on it again be slowed down? What's the workaround here? DB scaling? Sharding?
@ramnewton8936
A year ago
Yeah, was wondering the same. I think the solution doesn't use async calls for reading; it uses them only for writing. But that said, I still don't get how read performance would improve during peak. The only reasonable explanation that comes to mind is: maybe since write load is reduced due to the async behaviour, the DB might have more IOPS bandwidth for reads. I'm not sure if that explanation is valid though 😅
Why not just acknowledge after a success response for the notification, so you don't have to worry about writing to the database? If it fails, it can be queued again, as a database is very hard to scale, while queueing services like Kafka are highly distributed and scalable.
@satyamshubham6676
A year ago
I think for audit purposes, but that definitely doesn't need to be synchronous. Only the failed ones can be synchronous.
I'm assuming there will be data loss if SQS is ever unavailable. Something like a CDC pipeline may mitigate this issue, even though CDC is typically used for data integration only. Thoughts?
@AsliEngineering
A year ago
Could have used CDC, but with extra filters and edge-case handling. Keeping systems simple is important in the real world.
Why MySQL?
How would IOPS to MySQL reduce by introducing Kinesis? If the write rate to MySQL is less than the produce rate to Kinesis, wouldn't this choke Kinesis?
@AsliEngineering
A year ago
Staggered consumption
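The "staggered consumption" answer above can be sketched as follows. This is a hedged illustration (SQLite stands in for MySQL, and the batch size is arbitrary): instead of one INSERT per event, the Kinesis consumer buffers events and flushes them as one batched write, turning N tiny writes into N/batch_size larger ones, which is what reduces IOPS pressure.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id TEXT, status TEXT)")

class BatchingConsumer:
    def __init__(self, batch_size):
        self.batch_size = batch_size
        self.buffer = []
        self.flushes = 0  # how many DB write round-trips we actually made

    def consume(self, event):
        self.buffer.append(event)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if not self.buffer:
            return
        with conn:  # one write round-trip for the whole batch
            conn.executemany("INSERT INTO events VALUES (?, ?)", self.buffer)
        self.buffer.clear()
        self.flushes += 1

consumer = BatchingConsumer(batch_size=100)
for i in range(1000):
    consumer.consume((f"evt-{i}", "SENT"))
consumer.flush()  # drain any partial batch
print(consumer.flushes)  # 10
```

Kinesis doesn't choke, because the stream retains records (24 hours by default) and absorbs the spike; consumers drain it at their own steady pace.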
Bhaiya, what are the workers that you mentioned here?
@parthpathak5712
A year ago
A worker will pick up (consume) the events, and the executor will write the data/message to the relational DB and obviously push the notification as well.
But won't the reads be impacted when we are using Kinesis (asynchronous writes)?
@AsliEngineering
A year ago
We do not need consistent reads here.
@shantanutripathi
A year ago
@@AsliEngineering Yeah, realized later... but why were they using synchronous writes in the first place? 😅
@AsliEngineering
A year ago
@@shantanutripathi no one thinks about optimization on Day 0. It is all about shipping and getting things done
What about the latency you’re introducing in the system due to kinesis?
@AsliEngineering
A year ago
Why is that a problem?
What tool do you use for drawing architecture? I need it to present in an interview.
@AsliEngineering
A year ago
GoodNotes
Hi, can you please explain how to solve the SES rate-limiting issue at scale? SES has a rate limit for sending emails, something like 14 emails/sec. I have a scenario at a startup where I need to send marketing emails to 50k users exactly on Sunday morning.
@AsliEngineering
A year ago
Talk to Razorpay. It is artificial rate limiting.
@nettemsarath3663
A year ago
How can I send 50k emails on Sunday morning using SES (SES has rate limiting)? How can I achieve it?
I know prioritized queues are used here. Is it that until P0 is consumed, P1 would not be consumed? What if P1 consumption is in progress and the consumer is blocked for some time; how does the system make sure that P0 is still picked up?
@AsliEngineering
3 months ago
They are all consumed in parallel. Just the number of consumers would vary.
@raj_kundalia
3 months ago
@@AsliEngineering But then Kinesis is still a queue, right? How does the system make sure that the important ones go before anything else?
@AsliEngineering
3 months ago
@@raj_kundalia different priorities are different topics.
@raj_kundalia
3 months ago
@@AsliEngineering makes sense, thank you for replying. Big fan and learning every day from you :)
@asliengineering - aren't there consumers on Kinesis which are actually writing to the DB? How is Kinesis able to write to the DB directly? Are there going to be any Lambda function triggers?
@AsliEngineering
A year ago
There are consumers consuming and writing to db.
@hc90919
A year ago
@@AsliEngineering - Got it. Are the consumers going to be services on physical servers, or cron-job programs running every night?
Video starts @2:38
How can a single consumer consume events from 3 SQS queues?
@AsliEngineering
A year ago
Multi-threading.