How Grab configured their data layer to handle multi-million database transactions a day!
Science & Technology
System Design for SDE-2 and above: arpitbhayani.me/masterclass
System Design for Beginners: arpitbhayani.me/sys-design
Redis Internals: arpitbhayani.me/redis
Build Your Own Redis / DNS / BitTorrent / SQLite - with CodeCrafters.
Sign up and get 40% off - app.codecrafters.io/join?via=...
In the video, I discussed how Grab manages millions of food and Mart orders daily, focusing on the critical database infrastructure. I explored the high-level architecture of Grab's order platform, emphasizing high availability, stability, and performance at scale. Additionally, I introduced a system design course with a practical approach for engineers to learn real-world system building. The key points covered Grab's design goals of stability, cost-effectiveness, and consistency, along with the architecture of transactional and analytical databases using DynamoDB and MySQL.
Recommended videos and playlists
If you liked this video, you will find the following videos and playlists helpful
System Design: • PostgreSQL connection ...
Designing Microservices: • Advantages of adopting...
Database Engineering: • How nested loop, hash,...
Concurrency In-depth: • How to write efficient...
Research paper dissections: • The Google File System...
Outage Dissections: • Dissecting GitHub Outa...
Hash Table Internals: • Internal Structure of ...
Bittorrent Internals: • Introduction to BitTor...
Things you will find amusing
Knowledge Base: arpitbhayani.me/knowledge-base
Bookshelf: arpitbhayani.me/bookshelf
Papershelf: arpitbhayani.me/papershelf
Other socials
I keep writing and sharing my practical experience and learnings every day, so if you resonate then follow along. I keep it no fluff.
LinkedIn: / arpitbhayani
Twitter: / arpit_bhayani
Weekly Newsletter: arpit.substack.com
Thank you for watching and supporting! It means a ton.
I am on a mission to bring out the best engineering stories from around the world and make you all fall in love with engineering. If you resonate with this, then follow along. I always keep it no-fluff.
Comments: 83
Falling in love with your content, Arpit. Breaking complex topics into smaller chunks and explaining them like you would to a beginner is your strength. Keep bringing such content!
Seriously, your videos are so informative for a software engineer; they are a real gold mine for us. Please continue making such amazing videos.
Feeling proud of myself after watching this. I implemented a similar thing a few days back on DynamoDB; I found the DynamoDB docs helpful, where they suggested using a global secondary index for filtering.
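The "lean GSI" pattern from the video can be sketched in plain Python. This is a toy model, not Grab's actual schema: the base table is keyed by order_id, and a secondary index (standing in for a DynamoDB GSI) maps user_id to ongoing order ids only; completed orders are dropped from the index so it stays small. All names here are illustrative.

```python
orders = {}           # base table: order_id -> order item
ongoing_by_user = {}  # "GSI": user_id -> set of ongoing order_ids

def place_order(order_id, user_id):
    orders[order_id] = {"order_id": order_id, "user_id": user_id, "state": "ONGOING"}
    ongoing_by_user.setdefault(user_id, set()).add(order_id)

def complete_order(order_id):
    order = orders[order_id]
    order["state"] = "COMPLETED"
    # remove the entry from the index so it only ever covers ongoing orders
    ongoing_by_user[order["user_id"]].discard(order_id)

def ongoing_orders(user_id):
    # query the index, then fetch the full items from the base table
    return [orders[oid] for oid in ongoing_by_user.get(user_id, set())]

place_order("o1", "u42")
place_order("o2", "u42")
complete_order("o1")
print([o["order_id"] for o in ongoing_orders("u42")])  # ['o2']
```

Keeping the index restricted to ongoing orders is what makes the "fetch my current orders" query cheap regardless of how much order history accumulates.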
Thanks Arpit for explaining such a brilliant architecture!!
Awesome from Grab! Now I can use the same concept in an interview and say this will work.
Amazing Arpit. Easy and powerful! Thank you!
Brilliant architecture. Thanks for explaining.
Liking this video halfway through. Just a brilliant explanation. Thanks!
Simple and great explanation! Thanks
Fantastic design by Grab.. really loved it.. and most importantly thank you for presenting it in such a simplified way. Love your content
@AsliEngineering
6 months ago
Glad you found it interesting and helpful 🙌
Amazing analysis... really enjoyed it 😃
Implementing GSI and updating the data in OLAP was amazing.
Thank you so much for this Banger on 1st, 2023.❤
One question, Arpit: since the GSI is eventually consistent, would we get a consistent view of ongoing orders at any point in time?
Awesome explanation of the concept.
Great explanation Arpit💯
Thanks very much for the video. This is really helpful in understanding how Grab can handle both large and spiky requests coming during rush hour. I just wonder in our company case, we also need to use the historical data to validate the promotion of customers based on their order history. For example, one promotion is only applicable for first time customer (simplest case). In that case, do we need to use the analytical data to calculate this?
Seems exciting
Nice explanation! But how does the order service write to the database and to Kafka? Is it async or sync for both?
Where will the query go if the user wants to see their last 10 or n orders? The analytics DB, since they have removed entries from the GSI on DDB?
Great video. What if the DLQ on AWS is down?
DynamoDB is eventually consistent and not strongly consistent, right? If they needed strong consistency, they would want to shift to Postgres or something, right?
Amazing content. Quick question: updating the table based on timestamp will not be reliable, right? In a distributed system we cannot rely on system clocks for ordering messages.
@dhruvagarwal7854
1 year ago
Can you give an example where this scenario could occur? Unable to understand this.
AFAIK, if we are not able to process an SQS message, it goes to the DLQ. If SQS is down, the DLQ will also be down and we will not be able to publish the message there. Great video btw!
@architbhatiacodes
1 year ago
I read the blog and they mention the same: "When the producer fails, we will store the message in an Amazon Simple Queue Service (SQS) and retry. If the retry also fails, it will be moved to the SQS dead letter queue (DLQ), to be consumed at a later time." So I think we will not be able to do anything if both Kafka and SQS are down (it might be a very rare event though).
Please make a video on what kind of database to use in which situation. That would be very helpful, thanks!
@AsliEngineering
1 year ago
I cover that in my course, hence cannot put out a video on it. I hope you understand the conflict, but thanks for suggesting.
Thanks for the content, Arpit. Have a small doubt: how are we handling the huge spikes? Does DynamoDB hot-key partitioning + the lean GSI index do the job? I am assuming the peak duration will last for some time, since order delivery isn't done in just a 10-minute window, so at peak time even the index would start piling up. Would you say using DynamoDB is the cost-effective solution here? (I am assuming the team wanted a cloud-native solution, and cost-effectiveness involved calculating the maintenance cost of an in-house solution.)
@AsliEngineering
1 year ago
DDB is cost-effective as well as great at balancing the load. It can easily handle a huge load given how it limits queries to a near key-value use case. Also, indexing will not be hefty because the index is lean.
A small doubt: user_id_gsi is stored with user_id when placing an order. What if there are two orders from the same user (maybe from different devices, or even the same one)? Won't the GSI have two duplicate entries even if the orders are different?
@imdsk28
1 year ago
We can have multiple orders for the same index; it's not a primary key, it's just there to index the data. When you fetch the ongoing orders of that user, you need to fetch both orders, and with this index it can fetch those two quickly.
@syedaqib2912
1 year ago
Order IDs are always unique for each user.
Hi Arpit, one quick question about upserts: 1. Is the operation type (insert vs. update) always decided based on whether the primary key already exists in the database? 2. When doing an upsert, do we always need to provide all mandatory parameters in the SQL query?
@dhruvagarwal7854
1 year ago
1. You can define the update field(s) in your query or ORM layer; it doesn't need to be the primary key. 2. While doing an upsert, always provide ALL the parameters, even those that are not mandatory but have a non-null value, because every field will be overwritten.
@gauravraj2604
1 year ago
@@dhruvagarwal7854 Hi Dhruv, thank you for clarifying. So if I understood correctly, an upsert can be keyed on any field, but that field should be unique in nature. Is this correct? On the 2nd point I am clear now; it makes sense to provide all parameters, as every parameter is going to be overwritten and we don't want any parameter to be lost.
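The upsert behaviour discussed in this thread can be demonstrated with SQLite's INSERT ... ON CONFLICT (available in SQLite >= 3.24, bundled with modern Pythons). The conflict target can be any column with a UNIQUE constraint, not only the primary key; the table and column names below are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        order_ref TEXT UNIQUE,   -- uniqueness is what makes the upsert possible
        status    TEXT,
        amount    REAL
    )
""")

def upsert_order(order_ref, status, amount):
    # 'excluded' refers to the row that failed to insert; every listed field
    # is overwritten, which is why all parameters should be supplied.
    conn.execute(
        """INSERT INTO orders (order_ref, status, amount) VALUES (?, ?, ?)
           ON CONFLICT(order_ref) DO UPDATE
           SET status = excluded.status, amount = excluded.amount""",
        (order_ref, status, amount),
    )

upsert_order("GRAB-1", "CREATED", 12.5)    # first call inserts
upsert_order("GRAB-1", "DELIVERED", 12.5)  # second call updates in place
print(conn.execute("SELECT status FROM orders WHERE order_ref='GRAB-1'").fetchone())
```

Note how the second call does not create a second row: the UNIQUE constraint on order_ref is what decides insert vs. update.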
One pain point of DynamoDB is handling pagination during filter and search. It skips records that don't match the query criteria, so we have to run a recursive loop to fill the page limit. Example: you run a query on 1000 records with page limit 10, filtering for inactive users, and there are only 2 inactive records, one in the first row and one at the 1000th row. Now you have to run 100 query iterations just to get those 2 records. As per my understanding this is the biggest disaster in DynamoDB. Does anyone have any solutions here? Hi Arpit, have you come across this limitation in DynamoDB?
@AsliEngineering
1 year ago
Yes. DDB is not meant for such queries, hence should not be used for such cases, unless you can manipulate indexes (LSI and GSI).
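The pitfall in this thread can be simulated in plain Python: DynamoDB applies the page Limit before the FilterExpression, so a sparse filter forces many round trips. The record layout below is hypothetical; the point is only the order of operations.

```python
# 1000 records, only the first and last match the filter.
records = [{"id": i, "status": "active"} for i in range(1000)]
records[0]["status"] = "inactive"
records[999]["status"] = "inactive"

def scan_page(start, limit, predicate):
    """Read `limit` rows, THEN filter -- mimicking DynamoDB's Limit-before-filter order."""
    page = records[start:start + limit]
    matches = [r for r in page if predicate(r)]
    last_key = start + len(page) if start + len(page) < len(records) else None
    return matches, last_key

def scan_all(limit, predicate):
    results, start, round_trips = [], 0, 0
    while start is not None:
        matches, start = scan_page(start, limit, predicate)
        results.extend(matches)
        round_trips += 1
    return results, round_trips

found, trips = scan_all(10, lambda r: r["status"] == "inactive")
print(len(found), trips)  # 2 matches, 100 round trips
```

This is why the reply above suggests reshaping the access pattern with an LSI/GSI instead: an index on status would let the 2 matching rows be read directly, without paging through the other 998.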
How would they know, just by looking at the earlier timestamp, that it's not the latest one? (1. A newer update hasn't come yet. 2. Would they query the transactional DB just for that?)
@AsliEngineering
1 year ago
No. You just discard any update you receive with an older timestamp. No need to query anything.
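The discard rule in the reply above is a last-write-wins check: the consumer compares the incoming timestamp against the stored row's timestamp and drops anything older. A minimal sketch, with illustrative field names:

```python
analytical_db = {}  # order_id -> latest known row

def apply_update(update):
    current = analytical_db.get(update["order_id"])
    if current is not None and update["updated_at"] <= current["updated_at"]:
        return False  # stale update: discard; no lookup of the source DB needed
    analytical_db[update["order_id"]] = update
    return True

# Updates arriving out of order: the newer one lands first.
apply_update({"order_id": "o1", "state": "DELIVERED", "updated_at": 200})
applied = apply_update({"order_id": "o1", "state": "PICKED_UP", "updated_at": 100})
print(applied, analytical_db["o1"]["state"])  # False DELIVERED
```

Note the decision needs only the row already sitting in the analytical store, which is why no extra query against the transactional DB is required.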
Just a small doubt. As transactional queries are handled synchronously, won't there be an issue while handling a huge number (millions) of synchronous writes to the DB during peak traffic hours? There is a possibility that the DB servers choke while handling them, right? BTW, loving your videos, great content!
@imdsk28
1 year ago
I believe this will be handled by DynamoDB itself, by creating partitions internally when the keys are hot.
@AsliEngineering
1 year ago
Yes, they would slow down, hence consumers will slow their consumption.
@yuvarajyuvi9691
1 year ago
@@AsliEngineering For transactional queries we will be hitting the DB server directly, right, as they need to be synchronous? You said there won't be any messaging queue involved in such cases. Correct me if I am wrong.
@yuvarajyuvi9691
1 year ago
@@imdsk28 Is it true for RDS as well? I don't think so, but not sure.
@imdsk28
1 year ago
@@yuvarajyuvi9691 not completely sure… need to deep dive
One question: if they are using SQS with a DLQ (100% SLA guaranteed by AWS) for data ingestion, what could be the reason for using Kafka in the first place? Why can't they just use SQS (with a DLQ) only?
@Bluesky-rn1mc
1 year ago
Could be due to cost. Kafka is open source.
@imdsk28
1 year ago
I think Kafka maintains order compared to a queue, and only when Kafka is down do we use SQS. SQS doesn't maintain order; that's why we have two edge cases to handle: upserts, and updates with fresh timestamps.
@AsliEngineering
1 year ago
SQS maintains order, but Kafka provides higher write and read throughput.
@imdsk28
1 year ago
@@AsliEngineering Thanks for correcting… the architecture explanation is great.
@SaketAnandPage
2 months ago
There are standard and FIFO queues in SQS; standard queues have high throughput. Also, the DLQ is not meant as a fallback for the primary queue; rather, it is for the consumer: if a message fails to be consumed successfully X number of times, it is moved to the DLQ. I think the explanation of what happens if SQS is down is not correct.
No practical application for a demo?
Thanks for the informative video, Arpit. I have a doubt about handling out-of-order messages at the end of your video. While deciding that update #1 should be processed before update #2 using the timestamp, how does the consumer know that one message is older than the other, since consumer #1 may have update #1 and consumer #2 may have update #2? I would think of versioning the data and making sure the data we update is the next available version of the one present in the analytical database. Is this approach correct?
@srinish1993
1 year ago
I got a similar question in a recent interview: consider an order management system with multiple instances of the same order service responsible for handling order updates (on a MySQL DB). Now three updates u1, u2, and u3 arrive in sequence. How do we ensure the updates are applied in the same sequential order? Any thoughts?
@javeedbasha6088
1 year ago
@@srinish1993 Maybe we can send the timestamp along with the data, so that when it is processed by the consumer, it can build an SQL query to update the data WHERE updated_at < data.timestamp AND id = data.order_id.
@prajwalsingh7712
1 year ago
@@javeedbasha6088 Yeah, correct. It's usually good practice to send an event_timestamp field in the Kafka messages to decide the order of messages.
@javeedbasha6088
1 year ago
After some reading, I discovered that updating data based on timestamps is not a reliable method, since there are inconsistencies in system clocks. A more effective approach is to use end-to-end partitioning. In this method, all messages with a specific partition key are written to the same Kafka partition. As long as the producer sends these messages to Kafka in order, Kafka will maintain their order within the partition, although ordering is not maintained across different partitions. That partition is then consumed by only a single consumer instance, ensuring that related events are processed by the same consumer. For example, suppose we have two messages: t1 (create profile 'A') and t2 (update profile 'A'). The same consumer will receive and process t1 and t2 sequentially. This approach ensures that order is maintained in an event-driven architecture, and it can also handle concurrency.
@ankuragarwal9712
9 months ago
@@javeedbasha6088 What about comparing the version of the order document? If we keep a version, we can avoid the need for Kafka; simple SQS will work.
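The end-to-end partitioning idea discussed in this thread rests on one property: messages with the same key always hash to the same partition, so per-key order is preserved even though ordering across partitions is not. A toy illustration (Kafka's default partitioner uses murmur2; crc32 here is just a stand-in, and the keys are made up):

```python
import zlib

NUM_PARTITIONS = 3
partitions = {p: [] for p in range(NUM_PARTITIONS)}

def produce(key, value):
    # Deterministic key -> partition mapping: same key, same partition, always.
    p = zlib.crc32(key.encode()) % NUM_PARTITIONS
    partitions[p].append((key, value))
    return p

p1 = produce("profile-A", "create")
p2 = produce("profile-A", "update")
produce("profile-B", "create")

# Both events for profile-A land in the same partition, in produced order,
# so the single consumer of that partition sees create before update.
print(p1 == p2, [v for k, v in partitions[p1] if k == "profile-A"])
```

Since each partition is consumed by exactly one consumer instance in a group, this gives ordered processing per key without relying on wall-clock timestamps.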
The analytics DB can be write-only, right?
@AsliEngineering
1 year ago
What's the use of write if we never read?
Why both SQS and Kafka? Why not only SQS (with a DLQ) for high availability?
@AsliEngineering
5 months ago
Because the use case was a message stream, not a message queue; hence Kafka was preferred. SQS is just a fallback to ensure no loss of events.
What if we use CDC (Dynamo Streams) to achieve the same?
@ankuragarwal9712
9 months ago
instead of the order service pushing the data to Kafka?
There are standard and FIFO queues in SQS; standard queues have high throughput. Also, the DLQ is not meant as a fallback for the primary queue; rather, it is for the consumer: if a message fails to be consumed successfully X number of times, it is moved to the DLQ. I think the explanation of what happens if SQS is down is not correct.
@swaroopas5207
2 months ago
Yes, you're correct: if the consumer is unable to consume even after retries, the message moves to the DLQ.
Yo. What if, instead of timestamp differences, we just use versioning + counters?
@AsliEngineering
1 year ago
Maintaining versions / vector clocks is a pain and in most cases overkill. Timestamps work well for the majority of workloads.
One question on the last part: if two updates arrive out of order, you mentioned we can use "updatedTimestamp" to discard the older ones. But what should we do in the following scenario? Update 1 (newer timestamp) changes field1 and field2; Update 2 (older timestamp) changes field1 and field3. In this scenario, discarding Update 2 would be the wrong thing to do, right? Because then we lose the update made to field3.
@AsliEngineering
1 year ago
The replication setup is row-based replication, not statement-based.
@itsrahulraj
1 year ago
What I meant to say is: Update 1 (newer timestamp) updates fields "field1" and "field2" of "row1"; Update 2 (older timestamp) updates fields "field1" and "field3" of "row1". So both updates try to update the same row, "row1". Since these updates touch multiple fields, how should we handle it, given we don't want to miss the "field3" update? Am I missing something here? Thanks for your time clarifying my doubts :)
@AsliEngineering
1 year ago
@@itsrahulraj DB Transactions solve exactly this.
@itsrahulraj
1 year ago
@Asli Engineering by Arpit Bhayani Thanks, it makes sense.
@varshakancham5944
1 year ago
Can you please elaborate, @AsliEngineering? I didn't get how DB transactions will handle this.
a = "Guru"; b = "Guru"; c = input() -> Guru. Why does c get a different id while a and b share the same id? 🤔 "guru".capitalize() -> Guru also gets a different id. Why do function calls create new ids instead of pointing to the already allocated address? 🙄
Generally I spend my weekends learning something new, and your content helped me a lot. Thanks Arpit sir 🫡 🔥 I was totally amazed by the transactional DB part 😎