Science & Technology
Google I/O 2009 - Transactions Across Datacenters (and Other Weekend Projects)
Ryan Barrett
-- Contents --
0:55 - Background quotes
2:30 - Introduction: multihoming for read/write structured storage
5:12 - Three types of consistency: weak, eventual, strong
10:00 - Transactions: definition, background
12:22 - Why multihome? Why try to do anything across multiple datacenters?
15:30 - Why not multihome?
17:45 - Three kinds of multihoming: none, some, full
27:35 - Multihoming techniques and how to evaluate them
28:30 - Technique #1: Backups
31:39 - Technique #2: Master/slave replication
35:42 - Technique #3: Multi-master replication
39:30 - Technique #4: Two phase commit
43:53 - Technique #5: Paxos
49:35 - Conclusion: no silver bullet. Embrace the tradeoffs!
52:15 - Questions
-- End --
If you work on distributed systems, you try to design your system to keep running if any single machine fails. If you're ambitious, you might extend this to entire racks, or even more inconvenient sets of machines. However, what if your entire datacenter falls off the face of the earth? This talk will examine how current large scale storage systems handle fault tolerance and consistency, with a particular focus on the App Engine datastore. We'll cover techniques such as replication, sharding, two phase commit, and consensus protocols (e.g. Paxos), then explore how they can be applied across datacenters.
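The description mentions two phase commit among the techniques covered. As a minimal sketch of that idea (the `Participant` class and its `prepare`/`commit`/`abort` methods are illustrative, not the App Engine datastore's actual API; a real coordinator would also log its decision durably and handle timeouts):

```python
def two_phase_commit(participants, txn):
    """Toy two-phase commit coordinator.

    Phase 1: ask every participant to prepare (vote yes/no).
    Phase 2: commit everywhere only if all voted yes,
    otherwise abort everywhere.
    """
    if all(p.prepare(txn) for p in participants):
        for p in participants:
            p.commit(txn)
        return True
    for p in participants:
        p.abort(txn)
    return False

class Participant:
    """Hypothetical participant that votes as configured."""
    def __init__(self, vote):
        self.vote = vote
        self.state = None
    def prepare(self, txn):
        return self.vote
    def commit(self, txn):
        self.state = "committed"
    def abort(self, txn):
        self.state = "aborted"

print(two_phase_commit([Participant(True), Participant(True)], "t1"))   # True
print(two_phase_commit([Participant(True), Participant(False)], "t2"))  # False
```

The blocking nature of this protocol (everyone waits for the coordinator's decision) is exactly the availability cost the talk weighs against Paxos.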
For presentation slides and all I/O sessions, please go to: code.google.com/events/io/sessions.html
Comments: 24
Watching this in 2023 to prepare for System Design interviews
@tunganalokesh6329
2 months ago
😂😂 But I forgot designing anyway, all the best.
Super awesome presentation. The presenter is also so knowledgeable, and actually inspires you to step up, get out of your shell, and enjoy architecting and solving distributed problems. Hats off.
Only Google complains about how slow the speed of light is...
needs to sort his sleeves out
I loved the discussion. Very informative about the datastore in particular and databases in general.
That Paxos joke by Mike Burrows was proved wrong by Raft, right?
What about CRDTs, ignoring the eventual consistency?
Can someone explain to me the queues that Ryan mentioned for the roundtrip? What queues are those?
@KeshawPandeyDEV
2 years ago
In simple terms: when IP packets arrive at a router, it has to decide which way they should be sent. Sometimes the router receives more packets than it can process, so it puts them in a queue (memory) and comes back to them later. This adds to the delay (on top of the speed-of-light delay) before the packet reaches its destination.
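The queueing effect described above can be sketched with a toy FIFO model (the function name and the single fixed service time per packet are illustrative assumptions, not how real routers are specified):

```python
def queueing_delay(arrival_times, service_time):
    """Per-packet delay through a single FIFO queue.

    Each packet takes `service_time` to forward; a packet that
    arrives while the router is busy waits for everything ahead
    of it, so bursts inflate delay beyond the propagation time.
    """
    free_at = 0.0  # time at which the router next becomes idle
    delays = []
    for t in sorted(arrival_times):
        start = max(t, free_at)          # wait if the router is busy
        free_at = start + service_time   # router busy until then
        delays.append(free_at - t)       # total time spent at this hop
    return delays

# Five packets arriving in one burst vs. evenly spaced:
print(queueing_delay([0, 0, 0, 0, 0], 1.0))  # [1.0, 2.0, 3.0, 4.0, 5.0]
print(queueing_delay([0, 2, 4, 6, 8], 1.0))  # [1.0, 1.0, 1.0, 1.0, 1.0]
```

The burst case shows why queueing adds variable latency on top of the fixed speed-of-light floor Ryan talks about.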
Has anything changed in the last 10 years? I guess not. The speed of light is the same and still causing the problem.
Nice 👍🏼
In the backup technique, why do we have weak consistency? We could read from the logs (instead of the datastore) and simulate the complete behaviour.
@responsive_random
5 years ago
Then your logs become the slaves that serve read operations. You no longer have a backup; instead, you have an M/S architecture.
@freeman-uq8xr
3 years ago
Good Q and good A.
Paxos reminds me of blockchain in retrospect
@voidpointer398
A year ago
Yeah, the concept of blockchain struck my mind as soon as I heard how Paxos works.
@kap1840
A year ago
Makes sense. Blockchain is a distributed, decentralized network. Consensus algorithms have many use cases; blockchain is just one of them.
Watching in 2023 😅😅
27:33 How they did it (start)
I can't see the problem with writing the instruction to multiple datacenters using parallel AJAX. The code on the servers in each datacenter can perform the instruction to keep them in sync. Even if you have 10 datacenters, you could just pick 3 to read from and write to (depending on the user's location), and the remaining 7 will eventually get copies of the data. Doesn't this get a solid green bar?
So few comments...
Thank you very much for this video. But I feel I might still keep using AWS for a while after watching the question sections. You guys might be very smart and your tool/platform might be very amazing, but I feel it's not yet as polished as AWS.