BLAKE3 and bao deep dive

Science & Technology

00:00 BLAKE3 hash function
01:55 how BLAKE3 works
04:52 BLAKE3 chunking examples
08:28 bao verified streaming
11:13 bao inline & outboard
11:52 bao encoding examples
15:56 outboard encoding
17:47 slice encoding
20:48 chunk groups
23:58 size proofs
27:01 size proofs and chunk groups
28:23 append-only data & keeping post-order traversal
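
Since the chapters above walk through chunking and tree construction, here is a minimal companion sketch using the real `blake3` Rust crate (the crate and its one-shot API are real; the input size and the leaf/parent arithmetic are illustrative): BLAKE3 splits its input into 1024-byte chunks that become the leaves of a binary Merkle tree, which is what bao's verified streaming builds on.

```rust
// Hash some data with the blake3 crate and count the 1024-byte chunks
// that become leaves of BLAKE3's internal Merkle tree.
// Cargo.toml: blake3 = "1"

fn main() {
    let data = vec![0u8; 4 * 1024 + 100]; // 5 chunks: 4 full + 1 partial

    // One-shot hash; internally this builds the chunk tree.
    let hash = blake3::hash(&data);
    println!("root hash: {}", hash.to_hex());

    // The BLAKE3 chunk size is fixed at 1024 bytes.
    const CHUNK_SIZE: usize = 1024;
    let num_chunks = data.len().div_ceil(CHUNK_SIZE).max(1);
    // A binary tree with n leaves has n - 1 parent nodes.
    let num_parents = num_chunks - 1;
    println!("chunks: {num_chunks}, parent nodes: {num_parents}");
}
```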

Comments: 8

  • @daniel2color · 1 year ago

    Incredibly well explained! Thanks, Rüdiger 💡🙏 Compared to the chunking process of a typical UnixFS file, this seems much more elegant and efficient. Things I particularly liked:
    - Keeping the outboard encoding of the Merkle tree around as a separate file takes less space than UnixFS in a .CAR file.
    - Being able to tune the size of the Merkle tree with chunk groups that trade off computation for a smaller tree.
    - Streaming verification! Looking forward to learning more about how it's integrated into Iroh.

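To put a rough number on the space point above: an outboard encoding stores only the Merkle tree's parent nodes, each a pair of 32-byte child hashes, in a separate file next to the untouched original. A back-of-the-envelope sketch (the 64-bytes-per-parent figure follows from BLAKE3's 32-byte hashes; the file and group sizes are illustrative, and bao's small length header is ignored):

```rust
// Approximate outboard-encoding size: a binary tree over the leaves
// (chunks or chunk groups) has n - 1 parents, each storing two
// 32-byte child hashes. Ignores bao's 8-byte length header.

fn outboard_size(file_size: u64, group_size: u64) -> u64 {
    let leaves = file_size.div_ceil(group_size).max(1);
    (leaves - 1) * 64
}

fn main() {
    let file: u64 = 1 << 30; // 1 GiB
    for group in [1024u64, 16 * 1024] {
        let ob = outboard_size(file, group);
        println!(
            "group {:>5} B -> outboard {:>8} B ({:.2}% of the file)",
            group,
            ob,
            100.0 * ob as f64 / file as f64
        );
    }
}
```

At the default 1024-byte chunks the outboard file is about 6.25% of the data; 16 KiB chunk groups bring that down to roughly 0.4%.
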
  • @oconnor663 · 1 year ago

    Fabulous talk! It makes me so happy to see Bao getting some real-world use :)

  • @n0computer · 1 month ago

    it is soooo good. Thank you for your work on both bao AND BLAKE3!

  • @kickeddroid · 1 year ago

    Wonderfully explained, great job!

  • @ShawnMorel · 3 months ago

    fantastic presentation. At 23 min, I think the tradeoff of chunk groups isn't well explained. If the point of verified streaming is to verify the content, you'd be re-computing the chunk hashes regardless. The tradeoff seems to be that with chunk groups, you need to wait to receive n chunks before you can verify they're correct, as opposed to being able to verify each 1024-byte chunk as it arrives.

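A sketch of the tradeoff this comment describes: with groups of n chunks, the receiver must buffer a whole group before the first hash comparison can run, whereas ungrouped bao verifies every 1024-byte chunk as it arrives. Purely illustrative arithmetic:

```rust
// Bytes a receiver must buffer before it can run the first hash
// check, as a function of chunk-group size. Illustrative only.

const CHUNK: u64 = 1024; // BLAKE3's fixed chunk size in bytes

fn main() {
    for chunks_per_group in [1u64, 4, 16, 64] {
        println!(
            "group of {:>2} chunks: first verification after {:>6} bytes",
            chunks_per_group,
            chunks_per_group * CHUNK
        );
    }
}
```
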
  • @markg5891 · 1 month ago

    +1 to this comment! I noticed that too. Chunk grouping + streaming is only "free" (as in fast to compute) if you have all the chunks in a given group. In some streaming situations (for example, just downloading a whole file) this might be perfectly sensible. However, for seeking in a file, like in a movie, you'd need all the chunks within a group before you can verify them. So grouping is going to add some bandwidth overhead here, where "some" gets bigger as the group gets larger. Tradeoffs, I suppose ;)

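The seek overhead mentioned here can also be made concrete: to verify any byte, the whole group containing it has to be fetched and hashed, so an unaligned read pays for up to one extra group of bandwidth. A hedged sketch (the request and group sizes are illustrative):

```rust
// Worst-case bytes fetched for a small verified read when data is
// grouped: an unaligned read of `len` bytes can straddle up to
// (len - 1) / group + 2 groups, each fetched in full. Illustrative.

fn worst_case_fetch(len: u64, group: u64) -> u64 {
    ((len - 1) / group + 2) * group
}

fn main() {
    let len: u64 = 4 * 1024; // e.g. a 4 KiB read while seeking in a movie
    for group in [1024u64, 16 * 1024, 256 * 1024] {
        println!(
            "group {:>6} B: fetch up to {:>6} B to verify a {} B read",
            group,
            worst_case_fetch(len, group),
            len
        );
    }
}
```
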
  • @edbertkwesi4931 · 10 months ago

    when are you guys coming back? miss your youtube reviews and meetings

  • @headshock1111 · 10 months ago

    based and tree-pilled
