Confluent

Confluent, founded by the creators of Apache Kafka®, enables organizations to harness the business value of live data. The Confluent Platform manages the barrage of stream data and makes it available throughout an organization. It provides industries ranging from retail, logistics, and manufacturing to financial services and online social networking with a scalable, unified, real-time data pipeline that enables applications ranging from large-volume data integration to big data analysis with Hadoop to real-time stream processing. Backed by Benchmark, Data Collective, Index Ventures, and LinkedIn, Confluent is based in Mountain View, California. To learn more, please visit www.confluent.io.

Comments

  • @jairajsahgal7101
    13 hours ago

    Thank you

  • @marcialabrahantes3369
    17 hours ago

    Curious what you meant by "protobuf is not human readable like JSON" in the context of renaming a field? In my experience, any rename in Protobuf is a breaking change, since all your clients will need to be updated, so most changes are made to be backwards compatible in the ways you mentioned (a new field is added, the existing field gets ignored going forward). So I'm not sure what the contrasting argument is.

  • @rvb3939
    22 hours ago

    Hi Wade, great job! Thank you for the high-quality video and explanation. I believe you've nailed this topic and use-case. Can't wait to see your next videos. Cheers, Roberto

  • @QuanVuHongVN
    1 day ago

    Is there any difference between the Apache Flink watermark and Apache Beam watermark mechanisms, or are they the same?

  • @sorvex9
    1 day ago

    Stop making so many god damn services dude, maybe then your life would be easier

  • @JitenPalaparthi
    2 days ago

    What is the device that lets you write like that? Is it just a camera, or...

  • @ConfluentDevXTeam
    19 hours ago

    A lightboard!

  • @davidk7212
    2 days ago

    You should do a show called "Burglin' with Tim Berglund", where you discuss the tools and tactics used by modern burglars to successfully commit burglaries.

  • @user-dj4rw3ck7v
    2 days ago

    Great work

  • @flosrv3194
    4 days ago

    No way to make what you do, sir. I would like to see one day a tutorial where nothing is hidden and all is shown clearly. Well, I have time to die three times before it happens...

    ImportError                               Traceback (most recent call last)
    Cell In[7], line 3
          1 import logging
          2 import sys, requests
    ----> 3 from config import config
    ImportError: cannot import name 'config' from 'config' (c:\Users\flosr\Engineering\Data Engineering\KZread API Project\config.py)

  • @abdirahmanburyar
    4 days ago

    That was a great and interesting topic; glad to have it.

  • @awkomo
    4 days ago

    What is in the configuration file "ca.cnf"?

  • @ConfluentDevXTeam
    4 days ago

    Here is a link to the GitHub repo that goes along with the course. It has a sample ca.cnf, but if you are using Kafka in a production environment you'll want to use a trusted certificate authority rather than the self-signed certificate that was used in this course: github.com/confluentinc/learn-kafka-courses/blob/main/fund-kafka-security/ca.cnf. You can find more instructions on how things are set up by following along with the GitHub repo and the guide that goes along with this video: developer.confluent.io/courses/security/hands-on-setting-up-encryption/

  • @desmontandolaweb
    4 days ago

    How and where can I get that shirt???

  • @vermoidvermoid7124
    4 days ago

    Does the key need to be partition key?

  • @mattpopovich
    5 days ago

    Apache Kafka 101: Introduction (2023) . . Date posted: Nov 23, 2020

  • @fb-gu2er
    5 days ago

    Durability is loosely defined. A durable record doesn’t disappear after you read it at some point

  • @KexiHuang
    5 days ago

    Clear explanation of the code design thinking, as well as step-by-step instructions on how to write the code, how to run it, and how to check that everything is working the right way.

  • @ConfluentDevXTeam
    5 days ago

    Wade here. Glad you enjoyed the video.

  • @user-ev9jg6ts6e
    6 days ago

    In my opinion coupling is the root of all evil and it can be reduced by building the right logical boundaries across modules. And if those boundaries are built modules can be easily moved to separate microservices when necessary.

  • @ConfluentDevXTeam
    6 days ago

    Wade here. Totally agree. Coupling sucks. To me, event-driven microservices are all about reducing coupling. It's all about finding ways to eliminate coupling in the database, coupling in the code, coupling in the APIs etc. And yes, you can absolutely do that inside of a monolith (or at least a lot of it), but Microservices kind of force you to think that way. I like when the architecture mirrors my goals.

  • @user-ev9jg6ts6e
    6 days ago

    @@ConfluentDevXTeam Indeed, microservices force you to think that way. Well said!

  • @eugene5096
    6 days ago

    Hey Wade, thanks for a wonderful video. As a developer, if I were starting a new project these days, I'd see vertically sliced architecture with endpoints for each slice as a very good candidate. It's a monolith in the beginning, but you can decouple it at any point in time. What do you think?

  • @ConfluentDevXTeam
    6 days ago

    Wade here. Some very smart people recommend starting with a monolith and evolving to microservices later. I think that's a good approach, if you have the discipline to manage it. The problem is that often, by the time you reach the point where you decide you want to build microservices, you've had so many different developers working on the project and it's become such a mess, that it is very difficult to extract microservices from it. If I were a startup, building a pure greenfield project, I might consider a monolith. I'd be dealing with a really small team at that point, and limited domain knowledge. A monolith might make sense at that scale. But if I was starting a project inside of a company with any reasonable level of maturity, I'd absolutely look at building microservices.

  • @raptorate2872
    6 days ago

    Guys, I will save you time: they are not. As much as possible, avoid them and use simpler solutions; use them only when needed or required. Keep solutions lean. The more you know.

  • @ConfluentDevXTeam
    6 days ago

    Wade here. I'd have to respectfully disagree. You said yourself you should use them when needed or required. That implies that sometimes they are the right solution. The point of the video is to highlight exactly that. It's important to analyze your business requirements and decide whether microservices are the right solution. If they are, you should absolutely use them. If they are not, then clearly it would be a bad decision. But making that decision without first looking at your use case is where the mistake lies.

  • @raptorate2872
    6 days ago

    @@ConfluentDevXTeam I would be inclined to disagree. There is no scenario where they are absolutely required. What I meant by "required" (and it is my fault for using that word) is having no choice. You are capable of doing everything without them, and much of the world's critical services still do not use them.

    They address some of the pain points of monolithic and multi-tier architectures, as you pointed out. They are more of a convenience to get around the limitations of the old ways, but they come with their own host of problems. The only question is whether the tradeoffs are worth it. If it's not a problem, go ahead; but if you can avoid them, it's almost always better not to. It's just that modern teams want the easiest and fastest solutions and don't want to spend time and resources on their old architectures, and it is quite a lot of resources. Microservices are more of an easy way out. Given enough resources, you can always find a way not to rely on them. In fact, with time, you will notice that for most companies and services that rely on microservices, managing them after a certain point is a nightmare and costs more than investing in alternatives.

    I must emphasize: there is no scenario where they are absolutely required. It's just a choice between convenience and resources. Even to this day, despite all the developments, well-maintained monoliths are still running our most critical services in government, military, banking, medical, energy, etc. Even for distributed systems with high concurrency, microservices still lag behind in benchmarks; there are very few cases where they outperform.

  • @stonemeep4202
    6 days ago

    Love y'alls work, keep it up

  • @ConfluentDevXTeam
    6 days ago

    Wade here. Thanks for the feedback. Glad you are enjoying our videos.

  • @ConfluentDevXTeam
    6 days ago

    Wade here. I've made a few changes to my video format with this entry. First, the content is focusing on a specific case study, rather than being more about general theory. And second, I learned to animate things. Our video editor also had a bit of fun with a few parts of the video. I'd love to know what you think of the changes. Do you like the focus on the case study? Do you think the animations help you stay focused on what I am saying? What do you think of the fun video edits? Let me know if you feel this video is more engaging than some of the others that I have done, or if you have suggestions for future videos.

  • @eugene5096
    6 days ago

    I think that jokes and real case studies with real problems move these types of videos to a different (better) level.

  • @AlanGramont
    6 days ago

    Your storefront probably should NOT be rewriting order changes that have reached complete. They should create a modification record. The view of the order will be a merged view of the original record and all modified records. In a document database, this is one collection showing the "current" order representing the merge and a table of changes over time. The changes can be differences but it also could just be the complete order as a second record with a version. In this way, the storefront can always provide order history without needing to pull it from external sources.
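
The commenter's merged-view idea is easy to sketch. The following is a toy illustration only; the order fields and modification records are hypothetical, not taken from the video:

```python
# Toy "merged view" of an order: the base record plus modification records
# applied in sequence. Field names are hypothetical.
base = {"order_id": 7, "items": ["book"], "status": "COMPLETE"}
modifications = [
    {"status": "AMENDED"},            # a change recorded after completion
    {"items": ["book", "bookmark"]},  # a later change to the items
]

def current_view(base, mods):
    merged = dict(base)        # never mutate the original record
    for mod in mods:           # later modifications win
        merged.update(mod)
    return merged

print(current_view(base, modifications))
# -> {'order_id': 7, 'items': ['book', 'bookmark'], 'status': 'AMENDED'}
```

The base record stays immutable, so full order history is always recoverable from the modification list.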

  • @ConfluentDevXTeam
    6 days ago

    Adam here. Some different philosophies to unpack. One option is to have multiple topics, one for each status. Another is to have multiple statuses/modification records (different schemas in the same topic). In both cases, we put the work on the consumers to reconcile the data (seemingly what you recommend with a modification record). The difficult part is that the consumers must then know about each of these topics and event types, and be prepared to reconcile them without making any interpretation, ordering, or logic mistakes. It results in a very tightly coupled workflow (think Event Sourcing). I personally advise against this methodology, as saving a few bytes over the wire isn't worth the extra complexity for most use cases.

    A second option is to produce the record with the updated status from the Storefront service (single-writer principle: Storefront owns the topic, so it publishes the updates). However, Storefront must then manage the lifecycle of the order through the entire system, which is more of an orchestration responsibility, and less about its main purpose of taking orders from customers.

    A third option is to build a purpose-built orchestrator to manage the entire order-fulfillment workflow. Storefront emits the initial order to this orchestrator, and then is done. Subsequent changes to the order are managed by the orchestrator. This is beyond the scope of a comment, but I wanted to include it for clarity.

    A fourth option is to extend the third option with multiple orchestrators for separate parts of the fulfillment workflow, while also relying on loose choreography between major workflow chunks. This tends to be what many mature event-driven organizations end up with: orchestration for critical workflows (and sub-workflows), and choreography for less critical, more loosely coupled systems. Again, beyond the scope of a comment.

    I've gone into the State vs. Delta subject in my Confluent-hosted course, but you can find the KZread video here if you're interested: kzread.info/dash/bejne/qI5mz7mFese4kaQ.html

  • @ConfluentDevXTeam
    6 days ago

    Oops one more thing - The nice thing about Kafka is that we can decide what to keep and what to compact away with State events. So for example, we may decide to keep all Orders for 1 year uncompacted. Events older than 1 year get compacted so that only the final status remains. For operational use-cases, we'd have to decide how much history we care about in the stream. For analytical purposes, we can just materialize everything as-is to an append-only Iceberg table. Plug in your own columnar processor for query, and you have full access to every state transition of your order for all of time.
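
The compaction behavior described above can be modeled in a few lines. This is a deliberate simplification (real Kafka compaction operates per partition on keyed records, and the keys and statuses here are made up):

```python
# Toy model of log compaction: for each key, only the most recent value is kept.
log = [
    ("order-1", "PLACED"),
    ("order-2", "PLACED"),
    ("order-1", "SHIPPED"),
    ("order-1", "DELIVERED"),
]

def compact(records):
    latest = {}
    for key, value in records:   # later records overwrite earlier ones
        latest[key] = value
    return list(latest.items())

print(compact(log))  # -> [('order-1', 'DELIVERED'), ('order-2', 'PLACED')]
```

Intermediate transitions for order-1 are gone after compaction; only the final status per key survives, which is why you'd keep a retention window of uncompacted history (or an append-only analytical copy) if you care about every state transition.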

  • @mhetresachin717
    6 days ago

    Hi Wade, Thank you for the informative video. I have a few questions: 1. Why is this pattern necessary when we already have CDC and DB connectors? 2. If I'm manually handling the Kafka publishing by reading data from the outbox table and publishing it to Kafka, how should I manage the scenario where the Kafka publish is successful but deleting the entry from the outbox table fails?

  • @ConfluentDevXTeam
    6 days ago

    Wade here. 1. You've made an assumption that you have CDC and DB connectors. What if you don't? Now, for the sake of argument, let's say you do. What is the CDC process emitting? In a traditional architecture, you don't save events to the database. You save database/domain objects. Your CDC connector could certainly emit every change to a particular table, but that's not actually the same thing as an event. The event itself could span many different records in the database and may contain information in a different format than how it is stored in the database. It might filter out information, or include extra information such as more detailed context about the change. Now, CDC + Outbox is a pretty handy combination. 2. If deleting the entry (or marking it complete) fails, then you have to retry the entire process. And yes, that means you will get a duplicate event. Duplicates are always a risk when you are dealing with at-least-once delivery guarantees (Note: the same risk exists if you use CDC).
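
For illustration, here is a minimal sketch of the Transactional Outbox write path discussed in this thread, with SQLite standing in for the service's database. The table names and event shape are assumptions for the sketch, not anyone's actual implementation; a separate relay process (or a CDC connector, as Wade notes) would read the outbox table and publish to Kafka:

```python
import json
import sqlite3

# In-memory database standing in for the service's own store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("CREATE TABLE outbox (seq INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT)")

def place_order(order_id, status):
    # The domain write and the event write happen in ONE transaction, so either
    # both succeed or neither does -- this is what sidesteps the dual-write problem.
    with conn:
        conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, status))
        event = {"type": "OrderPlaced", "order_id": order_id, "status": status}
        conn.execute("INSERT INTO outbox (payload) VALUES (?)", (json.dumps(event),))

place_order(1, "PLACED")

# A separate relay (or CDC) would poll this table and publish each row to Kafka.
pending = [json.loads(p) for (p,) in conn.execute("SELECT payload FROM outbox ORDER BY seq")]
print(pending)
```

Note the event stored in the outbox is a purpose-built domain event, not a raw row image, which is the distinction Wade draws between an outbox event and plain CDC output.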

  • @king_eziel
    6 days ago

    How has this not gotten more attention? Great and concise.

  • @prakharsahu7145
    7 days ago

    Thanks a lot.

  • @ConfluentDevXTeam
    6 days ago

    Wade here. You are most welcome. I'm glad you enjoyed it.

  • @YO3ART
    8 days ago

    A video on progress tracking problems would be great. Let's say event A triggers various events in different microservices. How can a microservice determine when all events that were produced in response to event A are fully processed? For example, if event A is a file upload, which triggers virus scanning, format conversion, and metadata extraction, how can we track when all related processes are complete? Some of those processes are optional or depend on context, and the total amount of processing to be done can be unpredictable. Additionally, there may be no sequential order for these processes.

    I also find progress hard to track if you can't use historic data and don't know the total amount of work that needs to be done. Is coupling and increasing complexity inevitable in such cases? Let's say one microservice consumes a FileProcessingQueued event which triggers a not easily predictable number of other events, and another microservice expects a FileProcessingFinished event, while some other microservice may expect progress reports even before FileProcessingFinished is produced. This problem (or anti-pattern) deserves its own name and strategies for dealing with it.

  • @ConfluentDevXTeam
    6 days ago

    Wade here. What you are talking about seems to be more about event-driven architecture than event-sourcing. However, I'll provide some information anyway. In general, I would say that emitting an event should be treated as fire and forget. You send the event and you don't worry about what happens to it downstream. Part of the goal of event-driven systems is to ensure that the various services are decoupled from each other. The producer of the events shouldn't even know that the consumers exist, much less whether or not they have done the work. Ideally, it wouldn't matter. Now, that's not always possible. So in cases where you are expecting some kind of result from the consumers, usually it would be communicated via more events. So when the downstream finishes processing, it would emit another event. The upstream can listen for the event. So in your example, when the virus scanning, format conversion, and metadata extraction all finish, each emits a separate event. Some service can then listen for those events and correlate them together (Correlation Ids can help here).
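
The correlation-id approach described above can be sketched as a small completion tracker. The step names and the expected set are hypothetical, matching the upload example in the question:

```python
# Hypothetical completion tracker: collect the downstream completion events for
# one upload (keyed by correlation id) and report done once every expected step
# has emitted its event.
EXPECTED_STEPS = {"VirusScanned", "FormatConverted", "MetadataExtracted"}

progress = {}  # correlation_id -> set of completed step names

def handle_event(correlation_id, event_type):
    """Record one downstream completion event; return True when all steps are done."""
    done = progress.setdefault(correlation_id, set())
    done.add(event_type)
    return done == EXPECTED_STEPS

print(handle_event("upload-42", "VirusScanned"))       # False: still waiting
print(handle_event("upload-42", "MetadataExtracted"))  # False
print(handle_event("upload-42", "FormatConverted"))    # True: all steps reported
```

In a real system the optional/context-dependent steps the commenter mentions would make EXPECTED_STEPS dynamic (e.g. recorded when the workflow is kicked off), but the correlation mechanism is the same.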

  • @YO3ART
    6 days ago

    @@ConfluentDevXTeam I believe many devs may find it valuable. After more learning, I can see it may be related to choreography and orchestration. Having progress reporting or completion events in choreography seems impossible. Much like trying to predict Conway's Game of Life

  • @ConfluentDevXTeam
    6 days ago

    @@YO3ART Wade here. I'd suggest taking a look at the Saga Pattern or Process Managers. While not directly related, they are techniques that are often used to coordinate multiple complex steps and could be adapted to work with events.

  • @iChrisBirch
    9 days ago

    Very well explained and the diagrams helped a lot. Great pacing, I didn't get lost in words and didn't feel like I need to play on 1.5x speed like a lot of videos. I liked the lecture style of this vs many 'content creators' that have visually beautiful videos with animations and graphics that in the end distract from the topic. Great job!

  • @ConfluentDevXTeam
    4 days ago

    Thanks Chris, I appreciate the kind words - I'm going to have a few more coming out next month, I hope they land well with you.

  • @madona3921
    9 days ago

    did you run your application in eclipse by any chance?

  • @ConfluentDevXTeam
    6 days ago

    Wade here. No, the application was run directly from the terminal.

  • @SapiaCasim
    10 days ago

    Wow amazing

  • @SapiaCasim
    10 days ago

    Good morning

  • @yasirnawaz2798
    11 days ago

    Just awesome!

  • @utsavpanchal9299
    12 days ago

    Bro get this guy on stage!

  • @adambellemare6136
    11 days ago

    Best thing about this guy is that if you google his name you'll come up with tons of his content. Kris is great, he's got a great talent for explaining. He has his own video podcast series right now too!

  • @jgrote
    13 days ago

    I was honestly wondering how you learned how to write backwards so effectively until I realized you just flipped the video...

  • @ConfluentDevXTeam
    11 days ago

    Adam here. Yep, honestly, the first lightboard video I saw I thought the same thing. :)

  • @AdamSouquieres
    13 days ago

    Funny and instructive. Please send more.🤣

  • @user-js4vr3pp9y
    13 days ago

    Lovely Phil!

  • @WalkerCarlson
    13 days ago

    Matthias, I think that labcoat might be a size too large :)

  • @dus10dnd
    10 days ago

    I think maybe that is none of your business. Maybe he likes the size of the labcoat! :p

  • @WalkerCarlson
    10 days ago

    @@dus10dnd You know.... fair

  • @gulhermepereira249
    13 days ago

    Hi, Adam. Thank you for the great explanation, but there's another important part missing: the cost. Could you please go over that in a future video?

  • @ConfluentDevXTeam
    11 days ago

    Adam here - that's a good request! I will see what I can do, but I think that it might end up being a better blog post than a video, mostly because of figures, tables, etc. I will think on what I can do, but thank you for your request.

  • @ConfluentDevXTeam
    14 days ago

    Have a question you want answered by the Duchess and the Doctor? Leave a comment below and subscribe to the Confluent KZread channel for updates on future episodes!

  • @v-w-dev5858
    15 days ago

    What I think is that the video just provides a code sample, not a practical scenario or real case, so it's not very useful.

  • @arseniotedra4573
    15 days ago

    #iamAmillionaire #arseniotedra #aimglobal ❤ thanks so much ❤

  • @ramananthiru9888
    15 days ago

    OMG!! Great content writing !!

  • @ConfluentDevXTeam
    13 days ago

    Wade here. I'm glad you enjoyed it. Hopefully you've checked out all of the videos in the series.

  • @martyrd0m
    16 days ago

    Why paint your fingernails? Are you a woman!?

  • @eduardogpisco
    16 days ago

    Great video, great energy, very didactic. I really enjoyed every minute of the video. Also, the way you talk transmits really nice vibes.

  • @kevinding0218
    17 days ago

    Thanks a lot for the video. With the Listen to Yourself pattern, it implies that when our microservice performs a calculation or similar operation, the resulting data should be included in the event payload. This enables the service to rely on the data within the event payload to update its own database when it listens to itself. In this setup, I feel it's crucial to pay careful attention to the message bus configuration, particularly aspects like order delivery guarantees. Alternatively, employing event sourcing might be beneficial to provide a mechanism for reconciliation in case of discrepancies. For instance, if the process involves calculating a deposit check, merely having a snapshot of the calculation in the event payload might be insufficient.

  • @ConfluentDevXTeam
    13 days ago

    Wade here. Depending on the domain, ordering might be critical. If it is, you are definitely going to want to pay attention to how you configure that. In Apache Kafka, you'd need to be careful of your partitions and partition keys to ensure you maintain the necessary ordering. Event sourcing is a good solution to the dual-write problem on its own. It provides many of the same benefits as the Listen to Yourself pattern. However, it's not likely to be as fast as the Listen to Yourself pattern when it comes to responding to the sender.
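
Why partition keys preserve ordering can be shown with a toy partitioner: records with the same key always hash to the same partition, so they are consumed in the order they were produced. (Kafka's default partitioner actually uses murmur2, not MD5, and the key name here is made up; this is just a sketch of the mechanism.)

```python
import hashlib

NUM_PARTITIONS = 6

def partition_for(key: str) -> int:
    # Toy key-hash partitioner: deterministic hash of the key, mod partition count.
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

# Three events for the same account: same key -> same partition -> order preserved.
events = ["deposit-accepted", "deposit-cleared", "deposit-settled"]
partitions = {partition_for("account-123") for _ in events}
print(partitions)  # a set with a single partition number
```

The flip side is that events with different keys may land in different partitions, where no relative ordering is guaranteed; that's why the choice of key has to match the ordering your domain actually needs.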

  • @kevinding0218
    17 days ago

    Thanks a lot for the explanation of the Outbox pattern; I think I grasp the workflow, but I'm trying to get a better understanding of what benefit it brings compared to having the original microservice update the state in the database first and, only upon success, publish the event with a retry mechanism. I sense that the benefits of the Outbox pattern might be: 1) unblocking the original service instead of leaving it hanging to perform retries if producing the event fails; 2) recording the event in a different place so it won't be lost if the original service goes down after only persisting the state in the DB; 3) isolating event production with at-least-once delivery while accepting eventual consistency. Is there anything I missed about the Outbox pattern?

  • @ConfluentDevXTeam
    13 days ago

    Wade here. You suggest using a retry mechanism to publish the event. But how do you know what events need to be retried? You'd have to save them somewhere, like perhaps in the database, because if they are only in memory, they are prone to getting lost. You need to save them in a transactional fashion because otherwise, you encounter the dual-write problem. Essentially, you've now implemented the Transactional Outbox pattern while working on retry mechanism.
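
Because at-least-once delivery means an outbox entry can be published twice (as Wade notes), consumers typically deduplicate. A minimal sketch of an idempotent consumer, assuming each event carries a unique event_id field (a hypothetical name, not from the video):

```python
# Minimal idempotent consumer: skip events whose id has already been processed.
seen_ids = set()
applied = []

def consume(event):
    if event["event_id"] in seen_ids:   # duplicate redelivery -> ignore
        return False
    seen_ids.add(event["event_id"])
    applied.append(event["payload"])    # apply the effect exactly once
    return True

consume({"event_id": "e1", "payload": "order-1 placed"})
consume({"event_id": "e1", "payload": "order-1 placed"})  # retried duplicate
print(applied)  # -> ['order-1 placed']
```

In production the seen-id set would live in durable storage (often the consumer's own database, updated in the same transaction as the effect), for the same dual-write reasons discussed above.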

  • @this-is-bioman
    17 days ago

    Well, it could have been an e-mail... so it's an event log. Wow. And for this one needs 10 minutes of talking?

  • @Syscrush
    17 days ago

    "STATE": "England"😆