Event-Driven Architecture (EDA) vs Request/Response (RR)

Science and technology

In this video, Adam Bellemare compares and contrasts Event-Driven and Request-Driven Architectures to give you a better idea of the tradeoffs and benefits involved with each.
To learn more about Event-Driven Architectures, check out Adam’s video on the 4 Key Types of EDA: • 4 Key Types of Event-D...
Many developers start in the synchronous request-response (RR) world, using REST and RPC to build inter-service communications. But tight service-to-service coupling, scalability limits, fan-out sensitivity, and data access issues can still remain.
In contrast, Event-Driven Architectures (EDA) provide a powerful way to decouple services in both time and space, letting you subscribe and react to the events that matter most for your line of business.
RELATED RESOURCES
► Tips to Help your Event-Driven Architecture Succeed - • How to Unlock the Powe...
► Introduction to Apache Kafka - cnfl.io/3Q7Pdn8
► Introduction to Kafka Connect - cnfl.io/3W24w4C
► Designing Events and Event Streams - cnfl.io/3vYIWU5
► Event Sourcing and Event Storage with Apache Kafka - cnfl.io/3JlXoso
► Designing Event-Driven Microservices - cnfl.io/444Kpoz
CHAPTERS
00:00 - Intro
01:04 - Reactivity
02:11 - Coupling in Space and Time
03:20 - Consistency
04:32 - Historical State
06:36 - Architectural Flexibility
09:09 - Data Access and Data Reuse
10:56 - Summary
-
ABOUT CONFLUENT
Confluent is pioneering a fundamentally new category of data infrastructure focused on data in motion. Confluent’s cloud-native offering is the foundational platform for data in motion - designed to be the intelligent connective tissue enabling real-time data, from multiple sources, to constantly stream across the organization. With Confluent, organizations can meet the new business imperative of delivering rich, digital front-end customer experiences and transitioning to sophisticated, real-time, software-driven backend operations. To learn more, please visit www.confluent.io.
#eventdrivenarchitecture #apachekafka #kafka #confluent #rest #rpc

Comments: 62

  • @ConfluentDevXTeam · A month ago

    Adam here. I took a crack at explaining some of the key differences between two popular interservice architectures. Let me know what you think, and if there are any other videos you'd like to see.

  • @vnadkarni · A month ago

    Love it! Thank you for making this, Adam.

  • @raticus79 · A month ago

    Good work!

  • @Kostrytskyy · A month ago

    Very clean, thank you!

  • @kfliden · 29 days ago

    Thanks, best illustration of the two different architectures I've seen so far!

  • @mario_luis_dev · 28 days ago

    you guys are making some high quality content here. Keep up the great work! 👏

  • @alizeynalov7395 · 21 days ago

    Thank you, Adam. An amazing, short but very valuable video. Please continue enlightening us. Also, I feel that this way of presenting is much better than only seeing the whiteboard and not the tutor. It makes me feel engaged.

  • @ConfluentDevXTeam · 21 days ago

    Hey, thanks for the feedback! I'm going to have a few more coming out in the next month or so, so stay tuned for more.

  • @iChrisBirch · 5 days ago

    I agree, I like seeing someone talking to me and drawing as they are explaining, it's much more engaging.

  • @iChrisBirch · 5 days ago

    Very well explained, and the diagrams helped a lot. Great pacing; I didn't get lost in words and didn't feel like I needed to play it at 1.5x speed like a lot of videos. I liked the lecture style of this vs. many 'content creators' that have visually beautiful videos with animations and graphics that in the end distract from the topic. Great job!

  • @ConfluentDevXTeam · 7 hours ago

    Thanks Chris, I appreciate the kind words - I'm going to have a few more coming out next month, I hope they land well with you.

  • @Fikusiklol · A month ago

    Hello, Adam! I absolutely love and admire your effort (and that of other Confluent speakers) in making these very complex topics so easy to understand and grasp. Absolute best out there. Laconic and informative. Big thanks!

  • @adambellemare6136 · A month ago

    Adam here. Thanks! I appreciate the kind words.

  • @dream_emulator · 29 days ago

    Amazing explanation! Thanks for this 👍😃

  • @gulhermepereira249 · 9 days ago

    Hi, Adam. Thank you for the great explanation, but there's another important part missing: the cost. Could you please go over that in a future video?

  • @ConfluentDevXTeam · 7 days ago

    Adam here - that's a good request! I'll see what I can do, but I think it might end up being a better blog post than a video, mostly because of figures, tables, etc. I'll think on it - thank you for your request.

  • @mr.daniish · 29 days ago

    Adam drops another knowledge bomb! Respect

  • @ConfluentDevXTeam · 29 days ago

    🤓

  • @TheNoahHein · 20 days ago

    Amazing video! Question about your setup: have you been teaching yourself to write backwards? My mind doesn't quite wrap around how this video is filmed; it looks like the transparent "whiteboard" is in front, with you behind it writing.

  • @ConfluentDevXTeam · 20 days ago

    Hi, Adam here. I'm using what's called a "lightboard", which is effectively just a sheet of glass lit by a strip of LEDs around the edge. The camera shoots through the board, recording everything backwards. In post-processing, we flip the video on its vertical axis, in effect "mirroring" the view. Notice how I appear to draw with my left hand in the video - in reality, I was using my right hand! When we flip the video in post, it also flips me :) If you want a better description, try searching "lightboard" on youtube; I know several people have done "how does it work" type videos.

  • @reneschindhelm4482 · 15 days ago

    Do you keep the entire history of events (1 create, N update, maybe 1 delete event) for each and every document/object/… in those topics? How does that affect storage/performance over time? Or is there some way to compress/discard past events, say f.e. by regularly creating snapshots of the state?

  • @ConfluentDevXTeam · 15 days ago

    Adam here. Okay, great question. What you're describing is what I call "delta" events, where each event describes a difference. You've correctly identified that to get the current state, you'd have to apply all the events in order - in other words, an event-sourcing style pattern. Over time the topic grows unbounded, based only on the quantity of delta events and not on the actual domain size. For example, if you have a billion items added to a shopping cart, and a billion items removed, then you'll have 2 billion events just for one cart.

    My recommendation is that you look at using event-carried state transfer (warning, incoming advertisement). I wrote about it in a Confluent course that you may find helpful (developer.confluent.io/courses/event-design/fact-vs-delta-events/), back when I had a lot more hair.

    In terms of your observation about compressing and discarding the past - this is precisely what we do with compaction for state/fact-based events. However, if you try to do it with deltas, it becomes a lot more challenging. One, you need to generate the state that you're going to store - basically exactly what a state/fact event already is! Two, you need to generate a snapshot of the current state that perfectly aligns with specific offsets in your Kafka topic. Three, you need to make sure that every client can access and read the snapshot, then switch over to the realtime stream - this is actually challenging to do when you're following a self-service architecture, and it limits the technology choices your consumers can use. Four, you need to perfectly delete the data in the snapshot and the topic such that there isn't an accidental overlap between the two - actually very hard to do when you consider race conditions, new consumers, and atomicity. I could go on, but the gist is that it's a hard way to communicate state - you're much better off using the principles of event-carried state transfer and state/fact type events.
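To make the delta-versus-fact distinction above concrete, here is a minimal Python sketch (not Kafka; the event shapes are invented for illustration): deltas must be replayed in full to recover state, while fact/state events can simply be compacted down to the latest record per key.

```python
# Toy sketch: "delta" events vs "fact"/state events for a shopping cart.
# All event shapes here are illustrative, not a Confluent or Kafka format.

from collections import defaultdict

# Delta events: each describes a change; current state requires a full replay.
delta_log = [
    {"cart": "c1", "op": "add", "item": "book"},
    {"cart": "c1", "op": "add", "item": "pen"},
    {"cart": "c1", "op": "remove", "item": "book"},
]

def replay_deltas(log):
    carts = defaultdict(list)
    for e in log:  # must apply every delta, in order
        if e["op"] == "add":
            carts[e["cart"]].append(e["item"])
        elif e["op"] == "remove":
            carts[e["cart"]].remove(e["item"])
    return dict(carts)

# Fact (state) events: each record carries the full current state, keyed by cart.
fact_log = [
    {"key": "c1", "state": ["book"]},
    {"key": "c1", "state": ["book", "pen"]},
    {"key": "c1", "state": ["pen"]},
]

def compact(log):
    latest = {}
    for e in log:  # later records overwrite earlier ones, like log compaction
        latest[e["key"]] = e["state"]
    return latest

print(replay_deltas(delta_log))  # {'c1': ['pen']}
print(compact(fact_log))         # {'c1': ['pen']}
```

Both approaches yield the same current state, but only the fact log stays bounded under compaction; the delta log grows with every change.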

  • @raghavshashtri1817 · 19 days ago

    Hi Adam, I wanted to check: what are the ways to get the completion status from the fulfilment store in EDA? I can think of polling only, which I believe shouldn't be recommended. Could you suggest the best approach?

  • @ConfluentDevXTeam · 17 days ago

    Adam here. A couple of options, really, depending on your requirements. If you have a client (webpage) that needs immediate fulfillment status to present to the consumer, you could have the client connect to Fulfillment via, say, a REST call and simply ask. Your fulfillment service would need to provide support for this interface, of course, meaning it is both an RR and an EDA service (this is okay).

    A second option is to emit events from Fulfillment whenever the status changes (e.g. In Progress / Completed / Partially Completed). Other services could listen to the fulfillment results and make their own decisions. You could also sink the Fulfillment data to a simple key-value store (e.g. DynamoDB) and use a simple service to provide on-demand RR answers about current Fulfillment status (minus a short latency for the event propagation). This pattern is helpful because it provides multiple services with the stream of events to do their own work as they see fit.

    There is no "right" answer here; it depends on your customer needs. I _prefer_ the second option because it enables looser coupling, replayability, and all of the other things I mentioned in this video. I can also decouple business processing needs (making sure orders are fulfilled) from end-user availability needs (customers querying their statuses), such that one cannot interfere with the other. If the fulfillment service crashes because of bugs in my code, I don't need to worry about DynamoDB and the web server failing to provide my customers with the last known status of their order.
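The second option above (sink status events into a key-value store and serve RR reads from it) can be sketched as follows; the event shape and the in-memory store are assumptions for illustration only, standing in for a Kafka topic and something like DynamoDB.

```python
# Toy sketch: materialize fulfillment status events into a key-value store
# that a thin request-response service can query. Illustrative shapes only.

status_events = [
    {"order_id": "o-1", "status": "In Progress"},
    {"order_id": "o-2", "status": "In Progress"},
    {"order_id": "o-1", "status": "Completed"},
]

kv_store = {}  # stand-in for e.g. DynamoDB

def consume(events):
    # A real consumer would poll a Kafka topic; here we just iterate.
    for event in events:
        kv_store[event["order_id"]] = event["status"]  # last write wins per key

def get_status(order_id):
    """What a small RR endpoint would serve (minus event-propagation latency)."""
    return kv_store.get(order_id, "Unknown")

consume(status_events)
print(get_status("o-1"))  # Completed
print(get_status("o-3"))  # Unknown
```

Note that the serving path (the `get_status` lookup) is fully decoupled from the fulfillment service itself: if fulfillment goes down, the store still answers with the last known status.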

  • @mohammadshweiki1511 · 23 days ago

    My question is about his setup. The board he is using to write in front of the camera is an acrylic board, correct? Can anyone correct me if I am wrong here? And what is the best marker to use? I deliver online training and consultation and I want to use the same method.

  • @ConfluentDevXTeam · 23 days ago

    Adam here. This is a homemade board that I made using modular aluminum framing, and initially I tried using a 6' x 4' sheet of acrylic. However, I found that my acrylic sheet would "fog up" if I turned the lights on too bright, and furthermore, it was very easy to scratch. I tried very hard to not scratch it, but within a week of light use I ended up with a few deep enough scratches that I couldn't hide them in post-processing. At that point, I switched to locally sourced low-lead glass (aka Starphire type), 1/4" thick. The sheet weighs about 55 lbs at a size of 64" x 44" (I made it smaller to match the actual 4K resolution), and I had to modify the frame to install some glass clamps. But it's much more durable, easier to clean, and clearer at higher brightness. I think I paid about $220 for the acrylic sheet, and $550 for the low-lead glass (both delivered). I would recommend skipping acrylic, as it can be hard to source a clear one and it just scratches too easily for sustained high-definition usage.

  • @ConfluentDevXTeam · 23 days ago

    Oh, and in terms of markers, I found that EXPO neon dry-erase markers worked best in terms of contrast and visibility. I've also tried some liquid-chalk markers with mixed results; some show up well, some do not.

  • @davaigo2170 · A month ago

    For EDA, do we need to use CDC technology for it?

  • @ConfluentDevXTeam · A month ago

    Adam here. No, you don't need to, though it is a good way to get started by bootstrapping data into your Kafka broker. Some applications, such as those built with Kafka Streams, Flink, or FlinkSQL, can natively emit their own events to Kafka - events as input, events as output. But if you're starting with absolutely no event-driven applications or event streams at all, then learning how to use CDC (such as Debezium) is a very good place to start.
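The "events as input, events as output" idea can be sketched like this, with in-memory lists standing in for Kafka topics; a real application would use Kafka Streams, Flink, or a plain consumer/producer pair, and the event fields here are invented for illustration.

```python
# Toy sketch: a stream-processing step that consumes from an input stream and
# natively emits derived events - no CDC involved. Lists stand in for topics.

orders_topic = [
    {"order_id": "o-1", "total": 120},
    {"order_id": "o-2", "total": 40},
]
large_orders_topic = []  # output stream

def process(input_topic, output_topic, threshold=100):
    for event in input_topic:  # consume each input event
        if event["total"] >= threshold:
            # emit a derived event to the output stream
            output_topic.append({**event, "flag": "large"})

process(orders_topic, large_orders_topic)
print(large_orders_topic)  # [{'order_id': 'o-1', 'total': 120, 'flag': 'large'}]
```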

  • @FecayDuncan · A month ago

    I prefer an orchestrating ordering process that triggers events for underlying services to act on. These services obtain the necessary data by making API calls to other services. Highly flexible through a process-driven approach, decoupled through event-driven services, consistent through well-defined APIs.

  • @ConfluentDevXTeam · A month ago

    Adam here. If you're using events as triggers to make API calls to other services, you lose replayability of the events. I've seen services work this way, and it's fine if you don't care about history or about high load on your servers hosting the API calls.

  • @FecayDuncan · A month ago

    @@ConfluentDevXTeam my orchestrator can replay itself based on history and each event driven service can scale up horizontally to consume more work.

  • @Fikusiklol · A month ago

    Why would you orchestrate a business process based on events (I assume), if the services are still making sync calls to other APIs? That feels like orchestrated choreography. The data and temporal coupling are still there. Could you please explain the underlying reason to do so?

  • @kohlimhg · A month ago

    There are sometimes cases where the full data a service needs to process an event would be too large for e.g. a Kafka message. In that special case the service could obtain additional data via a synchronous call to another API. If the data provided by the API are immutable then replayability won't be lost.

  • @adambellemare6136 · A month ago

    @@kohlimhg Yep, we also call this the "claim cheque/check" pattern. The complexity with this pattern is stitching together the permissions and access controls between the Kafka record and the system that you present the claim check to. One good trick is to put all the work in the serializer and deserializer, so that it's transparent to the consumer.
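A minimal sketch of the claim-check idea, with the offloading hidden inside the serializer/deserializer as suggested above; the blob store, size limit, and envelope format are all invented for illustration.

```python
# Toy sketch of the claim-check pattern: the serializer offloads a large
# payload to external storage and puts only a reference ("claim check") on
# the wire; the deserializer redeems it transparently. The blob store is an
# in-memory dict purely for illustration (real systems might use S3, etc.).

import json
import uuid

blob_store = {}   # stand-in for external object storage
SIZE_LIMIT = 64   # pretend broker record-size limit, in characters

def serialize(payload: dict) -> bytes:
    raw = json.dumps(payload)
    if len(raw) <= SIZE_LIMIT:
        return json.dumps({"inline": payload}).encode()
    ref = str(uuid.uuid4())
    blob_store[ref] = raw  # offload the big payload, keep only a reference
    return json.dumps({"claim_check": ref}).encode()

def deserialize(data: bytes) -> dict:
    envelope = json.loads(data)
    if "inline" in envelope:
        return envelope["inline"]
    return json.loads(blob_store[envelope["claim_check"]])  # redeem the check

small = {"id": 1}
big = {"id": 2, "blob": "x" * 500}
assert deserialize(serialize(small)) == small  # consumer code never sees
assert deserialize(serialize(big)) == big      # the difference
```

The consumer just calls `deserialize`; whether the record was inlined or fetched from the blob store is invisible, which is exactly the "put the work in the serde" trick.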

  • @lamintouray7333 · 22 days ago

    I am a software engineering student and I found this quite interesting. Is there any academic/research paper out there that discusses this topic in detail that you could perhaps point out? Thank you.

  • @ConfluentDevXTeam · 21 days ago

    Adam here. I don't know of any whitepaper on this subject, particularly as this is really a "versus" type of discussion. You may have some luck searching for one or the other term (EDA, or RR) on its own. Otherwise, I'd just be googling this for you. A lot of this is stuff I've picked up in my own experience over the years, listening to others, etc.

  • @yasirnawaz2798 · 7 days ago

    Just awesome!

  • @spreadpeaceofmindful · 24 days ago

    In EDA: 1) How do we verify the transaction got processed (in a case where subscribers lose an event)? 2) In a microservices scenario where multiple nodes are running, how do we prevent duplicates (how do we stop processing the same order twice)?

  • @ConfluentDevXTeam · 24 days ago

    Adam here. 1) If you're talking about distributed transactions, you're going to need to look at the saga pattern. The subscribers won't "lose" an event unless they have written buggy code, in which case it's their responsibility to fix it. 2) One option is to check out "effectively/exactly-once processing" in Apache Kafka. It ensures your system doesn't create duplicate events, regardless of if or when it fails during processing. However, it's still possible to cause side effects (like calling an external REST API) each time you process the event, despite exactly-once. This is the same as calling a REST API, getting a timeout, and retrying the call - resulting in duplicate processing. My advice is to make your code idempotent so that duplicate processing has no side effects. If you can't do that, then you're going to have to guard your consumer against duplicate processing, such as by consuming records one by one (slow) or by using durable state to keep track, via atomic updates, of precisely which records have been processed and which have not.
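The "durable state to track processed records" approach can be sketched as follows; in a real system the processed-ID set would live in durable storage and be updated atomically with the work, rather than in an in-memory set, and the record shape is invented for illustration.

```python
# Toy sketch: deduplicate redelivered records by remembering which record IDs
# have already been handled, so a retry/redelivery causes no extra side effects.

processed_ids = set()  # stand-in for a durable, atomically-updated store
side_effects = []      # the non-idempotent work we want to run exactly once

def handle_once(record):
    if record["id"] in processed_ids:
        return False  # duplicate delivery: skip quietly
    side_effects.append(record["payload"])  # do the work
    processed_ids.add(record["id"])         # then mark it processed
    return True

deliveries = [
    {"id": "r1", "payload": "charge $10"},
    {"id": "r1", "payload": "charge $10"},  # redelivered after a retry/timeout
    {"id": "r2", "payload": "ship order"},
]
for rec in deliveries:
    handle_once(rec)

print(side_effects)  # ['charge $10', 'ship order']
```

The subtlety Adam points at is that appending the side effect and recording the ID must happen atomically in a real system; if the process dies between the two, you're back to duplicate processing.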

  • @SonAyoD · 28 days ago

    How would you pick one over the other? What are the use cases?

  • @ConfluentDevXTeam · 28 days ago

    Adam here. There are many ways you _can_ solve problems, which means that everyone has a different opinion on the matter. Given that I won't be able to explore it all in a youtube comment, we can start with the certainties. You'll be very safe using request-response and RPC-style communication whenever you need client-server communication. You'll also be safe using event-driven architecture to drive services that don't require blocking low-latency calls, like managing real product inventory, handling advertising campaigns, and processing orders and payments. The whole middle ground between these two is where things get muddy, where you'll get different answers depending on who you ask, and ultimately you'll end up with "it depends". What I like about EDAs is that if you invest a bit of time and effort into building well-defined event streams, you unlock the ability to choose whichever is best for the task at hand - RR or EDA.

  • @SonAyoD · 28 days ago

    @@ConfluentDevXTeam thanks for the explanation!

  • @AlanGramont · 2 days ago

    Your storefront probably should NOT be rewriting order changes once an order has reached completion. It should create a modification record instead. The view of the order is then a merged view of the original record and all modification records. In a document database, this is one collection showing the "current" order (the merge) plus a table of changes over time. The changes can be differences, or just the complete order stored as a second record with a version. This way, the storefront can always provide order history without needing to pull it from external sources.
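The merge-on-read view described above can be sketched like this; the field names and versioning scheme are illustrative assumptions, not any particular database's format.

```python
# Toy sketch: an immutable base order plus append-only modification records,
# merged on read into the "current" order view. Illustrative shapes only.

base_order = {
    "order_id": "o-1", "version": 1,
    "items": {"book": 1}, "status": "complete",
}

modifications = [
    {"order_id": "o-1", "version": 2, "changes": {"items": {"book": 1, "pen": 2}}},
    {"order_id": "o-1", "version": 3, "changes": {"status": "amended"}},
]

def current_view(base, mods):
    merged = dict(base)  # never mutate the original record
    for mod in sorted(mods, key=lambda m: m["version"]):  # apply in version order
        merged.update(mod["changes"])
        merged["version"] = mod["version"]
    return merged

view = current_view(base_order, modifications)
print(view["version"], view["status"])  # 3 amended
```

The base record stays untouched, so full order history is always recoverable by walking the modification records.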

  • @ConfluentDevXTeam · 2 days ago

    Adam here. Some different philosophies to unpack.

    One option is to have multiple topics, one per status. Another is to have multiple statuses/modification records (different schemas in the same topic). In both cases, we put the work on the consumers to reconcile the data (seemingly what you recommend with a modification record). The difficult part is that the consumers must then know about each of these topics and event types, and be prepared to reconcile them without making any interpretation, ordering, or logic mistakes. It results in a very tightly coupled workflow (think Event Sourcing). I personally advise against this methodology, as saving a few bytes over the wire isn't worth the extra complexity for most use cases.

    A second option is to produce the record with the updated status from the Storefront service (single-writer principle: Storefront owns the topic and publishes the updates). However, Storefront must then manage the lifecycle of the order through the entire system, which is more of an orchestration responsibility and less about its main purpose of taking orders from customers.

    A third option is to build a purpose-built orchestrator to manage the entire order fulfillment workflow. Storefront emits the initial order to this orchestrator, and then it's done. Subsequent changes to the order are managed by the orchestrator. This is beyond the scope of a youtube comment, but I wanted to include it for clarity.

    A fourth option is to extend the third with multiple orchestrators for separate parts of the fulfillment workflow, while relying on loose choreography between major workflow chunks. This tends to be what many mature event-driven organizations end up with: orchestration for critical workflows (and sub-workflows), and choreography for less critical, looser-coupled systems. Again, beyond the youtube comment scope.

    I've gone into the State vs. Delta subject in my Confluent-hosted course, but you can find the KZread video here if you're interested: kzread.info/dash/bejne/qI5mz7mFese4kaQ.html

  • @ConfluentDevXTeam · 2 days ago

    Oops, one more thing: the nice thing about Kafka is that we can decide what to keep and what to compact away with state events. For example, we may decide to keep all orders for 1 year uncompacted; events older than 1 year get compacted so that only the final status remains. For operational use cases, we'd have to decide how much history we care about in the stream. For analytical purposes, we can just materialize everything as-is to an append-only Iceberg table. Plug in your own columnar processor for query, and you have full access to every state transition of your orders for all time.
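The retention idea above - keep recent history in full, compact older events down to the final status per key - can be sketched as follows. Timestamps and the cutoff are toy values, and real compaction is configured on the broker rather than hand-rolled like this.

```python
# Toy sketch: keep every state transition for recent orders, but for events
# older than the cutoff keep only the latest record per key (roughly what
# log compaction would leave behind). All values are illustrative.

CUTOFF = 100  # pretend "1 year ago" boundary

events = [
    {"key": "o-1", "ts": 10,  "status": "created"},
    {"key": "o-1", "ts": 20,  "status": "shipped"},
    {"key": "o-1", "ts": 30,  "status": "delivered"},
    {"key": "o-2", "ts": 150, "status": "created"},
    {"key": "o-2", "ts": 160, "status": "shipped"},
]

def compact_older_than(log, cutoff):
    latest_old = {}
    recent = []
    for e in log:
        if e["ts"] < cutoff:
            latest_old[e["key"]] = e  # only the final pre-cutoff status survives
        else:
            recent.append(e)          # recent history is kept uncompacted
    return list(latest_old.values()) + recent

kept = compact_older_than(events, CUTOFF)
print([(e["key"], e["status"]) for e in kept])
# [('o-1', 'delivered'), ('o-2', 'created'), ('o-2', 'shipped')]
```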

  • @qapplor · 19 days ago

    I don't understand the need for Kafka here; the storefront could easily keep a history of order data to use for inventory, data lakes, ML, etc. Also, depending on how the request-response model's architecture is planned, it could work with "events" as well: just don't design it to require an immediate response, but rather poll for a list of fulfilled orders regularly and update the "order status" attribute.

    If there is a clear boundary between req-res and EDA, I still can't see it. It all depends on how it's implemented, right? In the EDA example, the storefront would at some point need to display the fulfilled order, so it still needs to consume "responses" from fulfillment; it's just asking Kafka instead of the fulfillment service. You still need to define a data structure for the event and hope all your future applications will be able to consume it - it's still a hard contract. Isn't it true you could create an asynchronous req-res application? The immediate need for a response seems contrived and a beginner's mistake, frankly.

  • @ConfluentDevXTeam · 17 days ago

    FWIW, you are unlikely to run your data lake off a single operational database. Shopify (where I worked previously) had several hundred very large sharded databases to power their entire storefront experience for operations. Data lakes were a whole other story, and there was no way that we could process the queries we needed given the production setup. If all of your data can fit in a single DB and you can do your analytics in there as well, then go for it - I suggest only adding complexity as required, and keeping it simple where possible.

    > In the EDA example, the storefront would at some point need to display the fulfilled order

    Storefront is not required to display the fulfilled order. There are many ways to communicate this information back to the client, such as using a modular frontend with different services powering different aspects of the UI.

    > Isn't it true you could create an asynchronous req-res application?

    Yes. There are many ways to build services, and your answer is always going to be "it depends".

    > If there is a clear boundary between req-res and EDA, I still can't see it. All depends on how it's implemented, right?

    Event-driven architectures, such as those provided by Kafka, enable producers and consumers to decouple via the event stream / Kafka topic. Multiple consumers can use that data as they see fit for their own purposes. Services complete the work at their own rate, and can independently scale up and down depending on need. If a service dies, it can resume from where it left off in the topic. In contrast, with RR, each service must request and respond with data. In-flight requests are lost when services die. There is no common log to source data from, nor a history to see what happened over time. Latency can be lower, and yes, you can also asynchronously process RR, but it's hard to get into the nuances in a short lightboard video.

  • @jgrote · 8 days ago

    I was honestly wondering how you learned how to write backwards so effectively until I realized you just flipped the video...

  • @ConfluentDevXTeam · 7 days ago

    Adam here. Yep, honestly, the first lightboard video I saw I thought the same thing. :)

  • @azizsafudin · 29 days ago

    Fulfilment is spelled without two Ls

  • @ConfluentDevXTeam · 29 days ago

    Adam here! I'm a Canadian, and I always spell it as "Fulfilment" (British/AUS/CAD/NZ Spelling) in my personal life. But my editors insisted I use "Fulfillment" (US English). I did more than a few takes where I had to stop and rewind before I decided to just start with the word written on the board.

  • @vanelord · 16 days ago

    A man drawing boxes around a single point of failure (Kafka) and calling it loosely coupled. What's next? The cloud as a decentralized service? I think I just watched a very long product advertisement. Better to go learn the actor model and read Carl Hewitt's work instead of watching this brainwash.

  • @ConfluentDevXTeam · 16 days ago

    Adam here. Loose coupling pertains to the producer and consumer being decoupled in time and space. Introducing Kafka into the equation provides the ability for producers and consumers to keep working, even if the former or the latter fail. While it is true that Kafka can fail as well, it is a resilient, distributed, and fault-tolerant system. If set up properly, you would need to experience multiple instance and/or AZ failures to get to a point where the service is not operational. Most people running their own Kafka in the cloud will run a multi-AZ distribution, and rely on the guarantees provided by AWS/GCP/Azure/Oracle/Etc. And while I know you didn't like the video, Carl Hewitt is indeed an impressive thinker and was ahead of his time.

  • @vanelord · 15 days ago

    @ConfluentDevXTeam Sorry for being harsh. I watched the video first, got frustrated, and wrote the comment; then I looked at the author and thought about removing it, but I left it because it has a point, so I want to explain myself. I don't like the video because it is very chaotic, says nothing about the impact of queue message sizes or about message consistency for a single topic, only complains about API consistency, and presents RPC like a technology from the 90s. Watching this video I feel like I went back in time: servers still use thread pools instead of event loops, everything is synchronous, it's all WSDL and SOAP, and the queue is the answer to all problems, when it's not.

    The presentation of the queue's advantages is very chaotic, especially when you're presenting a queue as distinctive as Kafka, with its message retention and message-order consistency. People can ask themselves: why not just use ZMQ or any other MQ, or NATS, or just websockets and GraphQL, since the author says REST is obsolete? For me, the presentation should start with a DAG: a single RPC node and two edges carrying the same message. Then the author should ask: what if you want these messages processed multiple times, or a message needs to be reprocessed (draw the edge going back to the same node)? We have an answer for that: Kafka, the queue with retention and guaranteed order. You don't have to mangle your RPC business logic anymore, and you don't have to worry about performance, because Kafka is battle-tested by LinkedIn, where it was developed in the first place. On the other hand, if you need a sequence of things happening with a single message and you have performance problems, or your messages are very big, maybe it's better to use, for example, serverless solutions or DAG-processing frameworks like Airflow, because if you put everything into a queue, or everything into RPC, you end up with the same problems in a different environment.

    It should be clearly stated that data design and understanding of data flow are more important than the underlying architecture and business logic, because everything is just a wrapper around data. If you don't understand where your data is coming from and where it's going, don't pick a solution.

  • @ConfluentDevXTeam · 15 days ago

    > ask themselves why not just use ZMQ or any other MQ or NATS or just use websocket and graphql because author says REST is obsolete

    I don't discuss the others because the video would balloon in size. There's a lot of tech out there; this is just one way to do it. Also, REST is certainly not obsolete - I hope that was not your takeaway.

  • @user-ng8wh8to5o · 20 days ago

    Event-driven architecture is a headache for developers; it has a lot of pitfalls and I recommend never doing it.

  • @ConfluentDevXTeam · 20 days ago

    Adam here. I'm sorry to hear you haven't had a good experience with EDAs, but I encourage you not to discard it as a tool from your kit. My experience has been quite the opposite, where developers love it and embrace it once they understand where to use it and what it's best suited for.

  • @arseniotedra4573 · 11 days ago

    #iamAmillionaire#arseniotedra#aimglobal ❤ thanks so much ❤
