Thursday, June 19, 2025

Microservices: SAGA, CQRS

 1. SAGA

The Saga pattern is a design pattern used in microservices to manage data consistency across distributed transactions. It breaks down a long-running transaction into a series of smaller, local transactions, each handled by a different microservice. If one of these local transactions fails, compensating transactions are executed to undo the changes made by the preceding successful transactions, ensuring data consistency. 
Key Concepts:
  • Local Transactions:
    Each microservice in the saga performs its operation as a local transaction, updating its own database. 
  • Eventual Consistency:
    The saga pattern aims for eventual consistency, meaning that while data might not be immediately consistent across all services, it will eventually reach a consistent state through compensating transactions. 
  • Choreography and Orchestration:
    Sagas can be implemented using two main approaches: 
    • Choreography: Services communicate directly through events, each service reacting to events from others. 
    • Orchestration: A central coordinator (orchestrator) manages the flow of the saga, instructing each service on what to do. 
  • Compensating Transactions:
    These are transactions that reverse the actions of previous transactions. If a transaction fails, compensating transactions are executed to undo the effects of the successful transactions. 
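The orchestration approach can be sketched in a few lines of C#. This is an illustrative, in-process model (the step names, the hard-coded payment failure, and the tuple-based step list are all assumptions for the demo, not a real saga framework): the orchestrator runs steps in order and, on failure, invokes the compensating actions of the already-completed steps in reverse.

```csharp
using System;
using System.Collections.Generic;

var executedSteps = new List<string>();
var compensated = new List<string>();

// Each step pairs a forward action with a compensating action.
// ChargePayment is hard-coded to fail for the demonstration.
var steps = new (string Name, Func<bool> Execute, Action Compensate)[]
{
    ("CreateOrder",   () => { executedSteps.Add("CreateOrder");   return true;  }, () => compensated.Add("CancelOrder")),
    ("ChargePayment", () => { executedSteps.Add("ChargePayment"); return false; }, () => compensated.Add("RefundPayment")),
    ("ShipOrder",     () => { executedSteps.Add("ShipOrder");     return true;  }, () => compensated.Add("CancelShipment")),
};

// Orchestrator: run forward until a step fails...
int completed = 0;
bool failed = false;
foreach (var step in steps)
{
    if (!step.Execute()) { failed = true; break; }
    completed++;
}

// ...then compensate completed steps in reverse order.
if (failed)
    for (int i = completed - 1; i >= 0; i--)
        steps[i].Compensate();

Console.WriteLine($"compensated: {string.Join(", ", compensated)}");
// prints: compensated: CancelOrder
```

Note that only the steps that actually completed are compensated; the failed step and everything after it never ran, so there is nothing to undo for them.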
How it Works:
  1. Initiation:
    A saga is initiated, and the first local transaction is executed. 
  2. Event Publication:
    After each local transaction, a message or event is published to indicate completion or failure. 
  3. Subsequent Transactions:
    Other microservices listen for these events and, based on the event, trigger their own local transactions. 
  4. Failure Handling:
    If a local transaction fails, the saga will execute compensating transactions to undo the changes made by the preceding transactions. 
  5. Saga Log:
    A log (Saga Log) is often used to track the state of each transaction and the compensating transactions needed. 
Example:
Consider an order processing saga:
  1. Order Service: Creates a new order (local transaction) and publishes an "Order Created" event. 
  2. Payment Service: Listens for the "Order Created" event and attempts to process the payment (local transaction). 
    • If payment is successful, it publishes a "Payment Successful" event. 
    • If payment fails, it publishes a "Payment Failed" event, and the Order Service will execute a compensating transaction to cancel the order. 
  3. Inventory Service: Listens for the "Payment Successful" event and updates the inventory. 
  4. Shipping Service: Listens for the "Inventory Updated" event and ships the order. 
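The order-processing choreography above can be sketched as a minimal in-process C# program. A dictionary of handlers stands in for a real message broker, and the payment for order 2 is hard-coded to fail — both are assumptions for illustration only:

```csharp
using System;
using System.Collections.Generic;

// Event name -> subscribers; a stand-in for a real message broker.
var handlers = new Dictionary<string, List<Action<int>>>();
void Subscribe(string evt, Action<int> handler)
{
    if (!handlers.TryGetValue(evt, out var list)) handlers[evt] = list = new List<Action<int>>();
    list.Add(handler);
}
void Publish(string evt, int orderId)
{
    if (handlers.TryGetValue(evt, out var list))
        foreach (var h in list) h(orderId);
}

// Order Service: compensating transaction cancels the order on payment failure.
var cancelled = new HashSet<int>();
Subscribe("PaymentFailed", id => cancelled.Add(id));

// Payment Service: assume the charge for order 2 is declined.
Subscribe("OrderCreated", id => Publish(id == 2 ? "PaymentFailed" : "PaymentSuccessful", id));

// Inventory/Shipping: react only to successful payment.
var shipped = new List<int>();
Subscribe("PaymentSuccessful", id => shipped.Add(id));

Publish("OrderCreated", 1);
Publish("OrderCreated", 2);
Console.WriteLine($"shipped: {string.Join(",", shipped)}; cancelled: {string.Join(",", cancelled)}");
// prints: shipped: 1; cancelled: 2
```

Notice there is no central coordinator: each "service" only subscribes to the events it cares about, which is exactly what makes choreography loosely coupled and also what makes the overall flow harder to trace.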
Benefits:
  • Improved Scalability and Performance:
    By breaking down transactions, the saga pattern avoids the performance bottlenecks of traditional distributed transactions like two-phase commit. 
  • Increased Resilience:
    If a microservice is unavailable, the saga pattern can still make progress using compensating transactions, improving system resilience. 
  • Flexibility and Maintainability:
    The saga pattern allows for more flexible and maintainable microservices by decoupling them and allowing them to operate independently. 
Drawbacks:
  • Complexity: Sagas can be complex to implement, especially with many microservices and complex workflows. 
  • Debugging: Debugging sagas can be challenging due to their distributed nature. 
  • Eventual Consistency: The saga pattern relies on eventual consistency, which might not be suitable for all use cases. 



2. CQRS

The Command Query Responsibility Segregation (CQRS) pattern is a valuable approach in microservice architectures, particularly when dealing with complex data models and high read/write loads. It separates the operations of reading data (queries) from writing data (commands) into distinct models, allowing for independent optimization and scaling of each side. This separation enables better performance, scalability, and maintainability of microservices. 


Here's a breakdown of the CQRS pattern in the context of microservices:
Core Concept:
  • Commands:
    Represent actions that modify the system's state (e.g., creating, updating, or deleting data).
  • Queries:
    Represent requests for data that do not change the system's state (e.g., retrieving data). 
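The command/query split can be shown in a few lines of C#. This sketch keeps both sides over one in-memory list purely for brevity (the post/author domain and the handler names are illustrative); in a real CQRS system the two sides would typically sit in separate services over separate stores:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Single backing store for the sketch only; real systems would
// use separate, independently scaled write and read stores.
var posts = new List<(int Id, int AuthorId, string Text)>();

// Command side: mutates state, returns at most an identifier.
int HandleCreatePost(int authorId, string text)
{
    int id = posts.Count + 1;
    posts.Add((id, authorId, text));
    return id;
}

// Query side: pure reads, never changes state.
IEnumerable<string> PostsByAuthor(int authorId) =>
    posts.Where(p => p.AuthorId == authorId).Select(p => p.Text);

HandleCreatePost(1, "hello");
HandleCreatePost(2, "world");
HandleCreatePost(1, "again");
Console.WriteLine(string.Join(", ", PostsByAuthor(1))); // prints: hello, again
```

The point of the separation is that `HandleCreatePost` and `PostsByAuthor` share no interface: each can evolve, be optimized, and be deployed independently.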
Benefits in Microservices:
  • Independent Scaling:
    Microservices can be scaled based on their specific needs. Read-heavy services can be scaled independently from write-heavy services. 
  • Optimized Performance:
    Read and write models can be optimized for their respective operations, potentially using different technologies or data structures. 
  • Improved Maintainability:
    The separation of concerns makes the code easier to understand, maintain, and evolve. 
  • Flexibility and Scalability:
    CQRS allows for more flexible and scalable designs, especially when dealing with complex business logic or high data volumes. 
  • Eventual Consistency:
    CQRS often pairs with Event Sourcing, where changes are recorded as a stream of events. This can lead to eventual consistency, as read models might be updated asynchronously based on these events. 
Example:
Consider a social media platform:
  • Write Operations (Commands): Creating a new post, updating a user's profile, deleting a comment. 
  • Read Operations (Queries): Retrieving a user's feed, fetching a user's profile information, listing comments on a post. 
By using CQRS, the read and write operations can be handled by separate microservices, potentially using different databases optimized for their respective tasks. For instance, the write service might use a relational database for transactions, while the read service might use a NoSQL database optimized for fast queries. 
CQRS and Event Sourcing:
CQRS is often implemented alongside Event Sourcing, where the state of the system is represented by a sequence of events rather than the current state. This allows for: 
  • Full Audit Trail:
    Every change to the system is recorded as an event, providing a complete history of the system's state. 
  • Time-Travel Debugging:
    The ability to reconstruct the system's state at any point in time by replaying the events. 
  • Flexibility in Data Projections:
    Read models can be built by subscribing to events and projecting them in different ways, allowing for various views of the data. 
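A tiny C# sketch makes the event-sourcing ideas concrete. The bank-account events are an assumed example domain; the "balance" read model is just a fold over the event stream, and replaying a prefix of the stream is the time-travel property described above:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// The event stream is the source of truth; state is derived from it.
var events = new List<(string Type, int Amount)>
{
    ("Deposited", 100),
    ("Withdrawn", 30),
    ("Deposited", 50),
};

// A read-model projection: replay every event to rebuild current state.
int Balance(IEnumerable<(string Type, int Amount)> stream) =>
    stream.Sum(e => e.Type == "Deposited" ? e.Amount : -e.Amount);

Console.WriteLine(Balance(events));         // prints: 120
Console.WriteLine(Balance(events.Take(2))); // prints: 70 (state after the first two events)
```

Other projections (e.g. a per-day deposit total) can be built from the same stream without touching the write side, which is the "flexibility in data projections" benefit.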
In summary, CQRS is a powerful pattern for building robust and scalable microservices, particularly when dealing with complex data models and high read/write loads. It offers significant benefits in terms of performance, scalability, and maintainability, and is often combined with Event Sourcing to provide a comprehensive solution for managing data in microservice architectures. 


Rate Limiting

Rate limiting in .NET Core APIs can be effectively implemented using the built-in System.Threading.RateLimiting library and the Polly library. The System.Threading.RateLimiting library provides the core rate limiting functionality, while Polly can be used to enhance resilience by adding features like retries and circuit breakers around the rate-limited operations. 
Here's a breakdown of how to use these libraries for rate limiting:
1. System.Threading.RateLimiting:
  • Install the NuGet package if your project does not already reference it (ASP.NET Core 7+ brings the rate-limiting middleware in with the framework): Install-Package System.Threading.RateLimiting
  • Configure rate limiting policies:
    • Fixed Window: Limits requests within a fixed time window.
    • Sliding Window: Uses a moving time window to track requests.
    • Token Bucket: Emulates a bucket of tokens; requests consume tokens, and tokens are added at a fixed rate.
    • Concurrency: Caps how many requests execute at the same time, rather than their rate.
  • Apply policies to endpoints: Use middleware to enforce the rate limiting rules.
  • Handle rate limit exceeded: Return appropriate HTTP status codes (e.g., 429 Too Many Requests) and potentially provide information about the rate limit. 
Example (Fixed Window):

```csharp
using System.Threading.RateLimiting;
using Microsoft.AspNetCore.RateLimiting;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
builder.Services.AddRateLimiter(options =>
{
    // Named policy: at most 10 requests per 10-second window,
    // with up to 2 extra requests queued (oldest first).
    options.AddFixedWindowLimiter("fixed", limiterOptions =>
    {
        limiterOptions.PermitLimit = 10;
        limiterOptions.Window = TimeSpan.FromSeconds(10);
        limiterOptions.QueueProcessingOrder = QueueProcessingOrder.OldestFirst;
        limiterOptions.QueueLimit = 2;
    });
});

var app = builder.Build();

app.UseRateLimiter();

// A named policy only takes effect once attached to endpoints.
app.MapControllers().RequireRateLimiting("fixed");

app.Run();
```
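The limiters can also be used directly, outside the middleware. The sketch below exercises a token bucket from System.Threading.RateLimiting; the capacity and refill numbers are arbitrary, and AutoReplenishment is switched off so the output is deterministic (normally you would leave it on and let the limiter refill on a timer):

```csharp
using System;
using System.Threading.RateLimiting;

// Token bucket: capacity 4, refills 2 tokens per period.
var limiter = new TokenBucketRateLimiter(new TokenBucketRateLimiterOptions
{
    TokenLimit = 4,
    TokensPerPeriod = 2,
    ReplenishmentPeriod = TimeSpan.FromSeconds(1),
    QueueProcessingOrder = QueueProcessingOrder.OldestFirst,
    QueueLimit = 0,
    AutoReplenishment = false // manual refill keeps this demo deterministic
});

for (int i = 1; i <= 6; i++)
{
    using RateLimitLease lease = limiter.AttemptAcquire(1);
    Console.WriteLine($"request {i}: {(lease.IsAcquired ? "allowed" : "throttled")}");
}
// requests 1-4 are allowed; 5-6 are throttled because the bucket is empty

limiter.TryReplenish(); // manually adds TokensPerPeriod (2) tokens
```

In the middleware this denied-lease case is what surfaces to clients as HTTP 429.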
2. Polly:
  • Install the NuGet package: Install-Package Polly
  • Where rate limiting lives depends on the Polly version: Polly v7 (7.2+) ships it in the core package under the Polly.RateLimit namespace, while Polly v8 moved it to the separate Polly.RateLimiting package built on System.Threading.RateLimiting.
  • Use Polly's rate limiter: wrap rate-limited operations with Policy.RateLimit / Policy.RateLimitAsync; a call over the limit throws RateLimitRejectedException.
  • Combine with other Polly strategies: add retry logic, circuit breakers, etc. 
Example (Polly rate limit on an outbound HttpClient). The snippet assumes Polly v7 and the Microsoft.Extensions.Http.Polly package (which supplies AddPolicyHandler); "myClient" and the base address are placeholders:

```csharp
using Polly;
using Polly.RateLimit;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();

// Polly v7: allow at most 10 outbound calls per 10 seconds.
// Calls beyond the limit throw RateLimitRejectedException.
var rateLimitPolicy = Policy.RateLimitAsync<HttpResponseMessage>(10, TimeSpan.FromSeconds(10));

builder.Services.AddHttpClient("myClient", client =>
    {
        client.BaseAddress = new Uri("https://api.example.com");
    })
    .AddPolicyHandler(rateLimitPolicy);

var app = builder.Build();

app.MapControllers();
app.Run();
```

Note the division of labor: the ASP.NET Core middleware limits inbound requests to your API, while the Polly policy throttles your own outbound calls to a dependency.
Key considerations:

  • Partitioning: Rate limits can be partitioned based on client IP address, user identity, API keys, etc. 
  • Concurrency: Ensure proper handling of concurrent requests within the rate limit. 
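Partitioning can be demonstrated with PartitionedRateLimiter from System.Threading.RateLimiting: each key (an API key, user id, or client IP) gets its own independent bucket. The client names and limits below are illustrative, and AutoReplenishment is off so the demo is deterministic:

```csharp
using System;
using System.Threading.RateLimiting;

// One independent fixed-window bucket per client key.
var limiter = PartitionedRateLimiter.Create<string, string>(clientKey =>
    RateLimitPartition.GetFixedWindowLimiter(
        partitionKey: clientKey,
        factory: _ => new FixedWindowRateLimiterOptions
        {
            PermitLimit = 2,
            Window = TimeSpan.FromMinutes(1),
            AutoReplenishment = false // never refill, so the demo is deterministic
        }));

bool Allowed(string client)
{
    using var lease = limiter.AttemptAcquire(client);
    return lease.IsAcquired;
}

Console.WriteLine(Allowed("alice")); // True
Console.WriteLine(Allowed("alice")); // True
Console.WriteLine(Allowed("alice")); // False (alice's window is exhausted)
Console.WriteLine(Allowed("bob"));   // True  (bob has his own bucket)
```

In the ASP.NET Core middleware the same idea is expressed by setting a GlobalLimiter built from a partition key taken off the HttpContext (e.g. the remote IP).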

Can an event be passed as a parameter?

In C#, no: outside its declaring class an event exposes only += and -=, so it cannot be passed as an argument or invoked directly. What can be passed around is the underlying delegate type (e.g. EventHandler or Action<T>), which is how handlers are normally handed to other methods.