Turbocharging ASP.NET Core applications: A deep dive into performance optimizations


Performance is paramount when developing web applications. A slow, unresponsive application results in a poor user experience, lost users, and potentially lost business. For ASP.NET Core developers, there are many techniques and best practices for optimizing application performance. Let’s explore some of these approaches in this article.

Understanding performance bottlenecks

When we talk about performance, the first thing to ask is “Where is the issue?” Without understanding where the bottlenecks are, we could end up optimizing parts of our application that won’t have a significant impact on overall performance.

Analyzing your application

There are many tools and techniques to identify performance bottlenecks in an ASP.NET Core application:

Logging and metrics

One of the simplest approaches is to add logging and metrics to your application. You can measure how long operations take and log any issues that occur.

ASP.NET Core supports a logging API that works with a variety of built-in and third-party logging providers. You can configure the built-in providers to write logs to the console, the debug output, and Event Tracing for Windows (ETW).

Here’s an example of how you can use the ILogger service to log the execution time of a method:

public class MyController : Controller
{
    private readonly ILogger _logger;
    public MyController(ILogger<MyController> logger)
    {
        _logger = logger;
    }

    public IActionResult Index()
    {
        var watch = Stopwatch.StartNew();
        
        // Code to measure goes here...
        
        watch.Stop();
        var elapsedMs = watch.ElapsedMilliseconds;
        _logger.LogInformation("Index method took {ElapsedMilliseconds}ms", elapsedMs);

        return View();
    }
}

Profiling

A more advanced way to identify performance bottlenecks is to use a profiler. A profiler is a tool that monitors the execution of an application, recording things like memory allocation, CPU usage, and other metrics.

There are many profilers available, including:

  • Visual Studio Performance Profiler: This built-in profiler in Visual Studio offers a suite of tools for collecting performance data in your applications. It can help you understand CPU usage, memory consumption, and thread contention in your application.
  • DotTrace: A .NET Performance Profiler from JetBrains, the makers of ReSharper and Rider. It offers a lot of advanced features for profiling the performance of .NET applications.
  • Prefix by Stackify: A lightweight, free profiler that provides real-time, code-level context for performance data, right in your development workflow.

Application Performance Management (APM) tools

Application Performance Management (APM) tools go a step further, providing in-depth, real-time insights into an application’s performance, availability, and user experience. APM tools can identify performance bottlenecks in real-world scenarios, not just in development and testing.

  • Application Insights: A feature of Azure Monitor, this extensible Application Performance Management (APM) service for developers and DevOps professionals monitors your live application to detect performance anomalies and includes powerful analytics tools to diagnose issues and understand what users do with your app.
  • New Relic: A powerful APM tool that offers real-time monitoring, custom dashboards, alerting, and integrations with popular DevOps tools.

1. Use asynchronous programming

Understanding asynchronous programming

Asynchronous programming is a way to improve the overall throughput of your application on a single machine. It works by freeing up a thread while waiting for some IO-bound operation (such as a call to an external service or a database) to complete, rather than blocking the thread until the operation is done. When the operation is complete, the framework automatically assigns a thread to continue the execution.

The result is that your application can handle more requests with the same number of threads, as those threads can be used to serve other requests while waiting for IO-bound operations to complete.

Asynchronous programming in ASP.NET Core

ASP.NET Core is built from the ground up to support asynchronous programming. The framework and its underlying I/O libraries are asynchronous to provide maximum performance.

Here’s how you might write an asynchronous action in an ASP.NET Core controller:

public async Task<IActionResult> Get()
{
    var data = await _myService.GetDataAsync();
    return Ok(data);
}

In this example, GetDataAsync might be making a call to a database or an external service. By awaiting this method, the thread executing this action can be freed up to handle another request.

When to Use async programming

  1. I/O-bound operations: If your application is performing operations that are I/O-bound, such as network requests, database queries, or file reads and writes, these operations can often be performed asynchronously. This allows your application to continue doing other work while waiting for the I/O operation to complete, improving responsiveness.
  2. Scalability: If you’re building a server-side application, using async programming can help improve the scalability of your application. By freeing up threads to handle other requests while waiting for I/O operations to complete, you can increase the number of requests your application can handle concurrently.
  3. UI responsiveness: In client-side applications (like a WPF or WinForms app), async programming can help keep the user interface responsive. Long-running operations can be performed asynchronously to avoid blocking the UI thread, preventing the application from becoming unresponsive.

When not to use async programming

  1. CPU-bound operations: If your application is performing CPU-bound operations, such as complex calculations or data processing, making these operations async won’t provide any benefit and may even degrade performance. This is because the operation still needs to be performed by the CPU, and making it async just adds additional overhead.
  2. Simple, fast operations: If the operations your application is performing are simple and complete quickly, there may be little to no benefit to making them async. The overhead of switching contexts may outweigh the benefits.
  3. Sequential operations: If you have operations that need to be performed in a specific order, and each operation depends on the completion of the previous one, using async programming might complicate your code without providing much benefit. In such cases, it may be simpler and more efficient to perform the operations synchronously.
  4. Potential for deadlocks: In certain situations, using async can lead to deadlocks, especially if you’re working with legacy code or libraries that are not async-friendly. Always be cautious when mixing async and blocking code.
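
To illustrate the deadlock point, here is a sketch of the classic sync-over-async anti-pattern next to the preferred form. The controller and `IReportService` are hypothetical names, not from the original article:

```csharp
public class ReportController : Controller
{
    private readonly IReportService _reportService; // hypothetical service

    public ReportController(IReportService reportService)
    {
        _reportService = reportService;
    }

    // Anti-pattern: blocking on an async method with .Result.
    // In frameworks with a synchronization context (classic ASP.NET, UI apps)
    // this can deadlock; even in ASP.NET Core, which has no context,
    // it still ties up a thread pool thread for the duration of the call.
    public IActionResult Bad()
    {
        var report = _reportService.BuildReportAsync().Result;
        return Ok(report);
    }

    // Preferred: stay asynchronous all the way up the call stack.
    public async Task<IActionResult> Good()
    {
        var report = await _reportService.BuildReportAsync();
        return Ok(report);
    }
}
```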

Things to remember when using asynchronous programming

  • Use async all the way down: Asynchronous code is contagious. If you start using async, you’ll need to use it all the way down through your call stack. If you don’t, you’ll end up blocking threads and negating the benefits of async.
  • Return a Task: Asynchronous methods in .NET return a Task or a ValueTask. This represents the ongoing work and can be awaited.
  • Avoid async void: async void should only be used for event handlers. If you use async void in other cases, exceptions might be thrown that you can't catch, and the caller can't await the method's completion.
  • Understand the synchronization context: The synchronization context is a concept that ASP.NET Core doesn’t use by default (unlike previous versions of ASP.NET). This means you don’t need to capture the context when you’re switching back to the UI thread (as you might do in a desktop application). This makes writing async code in ASP.NET Core simpler.

Here’s an example of how you might use async in a service that calls Entity Framework Core:

public class MyService : IMyService
{
    private readonly MyDbContext _context;
    
    public MyService(MyDbContext context)
    {
        _context = context;
    }
    
    public async Task<MyData> GetDataAsync()
    {
        return await _context.MyData
            .OrderBy(d => d.Created)
            .FirstOrDefaultAsync();
    }
}

2. Caching

Caching is an effective way to boost the performance of your ASP.NET Core applications. The basic idea is simple: instead of executing a time-consuming operation (like a complex database query) every time you need the result, execute it once, cache the result, and then just retrieve the cached result whenever you need it.

ASP.NET Core provides several built-in ways to cache data:

2.1 In-Memory caching

In-memory caching is the simplest form of caching. It stores data in the memory of the web server as key-value pairs, where the value can be any object. Because the data lives in the same process, access to the in-memory cache is extremely fast, making it an efficient way to store data that’s accessed frequently.

One thing to note about in-memory caching is that the cache data is not shared across multiple instances of the application. If you run your application on multiple servers, or if you use a process-per-request model, then the in-memory cache will be separate for each instance or process.

When to use In-Memory caching?

In-memory caching can be an effective way to improve the performance of your application in the following scenarios:

  1. Data is accessed frequently: If the same data is read often, caching it avoids the overhead of retrieving it from the original source on every access.
  2. Data changes infrequently: If the data changes frequently, then the cached data will frequently be stale, and you’ll need to refresh the cache regularly. If the data changes infrequently, then the cached data is likely to be fresh most of the time.
  3. Data retrieval is expensive: If retrieving the data from the original source is time-consuming or resource-intensive, then caching can significantly improve performance by avoiding this overhead.

Here’s an example of how you might use in-memory caching in an ASP.NET Core controller:

public class MyController : Controller
{
    private IMemoryCache _cache;

    public MyController(IMemoryCache cache)
    {
        _cache = cache;
    }

    public IActionResult Index()
    {
        string cacheEntry;

        if (!_cache.TryGetValue("_MyKey", out cacheEntry)) // Look for cache key.
        {
            // Key not in cache, so get data.
            cacheEntry = GetMyData();

            // Set cache options.
            var cacheEntryOptions = new MemoryCacheEntryOptions()
                // Keep in cache for this time, reset time if accessed.
                .SetSlidingExpiration(TimeSpan.FromMinutes(2));

            // Save data in cache.
            _cache.Set("_MyKey", cacheEntry, cacheEntryOptions);
        }

        return View(cacheEntry);
    }
    
    private string GetMyData()
    {
        // Simulating a time-consuming operation
        Thread.Sleep(2000);
        return "Hello, world!";
    }
}

In this example, the GetMyData method simulates a time-consuming operation. This could be a complex database query, a call to an external service, or any operation that takes time to execute. By caching the result, we avoid the need to execute this operation every time the Index action is called.
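
As a side note, the TryGetValue/Set pattern can often be expressed more compactly with the GetOrCreate extension method on IMemoryCache. A minimal sketch, using the same hypothetical key and helper as above:

```csharp
public IActionResult Index()
{
    // GetOrCreate looks up the key and, on a cache miss, runs the
    // factory delegate, stores its result with the configured options,
    // and returns it.
    var cacheEntry = _cache.GetOrCreate("_MyKey", entry =>
    {
        entry.SetSlidingExpiration(TimeSpan.FromMinutes(2));
        return GetMyData();
    });

    return View(cacheEntry);
}
```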

2.2 Distributed caching

Distributed caching involves using a cache that’s shared by multiple instances of an application. ASP.NET Core supports several distributed cache stores, including SQL Server, Redis, and NCache.

When using a distributed cache, an instance of your application can read and write data to the cache. Other instances can then read this data from the cache, even if they’re running on different servers.

When to use distributed caching?

You should consider using distributed caching in the following scenarios:

  1. Web farm or cloud hosting environments: If your application is hosted in a web farm or a cloud hosting environment where multiple instances of the application are running on different servers, then you can use distributed caching to share cache data across all instances.
  2. Data is expensive to recreate: If the data you’re caching is expensive to recreate, and it’s accessed frequently, then it makes sense to cache it in a distributed cache so that it’s available to all instances of your application.
  3. High availability and durability: Some distributed cache stores like Redis offer replication and persistence features. This means your cache data can survive a restart of the cache service or even a complete server failure.
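
As a sketch of what this looks like in code, here is how you might register a Redis-backed distributed cache and use IDistributedCache in a controller. This assumes the Microsoft.Extensions.Caching.StackExchangeRedis package and a Redis instance at the placeholder address:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Assumes the Microsoft.Extensions.Caching.StackExchangeRedis package.
    services.AddStackExchangeRedisCache(options =>
    {
        options.Configuration = "localhost:6379"; // placeholder Redis endpoint
        options.InstanceName = "MyApp_";          // key prefix for this app
    });
}

public class MyController : Controller
{
    private readonly IDistributedCache _cache;

    public MyController(IDistributedCache cache)
    {
        _cache = cache;
    }

    public async Task<IActionResult> Index()
    {
        // IDistributedCache stores string/byte[] values, so complex
        // objects are typically serialized (e.g. to JSON) first.
        var cached = await _cache.GetStringAsync("_MyKey");
        if (cached == null)
        {
            cached = GetMyData();
            await _cache.SetStringAsync("_MyKey", cached,
                new DistributedCacheEntryOptions
                {
                    SlidingExpiration = TimeSpan.FromMinutes(2)
                });
        }
        // Cast to object so the model overload of View is used,
        // not the view-name overload.
        return View((object)cached);
    }

    private string GetMyData() => "Hello, world!";
}
```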

3. Response compression

When we talk about improving the performance of web applications, one area often overlooked is the size of the HTTP responses. Large responses take longer to transmit over the network, and this latency can have a significant impact on performance, especially for clients with slow network connections.

What is response compression?

Response compression is a simple and effective way to reduce the size of HTTP responses, thereby improving the performance of your application. It works by compressing the response data on the server before sending it to the client. The client then decompresses the data before processing it. This process is transparent to the end user.

The most common compression algorithms used for response compression are Gzip and Brotli. They can significantly reduce the size of responses, often by 70% or more.

Using response compression in ASP.NET Core

ASP.NET Core includes middleware for response compression. To enable it, you need to add the middleware to your Startup.ConfigureServices and Startup.Configure methods, like this:

public void ConfigureServices(IServiceCollection services)
{
    services.AddResponseCompression();
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseResponseCompression();

    // Other middleware...
}

By default, the response compression middleware compresses responses for common “compressible” MIME types (such as text, JSON, and XML). You can add additional MIME types if necessary.
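
For example, you might register the Brotli and Gzip providers explicitly and extend the default MIME type list. This is a sketch; the image/svg+xml entry is just an illustration of adding a type that isn’t compressed by default:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddResponseCompression(options =>
    {
        // Brotli is preferred when the client supports it; Gzip is the fallback.
        options.Providers.Add<BrotliCompressionProvider>();
        options.Providers.Add<GzipCompressionProvider>();

        // Extend the default list of compressible MIME types.
        options.MimeTypes = ResponseCompressionDefaults.MimeTypes
            .Concat(new[] { "image/svg+xml" });

        // Compressing HTTPS responses has security implications (e.g. BREACH);
        // enable it only if you understand the trade-off.
        options.EnableForHttps = false;
    });
}
```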

If your application is hosted behind a server such as IIS, you can also enable compression at the server level. Server-based compression can cover cases the middleware doesn’t handle, such as static files served directly by the server.

Things to consider when using response compression

While response compression can improve performance, there are a few things to keep in mind:

  • CPU vs. Bandwidth Trade-off: Compression requires CPU time on the server (to compress the data) and on the client (to decompress it). If your server is already CPU-bound, adding response compression could make performance worse, not better. Conversely, if your clients have powerful CPUs but slow network connections, response compression is likely to improve performance.
  • Secure content: Be aware of security implications when compressing secure (HTTPS) responses. Certain attacks, like the BREACH attack, can take advantage of compressed responses to infer information about the plaintext data.
  • Already compressed responses: Some responses (like JPEG images or certain types of PDF files) are already compressed, so trying to compress them again won’t reduce their size much, if at all, and could even make them larger.

4. Entity Framework Core performance

Entity Framework Core (EF Core) is a powerful Object-Relational Mapper (ORM) that simplifies data access in your .NET applications. However, if used without consideration for its performance behavior, you can end up with an inefficient application. Here are some techniques to improve the performance of your applications that use EF Core:

4.1 Lazy loading vs Eager loading

Lazy loading is a strategy where related data is only loaded from the database when it’s actually accessed. Eager loading, on the other hand, loads the related data as part of the initial query.

While lazy loading can seem convenient, it can result in performance issues due to the N+1 problem, where the application executes an additional query for each entity retrieved. This can result in many round-trips to the database, which increases latency.

Eager loading, where you load all the data you need for a particular operation in one query using the Include method, can often result in more efficient database access. Here's an example:

var orders = _context.Orders
    .Include(order => order.Customer)
    .ToList();

In this example, each Order and its related Customer are loaded in a single query.

4.2 Use AsNoTracking for Read-Only operations

When you query data, EF Core automatically tracks changes to that data. This allows you to update the data and persist those changes back to the database. However, this change tracking requires additional memory and CPU time.

If you’re retrieving data that you don’t need to update, you can use the AsNoTracking method to tell EF Core not to track changes. This can result in significant performance improvements for read-only operations.

var orders = _context.Orders
    .AsNoTracking()
    .ToList();

4.3 Batch operations

EF Core batches multiple Create, Update, and Delete operations into a single round-trip to the database when you call SaveChanges. This can significantly improve performance when modifying multiple entities.

_context.Orders.AddRange(orders);
await _context.SaveChangesAsync();

In this example, all the new orders are sent to the database in a single command, rather than one command per order.

4.4 Filter data on the server side

Try to filter data at the database level rather than in-memory to reduce the amount of data transferred and memory used. Use LINQ to create a query that the database can execute, rather than filtering the data after it’s been retrieved.

var orders = _context.Orders
    .Where(order => order.Date >= DateTime.UtcNow.AddDays(-7))
    .ToList();

In this example, only the orders from the last seven days are retrieved from the database.

4.5 Avoid Select N+1

The Select N+1 issue is a common performance problem where an application executes N additional SQL queries to fetch data that could have been retrieved in a single query. EF Core’s Include and ThenInclude methods can be used to resolve these issues.

var orders = _context.Orders
    .Include(order => order.Customer)
    .ThenInclude(customer => customer.Address)
    .ToList();

This query retrieves all orders, their related Customers, and the Addresses of the Customers in a single query.

4.6 Connection pooling

When your application needs to interact with a database, it opens a connection to the database, performs the operation, and then closes the connection. Opening and closing database connections are resource-intensive operations and can take a significant amount of time.

Connection pooling is a technique that can help mitigate this overhead. It works by keeping a pool of active database connections. When your application needs to interact with the database, it borrows a connection from the pool, performs the operation, and then returns the connection to the pool. This way, the overhead of opening and closing connections is incurred less frequently.

Connection pooling is automatically handled by the .NET Core data providers. For example, if you are using SQL Server, the SqlConnection object automatically pools connections for you.

When you create a new SqlConnection and call Open, it checks whether there's an available connection in the pool. If there is, it uses that connection. If not, it opens a new connection. When you call Close on the SqlConnection, the connection is returned to the pool, ready to be used again.

You can control the behavior of the connection pool using the connection string. For example, you can set the Max Pool Size and Min Pool Size options to control the size of the pool.
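
As a sketch, a SQL Server connection string with explicit pool settings might look like this; the server, database, and size values are placeholders:

```csharp
// The pool is keyed by the exact connection string, so these settings
// apply to every connection opened with this string.
var connectionString =
    "Server=myServer;Database=myDatabase;Integrated Security=true;" +
    "Min Pool Size=5;Max Pool Size=100;";

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();   // borrows a connection from the pool (or opens a new one)
    // ... execute commands ...
}                        // Dispose/Close returns the connection to the pool
```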

Conclusion

Optimizing the performance of your ASP.NET Core applications can be a challenging task, especially when you’re dealing with complex, data-rich applications. However, with the right strategies and tools at your disposal, it’s a task that’s well within your reach.

In this article, we’ve explored several key strategies for performance optimization, including understanding performance bottlenecks, leveraging asynchronous programming, utilizing different types of caching, compressing responses, optimizing Entity Framework Core usage, and taking advantage of connection pooling.

The key takeaway here is that performance optimization is not a one-time event, but a continuous process that involves monitoring, analysis, and iterative improvement. Always be on the lookout for potential bottlenecks, and remember that sometimes the smallest changes can have the biggest impact.

Moreover, while we focused on ASP.NET Core, many of these principles and techniques apply to web development in general. So, even if you’re working in a different framework or language, don’t hesitate to apply these strategies. The ultimate goal of performance optimization is to provide a smooth, seamless experience for your users.

Happy coding, and here’s to fast, efficient applications!
