Jan 29

Top 10 .NET Core performance tricks


Performance tricks & best practices, beginner friendly


Hey there! Today, I’m sharing some straightforward ways to speed up your .NET Core apps. No fancy words, just simple stuff that works.

Let’s dive in!

• • •

1. Asynchronous programming

Asynchronous programming in .NET Core is a powerful way to enhance the scalability and responsiveness of your applications.

It allows your program to handle other tasks while waiting for operations, like I/O processes, to complete.

When to use asynchronous programming:

  • I/O bound operations: Use async programming for operations that involve data access, file reading/writing, network calls, etc., where the program waits for an external process to complete.
  • UI responsiveness: In desktop applications, use async to keep the UI responsive during long-running tasks.
  • Scalability in web applications: Use async in web applications to handle more requests by freeing up threads while waiting for database queries or API calls.

Best practices:

  • Use ConfigureAwait(false) when awaiting tasks in a library to avoid deadlocks.
  • Always catch exceptions in asynchronous methods to prevent unhandled exceptions.
  • When writing asynchronous APIs, expose asynchronous methods alongside synchronous ones, if possible.

Code example:

public async Task<ActionResult> GetUserData()
{
    var userData = await _userService.GetUserDataAsync();
    return View(userData);
}

public async Task<string> ReadFileContentAsync(string filePath)
{
    using (var reader = new StreamReader(filePath))
    {
        return await reader.ReadToEndAsync();
    }
}

When not to use asynchronous programming:

  • CPU-bound Operations: If the task involves intensive computation, async programming might not bring benefits and can complicate your code.
  • Simple Synchronous Tasks: For operations that complete quickly and are not blocking, asynchronous programming can be overkill.

Common mistakes:

  • Blocking on Async Code: Avoid calling .Result or .Wait() on an async method, as it can lead to deadlocks.
  • Overusing async void: Prefer async Task over async void for better error handling and control.
  • Ignoring the Returned Task: When you call an async method, ensure that you handle its Task properly, either by await-ing it or managing it as a part of a larger asynchronous workflow.
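As a hedged sketch (the method names below are hypothetical), this is what blocking on async code versus staying asynchronous looks like:

```csharp
using System;
using System.Threading.Tasks;

public static class AsyncDemo
{
    // Hypothetical I/O-bound operation, simulated with Task.Delay.
    public static async Task<string> FetchDataAsync()
    {
        await Task.Delay(10); // stand-in for a database or network call
        return "data";
    }

    // Risky: .Result blocks the calling thread and can deadlock when a
    // synchronization context is present (UI apps, classic ASP.NET).
    public static string FetchBlocking() => FetchDataAsync().Result;

    // Preferred: await all the way up the call chain.
    public static async Task<string> FetchAsync() => await FetchDataAsync();
}
```

In a console app both variants complete, but only the awaited version stays safe once a synchronization context enters the picture.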

• • •

2. Optimize data access

In .NET Core, optimizing data access is crucial for building efficient applications, especially when dealing with databases or external data sources.

It involves strategies to reduce database load, minimize network traffic, and speed up data retrieval.

When to optimize data access:

  • Large datasets: Optimize when working with large volumes of data to reduce memory consumption and improve load times.
  • Frequent database interactions: Apply optimization for applications that frequently interact with a database.
  • Performance-critical applications: Essential for applications where speed and efficiency are critical, like real-time data processing systems.

Code example:

var customers = dbContext.Customers
                  .Where(c => c.IsActive)
                  .Select(c => new { c.Name, c.Email });

When not to use data access optimization:

  • Small datasets: For small datasets, where the performance impact is negligible.
  • Simple CRUD operations: Basic operations on a single table with a small number of records might not need optimization.

Common mistakes:

  • Over-fetching data: Retrieving more data than needed can slow down your application.
  • N+1 query problem: This occurs when your code executes one query to fetch a set of records and then additional queries for each record.
  • Ignoring query execution plans: Not reviewing how queries are executed can lead to missed optimization opportunities.
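To make the N+1 problem concrete without a real database, here is a minimal in-memory sketch where a counter stands in for round-trips (all names are hypothetical; with EF Core you would eager-load with Include instead):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Fake data source that counts how many "queries" are issued.
public class FakeDb
{
    public int QueryCount;

    public List<int> CustomerIds()
    {
        QueryCount++;
        return new List<int> { 1, 2, 3 };
    }

    public List<string> OrdersFor(int customerId)
    {
        QueryCount++;
        return new List<string> { $"order-for-{customerId}" };
    }

    public List<(int CustomerId, string Order)> AllOrders()
    {
        QueryCount++;
        return new List<(int, string)> { (1, "order-for-1"), (2, "order-for-2"), (3, "order-for-3") };
    }
}

public static class NPlusOneDemo
{
    // N+1: one query for the customers, then one more per customer.
    public static int Naive(FakeDb db)
    {
        foreach (var id in db.CustomerIds())
            db.OrdersFor(id);
        return db.QueryCount; // 1 + N
    }

    // Fix: fetch everything in two queries and join in memory.
    public static int Batched(FakeDb db)
    {
        var ids = db.CustomerIds();
        var ordersByCustomer = db.AllOrders().ToLookup(o => o.CustomerId);
        return db.QueryCount; // 2, regardless of how many customers exist
    }
}
```

The naive version issues 1 + N queries while the batched version always issues two, which is why the gap widens as the dataset grows.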

• • •

3. Utilize caching

Caching is an efficient way to store and retrieve data quickly.

In .NET Core applications, caching can significantly improve performance by reducing the need to repeatedly access slow resources like databases or external services.

When to use caching:

  • Frequently accessed data: Use caching for data that is requested often and doesn’t change frequently. This includes static configuration data, user session information, or frequently accessed database queries.
  • Read-heavy workloads: In scenarios where your application reads data more often than it writes or updates it, caching can greatly reduce the load on your data source and speed up response times.

Code example:

public class MemoryCacheService
{
    private readonly IMemoryCache _memoryCache;

    public MemoryCacheService(IMemoryCache memoryCache)
    {
        _memoryCache = memoryCache;
    }

    public T GetOrCreate<T>(object key, Func<T> createItem)
    {
        if (!_memoryCache.TryGetValue(key, out T cacheEntry))
        {
            cacheEntry = createItem();
            _memoryCache.Set(key, cacheEntry);
        }
        return cacheEntry;
    }
}
In this example, the GetOrCreate method checks whether the item is already in the cache. If not, it uses the provided createItem function to retrieve the data and adds it to the cache.

When not to use caching:

  • Rapidly changing data: Avoid caching data that changes frequently, as the main benefit of caching is reduced resource access for unchanging data.
  • Large data sets: Caching very large data sets can consume significant memory resources. Be cautious and only cache data that provides performance benefits.

Common mistakes:

  • Overcaching: Caching too much data can lead to increased memory usage and potential memory leaks. Only cache what you need.
  • Ignoring cache expiration: Not setting an expiration policy for cache items can lead to outdated data being served. Use absolute or sliding expiration policies wisely.
  • Not handling cache misses: Always code for scenarios where the data is not found in the cache, and ensure that it can be retrieved from the original source and then cached for future use.
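The GetOrCreate helper above can address the expiration concern by passing MemoryCacheEntryOptions when storing the entry; a sketch (the time spans below are arbitrary examples, using IMemoryCache from Microsoft.Extensions.Caching.Memory):

```csharp
// Same helper as above, extended so stale entries are evicted.
public T GetOrCreate<T>(object key, Func<T> createItem)
{
    if (!_memoryCache.TryGetValue(key, out T cacheEntry))
    {
        cacheEntry = createItem();

        var options = new MemoryCacheEntryOptions()
            .SetAbsoluteExpiration(TimeSpan.FromMinutes(30)) // hard upper bound on lifetime
            .SetSlidingExpiration(TimeSpan.FromMinutes(5));  // evict if unused for 5 minutes

        _memoryCache.Set(key, cacheEntry, options);
    }
    return cacheEntry;
}
```

Sliding expiration keeps hot entries alive while the absolute expiration guarantees that even frequently read data is eventually refreshed.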

• • •

4. Reduce memory allocations

In .NET Core, memory allocation refers to the process of reserving memory space for variables and objects during the runtime of an application.

Efficient memory allocation is crucial for enhancing performance, especially in high-load scenarios.

When to focus on reducing memory allocations:

  • High-performance applications: In scenarios where performance is critical, such as high-frequency trading systems or real-time data processing applications.
  • Large scale applications: When dealing with large scale applications, where small inefficiencies can scale up to significant resource consumption.
  • Long running processes: In applications that run for extended periods, such as web servers or background services, to prevent gradual increase in memory usage (memory leaks).

Strategies for reducing memory allocations:

  • Use value types where appropriate: Value types (like structs in C#) are allocated on the stack and can be more efficient than reference types (like classes) for small, immutable data.
  • Pooling resources: Utilize object pooling for frequently used objects to avoid the overhead of repeatedly creating and destroying instances.
  • StringBuilder for string concatenation: Use StringBuilder when concatenating strings in loops or iterative processes.

Code example:

var builder = new StringBuilder();
for (int i = 0; i < 100; i++)
{
    builder.Append(i); // append inside the loop instead of concatenating strings
}
string result = builder.ToString();

When not to over-optimize:

  • Premature optimization: Avoid optimizing memory allocation at the expense of code readability and maintainability, especially when dealing with non-performance-critical code paths.
  • Small or short-lived applications: For applications with short runtimes or low resource requirements, the benefits of optimization may not justify the effort.

Common mistakes:

  • Misusing object pools: Implementing object pooling where it’s not needed can complicate the code without substantial benefits.
  • Overusing value types: Excessively using value types, especially large structs, can lead to performance issues, as value types are copied by value, not by reference, which can be costly.
  • Ignoring garbage collector performance: Overlooking the impact of your memory usage patterns on the garbage collector can lead to suboptimal performance. It’s important to understand how garbage collection works in .NET Core to write memory-efficient code.
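As a minimal sketch of the pooling strategy above, ArrayPool<T> from System.Buffers rents and returns buffers instead of allocating a fresh array on every call (the method here is a made-up example):

```csharp
using System;
using System.Buffers;

public static class BufferExample
{
    public static int SumChunk(int count)
    {
        // Rent may return a buffer larger than requested; only use the first `count` slots.
        int[] buffer = ArrayPool<int>.Shared.Rent(count);
        try
        {
            for (int i = 0; i < count; i++) buffer[i] = i;

            int sum = 0;
            for (int i = 0; i < count; i++) sum += buffer[i];
            return sum;
        }
        finally
        {
            // Always return rented buffers, or the pool degrades to plain allocation.
            ArrayPool<int>.Shared.Return(buffer);
        }
    }
}
```

Pooling pays off in hot paths that would otherwise allocate large, short-lived arrays on every request.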

• • •

5. Implement efficient logging

Logging is a crucial aspect of any application for monitoring its behavior, diagnosing issues, and understanding its performance characteristics.

Efficient logging strikes a balance between capturing sufficient detail and avoiding excessive data that can overwhelm the system and make logs hard to use.

When to implement logging:

  • Error tracking: Log errors and exceptions to diagnose and fix issues.
  • User activity monitoring: For tracking user actions, especially in areas critical for security and auditing.
  • Performance metrics: To monitor application performance and identify potential bottlenecks.
  • Debugging: During development and testing phases to trace code execution and spot anomalies.

When to limit logging:

  • High-performance scenarios: In performance-critical code paths, minimize logging to avoid performance degradation.
  • Sensitive information: Avoid logging sensitive information such as passwords or personal user data to ensure security compliance.

Code example:

public class MyService
{
    private readonly ILogger _logger;

    public MyService(ILogger<MyService> logger)
    {
        _logger = logger;
    }

    public void ProcessData()
    {
        try
        {
            // Processing logic
            _logger.LogInformation("Data processed successfully.");
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "Error processing data");
        }
    }
}

Common mistakes:

  • Overlogging: Logging too much information can lead to large log files, making it difficult to find relevant information and potentially impacting application performance.
  • Inconsistent logging levels: Not using appropriate logging levels (e.g., Info, Debug, Error) can make it challenging to filter logs for relevant information.
  • Ignoring structured logging: Failing to use structured logging can make automated analysis and querying of log data more difficult.

Best practices:

  • Use structured logging: This allows for easier filtering and querying of log data. For example, use logging frameworks like Serilog for .NET Core.
  • Centralize logs: Use tools like ELK Stack (Elasticsearch, Logstash, Kibana) or Splunk for centralizing and analyzing logs from multiple sources.
  • Asynchronous logging: Consider using asynchronous logging to minimize the impact on application performance.
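To illustrate the structured logging point with the ILogger shown earlier, named placeholders in the message template are captured as queryable properties, unlike string interpolation (the method and parameters below are hypothetical):

```csharp
public void ProcessOrder(int orderId, string userName)
{
    // Avoid: interpolation bakes the values into one opaque string.
    _logger.LogInformation($"Processed order {orderId} for {userName}");

    // Prefer: OrderId and UserName become structured fields that log
    // sinks such as Serilog or Seq can filter and query on.
    _logger.LogInformation("Processed order {OrderId} for {UserName}", orderId, userName);
}
```

Both lines produce similar console text, but only the second preserves the values as machine-readable data.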

• • •

6. Use response compression

Response compression in .NET Core is a technique used to reduce the size of HTTP responses sent from a server to a client.

By compressing responses, you can improve your application's load times and reduce bandwidth usage, which is especially important for high-traffic web applications and services.

public void ConfigureServices(IServiceCollection services)
{
    services.AddResponseCompression(options =>
    {
        options.MimeTypes = ResponseCompressionDefaults.MimeTypes.Concat(new[] { "application/json" });
    });
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseResponseCompression();
    // other middleware
}

When to use response compression:

  • Text-based content: It’s most effective for text-based content like HTML, CSS, and JavaScript, which can often be significantly compressed.
  • API responses: Compress API responses, particularly when returning large JSON or XML data sets.
  • High traffic applications: Use it in scenarios where bandwidth is a constraint, or network performance is a concern.

When not to use response compression:

  • Already compressed content: Avoid compressing binary content such as images or videos, as they are usually already compressed.
  • Small response sizes: Compression may not be beneficial for very small responses, as the overhead of compressing the response might outweigh the benefits.
  • Sensitive data: Be cautious when compressing sensitive information, as it can sometimes make the data more susceptible to security vulnerabilities like BREACH attacks.

Common mistakes:

  • Neglecting compression quality settings: Not configuring the level of compression. Different levels can provide better compression at the cost of CPU usage.
  • Overlooking client compatibility: Assuming all clients support the same compression algorithms. Ensure to check the client’s ‘Accept-Encoding’ header to apply a compatible compression format.
  • Ignoring HTTPS scenarios: While response compression is beneficial, it needs careful consideration when used over HTTPS due to potential security vulnerabilities.
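Tying the mistakes above together, here is a hedged configuration sketch that registers explicit providers and a compression level (builder names may differ in your setup; weigh EnableForHttps against BREACH-style risks):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddResponseCompression(options =>
    {
        options.EnableForHttps = true; // only after assessing BREACH exposure
        options.Providers.Add<BrotliCompressionProvider>();
        options.Providers.Add<GzipCompressionProvider>();
    });

    services.Configure<GzipCompressionProviderOptions>(options =>
    {
        // CompressionLevel.Fastest trades compression ratio for lower CPU cost;
        // CompressionLevel.SmallestSize does the opposite.
        options.Level = System.IO.Compression.CompressionLevel.Fastest;
    });
}
```

The middleware negotiates the algorithm from the client's Accept-Encoding header, so registering both providers covers older clients that lack Brotli support.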

• • •

7. Optimize LINQ queries

LINQ (Language Integrated Query) is a powerful feature in .NET Core that allows developers to query various data sources in a readable and concise way.

However, inefficient LINQ queries can lead to performance bottlenecks, especially with large datasets or complex operations.

When to optimize LINQ queries:

  • Working with large datasets: When querying large datasets, inefficient queries can significantly slow down your application due to excessive memory usage or prolonged execution time.
  • Database operations: While querying databases, especially with Entity Framework Core, ensure queries are optimized to prevent unnecessary data retrieval and to utilize SQL Server optimizations.
  • Performance-critical applications: In scenarios where performance is key, such as real-time data processing or high-load web applications, optimizing LINQ queries is essential.

When not to over-optimize LINQ queries:

  • Simple data processing: For small in-memory collections or simple data processing tasks, extensive LINQ optimization might not yield significant benefits.
  • Readability over performance: Sometimes, a more complex LINQ query can be more readable. If performance isn’t a major concern, prioritize code clarity.

Code example:

var activeCustomers = dbContext.Customers
                               .Where(c => c.IsActive)
                               .Select(c => new { c.Name, c.Email });

This query retrieves only the necessary fields (Name and Email) for active customers, reducing the amount of data transferred and processed.

Common mistakes:

  • Retrieving all data: Using .ToList() or similar methods to retrieve all records from the database before applying filters or transformations.
  • Multiple queries in loops: Executing LINQ queries within loops can lead to multiple database calls, greatly impacting performance.
  • Ignoring deferred execution: Not understanding deferred execution can lead to inefficient query execution and unexpected results.

Best practices:

  • Filter first: Apply Where clauses before other operations to reduce the amount of data processed.
  • Use projection: Use Select to retrieve only the required fields.
  • Avoid premature materialization: Don’t convert queries to lists or arrays until necessary.
  • Understand deferred execution: LINQ queries are not executed until the data is actually needed (like calling .ToList()), which can be used to your advantage for efficiency.
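Deferred execution is easiest to see with an in-memory collection; a small self-contained demonstration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var numbers = new List<int> { 1, 2, 3 };

// The query is only *defined* here; no filtering happens yet.
var evens = numbers.Where(n => n % 2 == 0);

// Mutate the source after defining the query.
numbers.Add(4);

// The filter runs now, at materialization, so it sees the new element.
var result = evens.ToList();
Console.WriteLine(string.Join(", ", result)); // 2, 4
```

Against a database, the same principle means filters composed before materialization are translated into SQL, while filters applied after .ToList() run in application memory.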

• • •

8. Use lightweight objects in APIs

In .NET Core APIs, using lightweight objects, often referred to as Data Transfer Objects (DTOs), is a best practice for optimizing network traffic and improving the performance of web services.

DTOs are simple, serializable objects used to transfer data between different layers or parts of an application, particularly over network calls.

When to use lightweight objects:

  • Data transfer between client and server: Use DTOs when you need to send data from the server to the client or vice versa, especially over the network.
  • Exposing a subset of domain model: When you need to expose only a part of your domain model or aggregate data from various sources into a single object.
  • Performance optimization: In scenarios where minimizing the size of the request and response payloads is critical for performance.

When not to use DTOs:

  • Internal application logic: For internal logic within the server where you need to work with the full domain model, DTOs might not be necessary.
  • Simple CRUD operations: If your API closely mirrors your database schema with simple CRUD (Create, Read, Update, Delete) operations, and there are no security or bandwidth concerns, using your domain models directly might be simpler.

Code example:

public class CustomerDto
{
    public string Name { get; set; }
    public string Email { get; set; }
    // Other relevant data fields
}

// Mapping domain model to DTO
public CustomerDto MapToDto(Customer customer)
{
    return new CustomerDto
    {
        Name = customer.Name,
        Email = customer.Email
        // Map other fields
    };
}

Common mistakes:

  • Exposing sensitive data: Including sensitive data in DTOs that are sent to the client.
  • Overcomplicating DTOs: Making DTOs too complex or including unnecessary data that increases the payload size.
  • Ignoring AutoMapper: Not leveraging object-to-object mapping libraries like AutoMapper, leading to verbose and error-prone mapping code.

Best practices:

  • Use AutoMapper for mapping: Use libraries like AutoMapper to simplify the mapping between domain models and DTOs.
  • Design DTOs based on use-case: Tailor the structure of your DTOs to fit the specific use-case or API endpoint requirements.
  • Validate DTOs: Implement validation on DTOs to ensure the integrity of data being transferred.
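As a sketch of the validation point, DataAnnotations attributes plus Validator.TryValidateObject (both in the base class libraries) can check a DTO before it crosses a boundary; the attributes chosen here are illustrative:

```csharp
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class CustomerDto
{
    [Required]
    public string Name { get; set; }

    [Required, EmailAddress]
    public string Email { get; set; }
}

public static class DtoValidator
{
    // Returns true when every attribute on the DTO is satisfied;
    // failures are collected in `errors`.
    public static bool IsValid(object dto, out List<ValidationResult> errors)
    {
        errors = new List<ValidationResult>();
        var context = new ValidationContext(dto);
        return Validator.TryValidateObject(dto, context, errors, validateAllProperties: true);
    }
}
```

In ASP.NET Core controllers the framework runs these checks automatically and surfaces them through ModelState, so the explicit call is mainly useful outside the MVC pipeline.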

• • •

9. Minimize use of reflection

Use reflection sparingly as it’s computationally expensive.

if (typeof(MyClass).GetMethod("MyMethod") != null)
{
    // Method exists logic
}

Frequent use of reflection can significantly slow down your application.
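When reflection is unavoidable, pay its cost once: cache the MethodInfo lookup and, where possible, convert it to a typed delegate. A sketch (MyClass and Double are made-up names for illustration):

```csharp
using System;
using System.Reflection;

public class MyClass
{
    public int Double(int x) => x * 2;
}

public static class ReflectionDemo
{
    // Resolved once at type initialization instead of on every call.
    private static readonly MethodInfo DoubleMethod =
        typeof(MyClass).GetMethod(nameof(MyClass.Double));

    // An open instance delegate: the first parameter supplies the instance.
    // Invoking this is close to a direct call, unlike MethodInfo.Invoke.
    private static readonly Func<MyClass, int, int> DoubleDelegate =
        (Func<MyClass, int, int>)DoubleMethod.CreateDelegate(typeof(Func<MyClass, int, int>));

    public static int CallDouble(MyClass instance, int x) => DoubleDelegate(instance, x);
}
```

The delegate keeps the flexibility of late-bound discovery while removing the per-call reflection overhead.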

• • •

10. Profile and monitor your application

Regularly profile your application to identify performance bottlenecks.

Use tools like Visual Studio Diagnostic Tools.

Neglecting regular performance checks can lead to unnoticed efficiency issues.
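Between full profiler sessions, Stopwatch from System.Diagnostics gives a quick, code-level timing check; a minimal sketch:

```csharp
using System;
using System.Diagnostics;

var sw = Stopwatch.StartNew();

// The code under measurement; here, a trivial CPU-bound loop.
long sum = 0;
for (int i = 0; i < 1_000_000; i++) sum += i;

sw.Stop();
Console.WriteLine($"Loop took {sw.ElapsedMilliseconds} ms (sum={sum})");
```

Spot measurements like this catch regressions early, but they complement rather than replace a profiler, which also shows allocations, GC pressure, and call trees.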

• • •

Improving your .NET Core application’s performance isn’t just about coding — it’s a mindset.

By adopting these top 10 performance tricks, you’re not just writing code; you’re crafting efficient, scalable, and robust applications.

Remember, the best practices shared here are your toolkit for success.
