Aug 15, 2023

ASP.NET Core: How to Maximize Performance and Scalability of Your App


Whether you have an app with just a few users or millions of users per day, like Agoda, improving the user experience by optimizing application performance is always crucial.

In the case of very high-traffic websites in the cloud, this optimization can translate into significant cost savings by reducing the number of required app instances. During Agoda’s transition from ASP.NET 4.7.2 to ASP.NET Core 3.1, our primary focus was on achieving this goal.

We understand that different businesses require different levels of optimization in different categories. So, we’ve curated a comprehensive list of server-side optimizations. This list ranges from easy, quick wins to more intricate low-level micro-optimizations, allowing you to extract every ounce of performance and scalability from your application setup.

Here are seven server-side optimizations that can help you maximize the performance and scalability of your application setup.

1. Reduce the number of database and external API calls your application has to make.

Database calls and API calls tend to be inherently slow. In some cases, the slowness of these operations can severely impact the performance of the application being developed. To address this issue, we recommend implementing analytics or logging mechanisms to monitor the speed of your database and API calls. By doing so, you can assess the extent of the slowness and determine whether these calls are necessary or if they can be minimized.
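
As a minimal sketch of such monitoring, you can wrap a call with Stopwatch and log the elapsed time; the _someDbRepository field comes from the example below, while the _logger field is an assumption for illustration:

public async Task<Person> GetPersonTimedAsync(int id)
{
    var stopwatch = System.Diagnostics.Stopwatch.StartNew();
    var person = await _someDbRepository.GetPerson(id);
    stopwatch.Stop();

    // Log the call duration so slow queries show up in your analytics.
    _logger.LogInformation("GetPerson({Id}) took {Elapsed} ms", id, stopwatch.ElapsedMilliseconds);

    return person;
}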

For example, the following common pattern issues one database round trip per ID, so its cost grows linearly with the size of the input:

public async Task<IEnumerable<Person>> GetMePeopleAsync(IEnumerable<int> peopleIds)
{
    var people = new List<Person>();
    foreach (var id in peopleIds)
    {
        // One sequential database round trip per person.
        var person = await _someDbRepository.GetPerson(id);
        people.Add(person);
    }

    return people;
}
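
If the data layer supports it, a single batched query removes the per-ID round trips entirely. A sketch, assuming a hypothetical GetPeople(ids) batch method on the repository:

public async Task<IEnumerable<Person>> GetPeopleAsync(IEnumerable<int> peopleIds)
{
    // One round trip for the whole set instead of one per ID.
    return await _someDbRepository.GetPeople(peopleIds); // hypothetical batch method
}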

Assuming your API and database communications are already used sparingly, the next step is to explore further ways of reducing their frequency by leveraging caching. ASP.NET Core provides a convenient solution called IMemoryCache, which is user-friendly and straightforward to implement. However, it’s essential to be aware of the pros and cons associated with using IMemoryCache.

Pros: Storing and retrieving data is extremely fast and easy to implement.
Cons: If multiple servers run the application, cache misses become common; in those scenarios, distributed caching is recommended. IMemoryCache also uses the server’s RAM, so be cautious about how much data you store there.
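
As a minimal sketch of the caching approach (the injected _memoryCache field, the cache key format, and the five-minute lifetime are illustrative assumptions):

public Task<Person> GetPersonCachedAsync(int id)
{
    return _memoryCache.GetOrCreateAsync($"person:{id}", entry =>
    {
        // Evict the entry five minutes after it is created.
        entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
        return _someDbRepository.GetPerson(id);
    });
}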

2. Use async versions of methods whenever possible

Let’s debunk a crucial misconception: using async does not automatically make your application faster. In fact, in lower traffic web apps, there might be a slight dip in performance (usually less than 1%) due to the introduction of state machines. So, what is the true purpose of async/await? The answer is scalability.

In a synchronous implementation, each request to your application is handled by a dedicated thread. When your application needs to make a database call, API call, or any other I/O operation, the thread has to wait for the external system to respond, resulting in inefficiency. Wouldn’t it be great if we could utilize that idle thread somehow?

Let’s talk about threads in .NET for a moment. A thread is an operating-system construct used to schedule work on the CPU. Creating threads is slow and expensive, which is why .NET maintains a ThreadPool of reusable threads for the application. To keep performance high, it’s essential to avoid exhausting the pool of available threads.

This is where async/await comes in handy. By using async/await methods, we allow the runtime to return the executing thread to the ThreadPool for reuse until the response from the external I/O becomes available. This can significantly boost the maximum throughput of the server. Moreover, it opens the door for further optimization, as we’ll see in the next point.
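
To make the difference concrete, here is a small contrast sketch; the synchronous repository method is hypothetical, shown only for illustration:

// Synchronous: the request thread sits blocked for the whole database round trip.
public Person GetPersonBlocking(int id)
{
    return _personRepository.GetPersonSync(id); // hypothetical synchronous API
}

// Asynchronous: while the call is in flight, the thread returns to the
// ThreadPool and can serve other requests.
public async Task<Person> GetPersonAsync(int id)
{
    return await _someDbRepository.GetPerson(id);
}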

3. Use your asynchronous methods wisely

The async/await pattern enables us to start a task without immediately waiting for it to complete, which offers further optimization. Take a look at the following example:

public async Task<Result> GetCustomerDataAsync(int companyId)
{
    // Each call is awaited before the next one starts,
    // so the total time is the sum of all three calls.
    var basicData = await GetBasicDataAsync();
    var rewardsData = await GetRewardsDataAsync();
    var purchaseData = await GetPurchaseDataAsync();

    return CombineResultsSomehow(basicData, rewardsData, purchaseData);
}

This type of code is very common when multiple pieces of data are needed before the next operation can run. However, there is significant room for improvement in its efficiency. Let’s rewrite it like this:

public async Task<Result> GetCustomerDataAsync(int companyId)
{
    // Start all three operations without awaiting them...
    var basicData = GetBasicDataAsync();
    var rewardsData = GetRewardsDataAsync();
    var purchaseData = GetPurchaseDataAsync();

    // ...and await only when the results are needed;
    // the total time is roughly that of the slowest call.
    return CombineResultsSomehow(await basicData, await rewardsData, await purchaseData);
}

This change looks simple but makes a significant difference. Here, we launch GetBasicDataAsync, GetRewardsDataAsync, and GetPurchaseDataAsync without waiting for their results, which lets the three operations run concurrently. We await the results only when we need them. The rule of thumb: don’t await as soon as you launch an async operation; await only when you need the result.
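
An equivalent and arguably more explicit form uses Task.WhenAll, which also surfaces the first exception cleanly; a sketch using the same methods as above:

public async Task<Result> GetCustomerDataAsync(int companyId)
{
    var basicTask = GetBasicDataAsync();
    var rewardsTask = GetRewardsDataAsync();
    var purchaseTask = GetPurchaseDataAsync();

    // All three calls are in flight at once; wait for them together.
    await Task.WhenAll(basicTask, rewardsTask, purchaseTask);

    // .Result is safe here because all tasks have already completed.
    return CombineResultsSomehow(basicTask.Result, rewardsTask.Result, purchaseTask.Result);
}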

For a deep dive into async/await topics, take a look at Stephen Cleary’s blog.

4. If you need to use HttpClient, use it properly

The HttpClient class in .NET makes it easy to call various APIs, but too often it isn’t used properly. Usually like this:

// A new HttpClient (and a new underlying connection) for every call.
using (var client = new HttpClient())
{
    // do something with the http client
}

The problem with this approach is that, under load, the application will quickly exhaust the available sockets, because connections are rapidly opened and closed. This cripples the throughput of the server.

A better approach is to reuse HttpClient instances when contacting the same server, allowing the application to reuse sockets across multiple requests. If you use .NET Core 2.1 or later, the easiest way to handle this is HttpClientFactory: inject IHttpClientFactory into your service, and it takes care of the above issues behind the scenes.

public async Task<string> GetStackoverflow()
{
    // Clients created by the factory share pooled message handlers,
    // so sockets are reused across requests.
    var client = _httpClientFactory.CreateClient();
    var result = await client.GetStringAsync("http://www.stackoverflow.com");

    return result;
}

For a deeper explanation of this topic, take a look at this blog post.
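
Note that IHttpClientFactory must be registered in the dependency injection container before it can be injected; a minimal sketch:

// In Startup.ConfigureServices (or the WebApplication builder in newer templates):
services.AddHttpClient(); // registers IHttpClientFactory and its pooled message handlers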

5. If you use Razor pages, make use of the <cache> Tag Helper

Tag helpers were introduced in ASP.NET Core as a more convenient alternative to HTML Helpers. One of them is the <cache> tag helper, which makes it easy to render and cache specific parts of a page directly in MemoryCache. Here is a simple example:

<html>
  <cache expires-after="@TimeSpan.FromSeconds(300)">
    <head>
      @{ await Html.RenderPartialAsync("essentialCss"); }
      @{ await Html.RenderPartialAsync("essentialScripts"); }
      @{ await Html.RenderPartialAsync("thirdpartyScripts"); }
    </head>
  </cache>
  <body>
    <div class="react-root" id="root"></div>
    <cache vary-by="@Model.LanguageId">
      @{ await Html.RenderPartialAsync("languageSpecificScripts", Model.LanguageId); }
    </cache>
  </body>
</html>

These tag helpers provide plenty of options, so if you use Razor pages, you should look into what they have to offer.
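
For instance, entries can use sliding expiration or vary per signed-in user; a small sketch (the UserGreeting view component is a hypothetical example):

<cache expires-sliding="@TimeSpan.FromMinutes(5)" vary-by-user="true">
  @await Component.InvokeAsync("UserGreeting")
</cache>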

6. Consider using gRPC for calling your backend services

If your web application makes REST API calls to various (micro)services, it may be beneficial to switch your mode of communication to gRPC. Developed by Google, this model combines the benefits of the new HTTP/2 protocol and binary payload for improved communication performance.

In a nutshell, there are two reasons for this improvement. First, HTTP/2 allows far better connection management: multiple calls to the same server can be multiplexed over a single connection, improving server throughput under stress. Second, because the payload is binary (Protocol Buffers), serialization and deserialization are much cheaper than with text-based formats like JSON, reducing the CPU overhead of each call.
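
As a minimal client-side sketch using the Grpc.Net.Client package (the Greeter service and its generated types come from the standard gRPC template and are used here purely as an assumption):

// Channels are expensive to create; reuse them, much like HttpClient.
using var channel = GrpcChannel.ForAddress("https://localhost:5001");
var client = new Greeter.GreeterClient(channel);

// A single HTTP/2 connection multiplexes many concurrent calls.
var reply = await client.SayHelloAsync(new HelloRequest { Name = "Agoda" });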

7. Reducing Memory Allocations

Until now, our focus has been optimizing the speed and efficiency of our web application’s communication with other systems. However, it’s time to shift our attention towards making the application perform even faster and smoother.

Let’s talk about Garbage Collection — a task nobody enjoys, not even computers. During Garbage Collection, the CPU has to put in extra effort, causing most operations to experience a momentary pause while the trash is taken out. To enhance server performance, reducing the duration and frequency of these interruptions becomes crucial. When optimizing Garbage Collection, it’s important to keep a few principles in mind:

  • When server memory is running low, garbage collection happens more frequently and more aggressively.
  • The longer an object stays alive in memory, the less frequently the collector checks whether it is still needed. This can keep RAM usage higher for longer.
  • When a large number of objects is dereferenced, the Garbage Collector compacts the memory to reduce fragmentation. This process is slow, so we should aim to avoid it.

We can simplify the above principles into something more digestible:

“Create fewer objects, and try to keep them small.”

There are several ways of doing this; here are a few examples:

Preallocate Generic Collections: If you use C#, you use List<T> and Dictionary<TKey, TValue>. They are convenient and easy to use, but they are not magic: behind the scenes, a List<T> is backed by an array, and a Dictionary<TKey, TValue> by a hash table built on arrays. What do these have in common? Their size is fixed once allocated… But wait, if the size is fixed, how can we call coolGuyList.Add(myFriend)? The backing array is indeed fixed; when it fills up and you call Add(), the runtime allocates a new, larger array and copies the items over from the old, smaller one. (Last time we checked, the runtime roughly doubles the underlying array’s size each time, along the lines of 0, 4, 8, 16…) Imagine populating a big List in a foreach loop; that sounds rather inefficient, right?
Thankfully, there is an easy optimization. Instead of letting the runtime guess how big an array to allocate, we can tell it the size we expect:

var persons = new List<Person>(peopleDbResult.Count); // we know how big this list will be, so we preallocate
foreach (var personEntity in peopleDbResult)
{
    var personViewModel = PersonViewModelMapper.Map(personEntity);
    persons.Add(personViewModel);
}

var personsMap = new Dictionary<string, Person>(peopleDbResult.Count); // the same approach works for Dictionary<,>
foreach (var personEntity in peopleDbResult)
{
    var personViewModel = PersonViewModelMapper.Map(personEntity);
    personsMap.Add(personViewModel.Id, personViewModel);
}

In certain situations, it’s hard to predict the required size of the underlying collection. But in our experience, the size can be predicted accurately in roughly half of these cases.

Consider using Structs over Classes: This is a trickier optimization that requires benchmarking and stress testing. In some cases, when a large number of objects is created and destroyed, it can be beneficial to switch classes to structs: as value types, they carry no references for the Garbage Collector to track, making its life a bit easier.
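
As a hedged sketch, a small immutable value type like the one below lives inline (on the stack or inside its containing object) instead of producing a heap allocation per instance; the Coordinates type is purely illustrative:

public readonly struct Coordinates
{
    public double Latitude { get; }
    public double Longitude { get; }

    public Coordinates(double latitude, double longitude)
    {
        Latitude = latitude;
        Longitude = longitude;
    }
}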

Reuse Lists and Dictionaries: Although this is not too common in code, sometimes we need multiple instances of a List, Dictionary, etc., to complete a specific task, and sometimes these collections are generics of the same type. In such cases, once we are done with one collection, we can call list.Clear() and reuse it for another operation. This can improve performance because Clear() does not shrink the underlying array; it only de-references the contained items, letting us reuse the array for further work.
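
A sketch of the reuse pattern, using the mapper from the earlier example; the personBatches source and ProcessBatch consumer are hypothetical:

var buffer = new List<Person>(1024); // allocated once, reused for every batch
foreach (var batch in personBatches)
{
    buffer.Clear(); // de-references the items but keeps the underlying array
    foreach (var entity in batch)
    {
        buffer.Add(PersonViewModelMapper.Map(entity));
    }
    ProcessBatch(buffer); // hypothetical per-batch consumer
}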

Conclusion

These are some of the server-side optimizations that can help you maximize performance and scalability, most of which were implemented during Agoda’s migration to ASP.NET Core.
