Dec 16, 2023

Writing High Performance .NET Code


At some point in our careers, many of us have found ourselves working on a project that requires performance tuning.

The need to write high performance .NET code should come from performance issues or business needs.

Today, we’ll take a look at just a few of the ways we can improve our .NET application’s performance. And hopefully, you’ll take away something that you can use on your current and future products. We’ll cover several best practices for writing high performance .NET code and include examples.


But first things first. How do we even know where to begin?

Measure first

When considering any type of performance issues or optimizations, we must first measure the current state of our application performance. Without profiling our applications, we often fall into the trap of optimizing code that doesn’t need optimization.

We shouldn’t assume we know where the bottlenecks are, even when we feel certain that we do. Because we usually don’t. We only have theories about where the performance issues live, and we need to confirm those theories.

The first step should always be to measure. Then you’ll either confirm what you thought or learn something new. And then you’ll have your starting point without wasting time on performance tweaks that don’t make much of a difference.

In addition, you may find that perceived performance problems don’t actually exist. In other words, don’t worry about performance work unless an actual problem exists.

We often think we know where performance bottlenecks exist. However, we’re often wrong. So, measure your performance first before wasting time on making each class and method as performant as possible. As Donald Knuth said, “Premature optimization is the root of all evil.”

He was talking about optimizing without measuring, and without knowing what in your application actually needs optimization.

How do we measure?

Visual Studio Professional provides great profiling options out of the box. Many other tools can provide profiling information. A short list of profilers follows:

  • Visual Studio Pro Performance Profiler
  • JetBrains dotTrace
  • Redgate ANTS

Whichever profiler you choose, it may take time to really understand a profiling tool and what it’s telling you. But it’s worth it — because then you’ll be able to focus your performance optimizations in the right place. When using profiler data, we should look at three things:

  1. Memory load on the system
  2. CPU
  3. Garbage collection

An important final note: make sure to profile a release build, not a debug build. Release mode enables compiler optimizations that improve performance substantially, so debug-mode measurements can be misleading.
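
Profilers give you the full picture. For a quick, narrow check of a single code path, a minimal timing sketch with System.Diagnostics.Stopwatch can also help (ProcessOrders below is just a made-up placeholder, not something from this article):

var stopwatch = System.Diagnostics.Stopwatch.StartNew();

ProcessOrders();   // hypothetical method whose cost we want to check

stopwatch.Stop();
Console.WriteLine($"ProcessOrders took {stopwatch.ElapsedMilliseconds} ms");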

Background info

Before we look at specific ways of improving performance, let’s discuss some necessary background info.

Garbage collection and memory management

Understanding garbage collection and memory management will push your understanding of performance to a new level. Take time to learn it. For today however, we’ll hit on some key points and practices that you should know.

Large object heap (LOH)

The .NET garbage collector cleans out resources that aren’t being used by your application. When the garbage collector processes objects in memory, it divides them up into small and large objects. As expected, the large object heap holds all the larger objects, defined as objects of 85,000 bytes (roughly 85 KB) or more.

On the small object heap, the garbage collector also compacts objects to free up space and defragment the memory. As you may have guessed, it takes more resources to compact large objects than small ones. Therefore, by default, the garbage collector doesn’t compact or defragment the large object heap.

In addition, garbage collection on the LOH isn’t as frequent as on the small object heap. Therefore, over time, large objects fragmented across the heap can reduce performance.

Fortunately, there have been some improvements in how the LOH is managed in .NET Framework 4.5. However, we still want to reduce the allocation and collection costs of large objects in the LOH. To do that, we should write code in a way that avoids using large objects.

To avoid large objects, consider breaking large data structures down into smaller objects. Also, minimize the use of large arrays or strings. Finally, avoid using application programming interfaces (APIs) that allocate from the LOH. Typically, you’ll have to profile an API call’s heap allocations to understand whether it allocates on the LOH.
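
As a rough sketch of what avoiding repeated LOH allocations can look like (this example assumes System.Buffers is available, e.g. on .NET Core or via the NuGet package, and isn’t taken from the original article), renting a pooled buffer avoids allocating a fresh large array on every call:

// Arrays of 85,000 bytes or more land on the LOH, so rent and reuse
// a pooled buffer instead of allocating a new large array each time.
byte[] buffer = System.Buffers.ArrayPool<byte>.Shared.Rent(100_000);
try
{
    // ... fill and process the buffer ...
}
finally
{
    System.Buffers.ArrayPool<byte>.Shared.Return(buffer);
}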

Understand just-in-time (JIT) compilation

With JIT compilation, your .NET code compiles into machine code upon first execution. Any future call to that logic in your application uses the previously-compiled machine code.

JIT has advantages. If a class or process rarely or never executes, why compile it into machine code? Moreover, code that executes together will usually be paged in the same place in memory or the processor cache. It also adds standard optimizations — like method inlining — for free.

However, we do take a hit the first time that our .NET compiles as machine code. So, what do we do?

Well, we should write smaller methods, for one. If we have a large method with multiple branches, the JIT compiler still has to compile the entire method on its first call, even though much of that code may never execute.
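
For illustration (these method names are invented, not from the article), pulling a rarely taken branch into its own method means the JIT doesn’t compile that code unless it’s actually called:

public void Export(string report, bool legacyFormat)
{
    if (legacyFormat)
    {
        // JIT-compiled only on its first call, if that ever happens
        ExportLegacy(report);
        return;
    }

    // common path runs the already-compiled code
    ExportCurrent(report);
}

private void ExportLegacy(string report) { /* rarely executed branch */ }
private void ExportCurrent(string report) { /* hot path */ }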

In addition, we should understand how much code gets generated using common APIs, such as Language Integrated Query (LINQ). But we’ll get into that later. For now, let’s move on to the next section, where we look at specific changes we can make to our code.

Performance improvements

Now that we’ve covered the how and why of measuring performance, let’s roll up our sleeves and get to work. In this section, we’ll look at a few ways we can improve the performance of our application and gain understanding of why the performance issues can occur.

Exception handling

Exception handling degrades performance when it isn’t used properly. And liberal use of try, catch, and throw could indicate improper use. You see, the problem doesn’t typically involve catching the exception. It involves building out the exception stack trace. Therefore, we should only build a stack trace when necessary.

You may have heard that you shouldn’t use exceptions to control program flow. But that’s just one possible misuse. Wherever you have try/catch in your code, you should also consider whether simple validation code makes more sense.

In addition, you may want to consider exceptions when choosing which APIs to use. If two APIs offer the same functionality, select the one that doesn’t throw exceptions. For example, instead of calling Int32.Parse or DateTime.Parse, consider calling Int32.TryParse or DateTime.TryParse, which return false instead of throwing when parsing fails.
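
For example, with some hypothetical user-supplied input:

string input = Console.ReadLine();

// Int32.Parse throws a FormatException (and builds a stack trace) on bad input.
// Int32.TryParse reports failure through its return value instead.
if (int.TryParse(input, out int value))
{
    Console.WriteLine($"Parsed {value}");
}
else
{
    Console.WriteLine("Not a valid number");
}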

To summarize this section:

Don’t throw exceptions as part of controlling program flow. Instead of throwing an exception, consider returning a validation error. Only log the exception stacks you truly need.

Now that we have the basics covered, let’s look at improvements we can make to our high performance .NET applications.

All about that string

We often need reminders of string’s immutability. If we attempt to mutate a string, we end up with two strings or more. And once we create a string, it hangs around until garbage collection clears it out.

Because of that, we should try to do a few things differently. Here are five actions we can take to improve our string processing.

1. Consider using a different representation of the data, such as buffers or streams.
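
For instance, here’s a minimal sketch (the lines collection and file name are assumptions, not from the article) that streams output to a file instead of first assembling one large string in memory:

using (var writer = new System.IO.StreamWriter("output.txt"))
{
    foreach (var line in lines)
    {
        writer.WriteLine(line);
    }
}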

2. Use the plus operator when concatenating strings where you have a small, defined quantity.

string message = "Hello " + userName + "!";

3. When concatenating strings dynamically in a loop, use a StringBuilder.

var sb = new System.Text.StringBuilder();
for (int i = 0; i < messages.Length; i++)
{
    sb.AppendLine(messages[i]);
}

return sb.ToString();

4. When comparing strings, use StringComparison.Ordinal whenever possible, as ordinal comparisons cost less than ignoring case or comparing with the current culture.

// Choose whenever possible
String.Compare(firstString, secondString, StringComparison.Ordinal);

// over more complex ways like
String.Compare(firstString, secondString, StringComparison.OrdinalIgnoreCase);

// or
String.Compare(firstString, secondString, StringComparison.CurrentCulture);

5. Avoid calling Split unnecessarily.

// instead of counting elements in a delimited string using split,
// which allocates a string for each chunk
myString.Split(',').Count();

// count the delimiters to avoid splitting
int count = 0;
foreach(char c in myString)
{
  if (c == ',')
  {
    count++;
  }
}

Reflecting on bad code

Most of us know that using reflection leads to performance issues. However, there are times when we’re forced into it.

For example, a project I worked on years ago depended on components written by other teams in the organization. Because some of these components weren’t designed properly, we often found ourselves using reflection to get necessary data or functionality out of objects. Fortunately, we didn’t have to implement high performance changes — our metrics showed that our application performed well enough.

And though more of an issue with management and silos, this example highlights how we may get stuck using reflection. Again, you should measure the effects on performance and use the data to support changes in either your code or another team’s code.

Regular expressions

In addition to being difficult to read, regular expressions can be a cause of performance issues.

Whenever we use regular expressions (regex), .NET creates a state machine to walk over the input and match it against the regex pattern. In addition, the JIT-generated code for a regex is often lengthy. Upgrading to a newer .NET version can help, since the regex engine and the code it generates have improved over time.

Other than using the latest, what should we do with our regex?

If you use a regex frequently, create a Regex instance instead of relying on the static method calls. Use the RegexOptions.Compiled flag on your Regex object; it compiles the expression instead of interpreting it, which provides faster execution (but a longer startup time).

Do not recreate your regex instance. Instead, create a static member variable that you can reuse.
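
Here’s a minimal sketch of that advice (the class name, pattern, and method are made-up examples):

using System.Text.RegularExpressions;

public static class InputValidator
{
    // Created once and reused for every call; Compiled trades slower startup for faster matching.
    private static readonly Regex DigitsOnly = new Regex(@"^\d+$", RegexOptions.Compiled);

    public static bool IsAllDigits(string input) => DigitsOnly.IsMatch(input);
}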

For each versus for loop

Where it matters, prefer standard for loops. Again, you should measure with your own scenario. Sometimes .NET converts a foreach into a basic for loop anyway. Other times, the performance of the two doesn’t vary much.

Where the difference does show up, it’s usually because the foreach statement goes through an enumerator, and for some collection types through the IEnumerable/IEnumerator interfaces, which can add allocation and casting overhead.
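
As a quick sketch (the items array is an assumption for illustration), the two loops below do the same work; for arrays the compiler generally emits equivalent code for both, while for other collection types foreach may allocate an enumerator:

string[] items = { "alpha", "beta", "gamma" };

// plain for loop: indexes the array directly
for (int i = 0; i < items.Length; i++)
{
    Console.WriteLine(items[i]);
}

// foreach: equivalent for arrays, but may go through an enumerator
// for other collection types
foreach (var item in items)
{
    Console.WriteLine(item);
}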

Considering LINQ

Previously, we discussed JIT compilation. As hinted at above, certain APIs result in more code being compiled than others.

Let’s take a look at LINQ as an example. LINQ looks so simple and clean that we all want to use it. It improves our productivity and reduces the amount of source code we maintain. However, it dynamically generates a lot of code upon JIT compilation that we may not know about. Sometimes the performance hit doesn’t hurt, but other times it does. Also, sometimes LINQ code provides better performance than our own algorithms. So again, we must measure it and keep an eye on memory allocation.

An example:

Here’s the LINQ implementation:

var results = (from kvp in dict
               where kvp.Key >= lower &&
                     kvp.Key <= upper &&
                     kvp.Value.Contains("1")
               select kvp.Value).ToList();

And here it is again using a simple for loop:

var results = new List<string>();

for (int i = lower; i <= upper; i++)
{
  if (dict[i].Contains("1"))
  {
    results.Add(dict[i]);
  }
}

The problem with the LINQ version is that it enumerates every entry in the dictionary (effectively a full scan), while the for loop looks up each key directly. So, for this example, a simple for loop results in better performance. And as this example shows, high performance .NET code doesn’t always read as well as non-performant code. Therefore, we only want to apply optimizations like this where we need them.

Avoid blocking

Make sure that your program doesn’t waste resources while it waits for another process to complete. So, avoid blocking, whether that’s explicitly locking a thread or controlling thread synchronization in other ways. How can you tell if libraries and APIs block?

Easy.

If they return a task, they’re nonblocking. Of course, it could still cause blocking further in the API’s stack, but at least we know we’re not doing something that may cause performance issues.

In addition, if we’re calling an API that returns a task, we shouldn’t block waiting on it. Instead, use continuations.

// Instead of blocking the calling thread with Wait
var myTask = Task.Factory.StartNew(SomeOperation);
myTask.Wait();
Console.WriteLine(myTask.Result);

// use a continuation
var myTask = Task.Factory.StartNew(SomeOperation);
myTask.ContinueWith(t => Console.WriteLine(t.Result));

// or async/await, which uses tasks and continuations under the hood
var myResult = await Task.Run(SomeOperation);
Console.WriteLine(myResult);

Conclusion

That wraps up just a few of the ways we can make our product a high performance .NET application. But be aware that improving your application’s performance shouldn’t rely on applying performance optimization blindly. You may waste time optimizing the wrong thing and not fixing the performance issues hurting your customers.
