
Benchmarking 'cn': Why `clsx` is Still King

If you are building modern React applications, especially with Tailwind CSS, you have likely encountered the cn() utility. It’s the standard helper function (popularized by tools like shadcn/ui) used to merge class names conditionally.

Typically, it looks like this:

import { clsx } from "clsx"
import { twMerge } from "tailwind-merge"

export function cn(...inputs) {
  return twMerge(clsx(inputs))
}
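
Here is a quick, made-up usage example to show what "merging conditionally" means in practice (the flags and class names are illustrative, not from a real component):

// Illustrative only: `isActive` and `isDisabled` stand in for real component state.
const isActive = true;
const isDisabled = false;

cn(
  "px-2 py-1 rounded",          // base classes
  isActive && "bg-blue-500",    // string included only when the condition is truthy
  { "opacity-50": isDisabled }, // object keys included when their value is truthy
  "px-4"                        // conflicts with px-2; twMerge keeps the later one
);
// => "py-1 rounded bg-blue-500 px-4"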

But as developers, we often wonder: "Do I really need a library for this? Can't I just write a simple loop?"

I decided to put this to the test. I benchmarked four different implementations of a class merging function against each other to see if native implementations could beat the library. The results were surprising.

The Contenders

1. The "Array Push" (Naive)

This is the most intuitive implementation. We iterate through arguments, check types, push valid classes to an array, and .join(" ") them at the end.

const cn = (...classes) => {
  const result = [];
  for (const cls of classes) {
    if (typeof cls === "string") {
      result.push(cls);
    } else if (typeof cls === "object" && cls !== null) {
      Object.entries(cls).forEach(([key, value]) => value && result.push(key));
    }
  }
  return result.join(" ");
};

2. The "Functional" (Map/Filter)

The modern one-liner approach. It looks clean but relies heavily on array methods like map and filter, creating intermediate arrays for every operation.

const cn = (...classes) => {
  return classes
    .map(cls => {
      if (typeof cls === "string") return cls;
      if (typeof cls === "object" && cls !== null) {
        return Object.entries(cls)
          .filter(([_, value]) => value)
          .map(([key]) => key)
          .join(" ");
      }
      return "";
    })
    .filter(Boolean)
    .join(" ");
};

3. The Library (clsx)

The standard clsx library implementation.

import { clsx } from "clsx";

// Usage is the same as our implementations
const cn = clsx;

// Or if combined with tailwind-merge
import { twMerge } from "tailwind-merge";
export function cn(...inputs) {
  return twMerge(clsx(inputs));
}

4. The "String Concatenation" (Optimized)

A custom implementation that mimics part of clsx's strategy: skipping the result array and appending matches directly to a string variable.

const cn = (...classes) => {
  let result = "";
  for (const cls of classes) {
    if (typeof cls === "string") {
      if (cls) {
        result += (result && " ") + cls;
      }
    } else if (typeof cls === "object" && cls !== null) {
      for (const [key, value] of Object.entries(cls)) {
        if (value) {
          result += (result && " ") + key;
        }
      }
    }
  }
  return result;
};

The Benchmark Results

I ran the benchmark for 100,000 iterations (5 runs per case, with a warmup phase) across test cases ranging from simple strings to complex objects with dozens of conditional classes. The results were consistent and clear.
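
The full script is linked at the end of the post; in spirit, each timed run looked something like the sketch below (simplified, with made-up names, not the actual benchmark-cn.js):

// Simplified sketch of a timing loop, not the actual benchmark script.
// `impl` is the cn variant under test; `cases` is an array of argument lists.
function timeRun(impl, cases, iterations = 100_000) {
  // Warmup pass so the JIT has optimized the hot path before we measure.
  for (let i = 0; i < 1_000; i++) {
    for (const args of cases) impl(...args);
  }

  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    for (const args of cases) impl(...args);
  }
  return performance.now() - start; // total milliseconds for this run
}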

| Test Case | clsx | String Concat | Array Push | Map/Filter |
| --- | --- | --- | --- | --- |
| Short (3-5 classes) | 26.53ms (1.0x) | 34.32ms (1.3x) | 97.38ms (3.7x) | 120.08ms (4.5x) |
| Long (10-15 classes) | 88.40ms (1.0x) | 111.88ms (1.3x) | 290.23ms (3.3x) | 393.12ms (4.4x) |
| Very Long (20-35 classes) | 160.39ms (1.0x) | 236.15ms (1.5x) | 520.07ms (3.2x) | 668.26ms (4.2x) |

What the numbers tell us: clsx is the clear winner across all scenarios. The performance gap remains consistent: it's roughly 30-50% faster than our optimized String Concatenation approach, 3-4x faster than the naive Array Push method, and 4-5x faster than the functional Map/Filter approach.

Interestingly, the relative performance stays remarkably stable regardless of input size: whether you're merging 5 classes or 35, clsx maintains roughly the same lead.

Why is `clsx` so fast?

The difference was massive. Looking at the source code and profiling the execution, the performance gap boils down to one thing: Memory Allocation.

1. Avoiding `Object.entries`

All three of our hand-rolled implementations (Array Push, Map/Filter, and even the "optimized" String Concatenation) relied on Object.entries(obj) to process conditional objects.
Object.entries is expensive: it has to allocate a new array containing a sub-array for every key-value pair (e.g., [['btn', true], ['active', false]]) before iteration can even begin.

clsx, on the other hand, uses a simple for (key in obj) loop. This iterates directly over properties without creating any intermediate arrays, keeping the Garbage Collector idle.
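
Here is the difference in miniature (a simplified sketch; clsx's real source is more compact, but the iteration strategy is the point):

// The same conditional object handled two ways.
const obj = { btn: true, active: false, "btn-primary": true };

// 1) Object.entries allocates an array of [key, value] pairs just to iterate over it.
let a = "";
for (const [key, value] of Object.entries(obj)) {
  if (value) a += (a && " ") + key;
}

// 2) for...in reads the keys directly, with no intermediate arrays.
let b = "";
for (const key in obj) {
  if (obj[key]) b += (b && " ") + key;
}

// Both produce "btn btn-primary"; only the second avoids the extra allocations.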

2. Direct String Concatenation

Our "Array Push" method builds an array ['btn', 'active'] and then calls .join(' ').
clsx just does str += ' ' + key. Modern JavaScript engines (like V8) are incredibly good at optimizing string concatenation. By avoiding the overhead of managing array size and indices, it shaves off critical microseconds.
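
Putting the two ideas together, the hot path of a clsx-style merge looks roughly like this (a simplified sketch of the approach, not clsx's actual source; it skips features like nested arrays and numbers):

// Minimal sketch: for...in iteration plus direct string appends.
const merge = (...classes) => {
  let str = "";
  for (const cls of classes) {
    if (typeof cls === "string") {
      if (cls) str += (str && " ") + cls;
    } else if (cls && typeof cls === "object") {
      // Walk keys directly instead of materializing Object.entries.
      for (const key in cls) {
        if (cls[key]) str += (str && " ") + key;
      }
    }
  }
  return str;
};

// merge("btn", { active: true, disabled: false }) => "btn active"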

Conclusion

The numbers speak for themselves. The clsx package is tiny (239B minified + gzipped), has no dependencies, and is micro-optimized for performance. The custom "String Concatenation" version came closest at 1.3-1.5x slower, but I essentially just rewrote clsx poorly—and it showed higher performance variability (7.86% CV vs clsx's 2.20% CV).

The naive implementations were significantly slower: the Array Push method was 3-4x slower, and the functional Map/Filter approach was 4-5x slower. When cn() might be called thousands of times during renders or class name calculations, these differences compound.

Bottom line: Don't reinvent the wheel—especially when the wheel is this fast, this small, and this battle-tested.

PS: Want to run the benchmark yourself? Download the benchmark script and run it locally with node benchmark-cn.js (or node --expose-gc benchmark-cn.js for better GC control).