
While working on one of my tasks, I needed a function that splits a big array into smaller arrays of a fixed size. I approached the problem with the most intuitive solution, then started optimizing it, and when I profiled both approaches I found that the first implementation was at least 9 times faster than the new one.

Here I will discuss why that happened and why the time complexity expressed in big O notation isn’t really an accurate representation of how performant a piece of logic is.

Difference In Implementations

Let’s walk through the different approaches I mentioned. But first, let’s name the function that we’re building together: chunk().
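To make the goal concrete, here’s the behavior we’re after (the values here are just illustrative, not from the original task):

chunk([1, 2, 3, 4, 5, 6, 7], 3);
// → [[1, 2, 3], [4, 5, 6], [7]]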

Using Array.slice()

The first and most intuitive approach that came to mind is to use one of the native utility methods on Array.prototype, namely Array.slice(). Here’s my version of implementing it:

function chunk(arr, size) {
  const result = [];
  for (let index = 0; index < arr.length; index += size) {
    result.push(arr.slice(index, index + size));
  }
  return result;
}

benchmark(chunk); // ~7ms

If we laser-focus on this loop, knowing that Array.slice() is O(k) where k is the size of a chunk, and that we call it inside a loop that runs m times where m is the number of chunks, then the whole thing is O(k × m), and if we simplify that, it becomes O(n²).

for (let index = 0; index < arr.length; index += size) {
  result.push(arr.slice(index, index + size));
}

But even knowing that it’s O(n²), the benchmark returns that it runs in around 7 milliseconds, which is impressive, but surely we can do better, no?

⚠️ Hold that thought: This O(n²) classification deserves a closer look, which we’ll examine in the “Is It Really O(n²)?” section below. For now, let’s proceed with this assumption.

Doing It Better?

So, it’s O(n²). That’s very easy to beat; I’m sure we can do it in O(n), and we’ll get them juicy performance gains.

So, instead of calling Array.slice(), we’ll try to implement it in a single pass where everything inside the loop is O(1).
Here’s an implementation of that.

function chunk(arr, size) {
  const result = [];

  for (let index = 0; index < arr.length; index++) {
    const chunkIndex = Math.floor(index / size);
    const itemIndexInChunk = index % size;

    if (!result[chunkIndex]) result[chunkIndex] = [];

    result[chunkIndex][itemIndexInChunk] = arr[index];
  }

  return result;
}

benchmark(chunk); // ~60ms

Now we’re doing everything in one loop. Inside it we only do some math and assign array values at specific indices using direct access, so everything inside is O(1) and the loop iterates n times. Congratulations everyone, we made it O(n); surely that’s faster than the trashy O(n²) implementation we had.

After running the benchmark function we get back around 60ms, which is way fas… Wait! WHAT? That’s about 9 times slower!!


That’s way less performant even though the implementation has a lower time complexity, huh!

Optimizing Array Creation

Let’s try to optimize this a bit further. The current implementation creates dynamic arrays that change size on every iteration of the loop, and that happens on two levels, because this is a nested array and both levels are dynamic.

function chunk(arr, size) {
  const numberOfChunks = Math.ceil(arr.length / size);
  const result = new Array(numberOfChunks);

  for (let index = 0; index < numberOfChunks - 1; index++) {
    result[index] = new Array(size);
  }
  result[numberOfChunks - 1] = new Array(arr.length % size || size);

  for (let index = 0; index < arr.length; index++) {
    const chunkIndex = Math.floor(index / size);
    const itemIndexInChunk = index % size;

    result[chunkIndex][itemIndexInChunk] = arr[index];
  }

  return result;
}

benchmark(chunk); // ~20ms

The only change from the previous implementation is that we now pre-allocate the outer array and every chunk before the main loop.

Now if we run the benchmark again, we get around 20ms. That’s better than the previous one; as a matter of fact, it’s 3 times faster, but still not great, as it’s 3 times slower than the Array.slice() approach.

Array Preemptive Creation Impact

You might ask, “Why did the dynamic arrays affect performance?”, and I would tell you, “Great question.” Let’s delve into that.

Creating the arrays first allows the engine to create an optimized shape that does the following.

  • Allocates the exact memory needed once
  • Uses a contiguous memory block
  • Knows it’s a fixed-size array
  • Optimizes for fast indexed access
const arr = new Array(1000); // Allocate once, done!
for (let i = 0; i < 1000; i++) arr[i] = i;

But creating an empty array whose length keeps changing as we iterate causes the following.

  • The shape changes as it grows (The following happens multiple times as the array grows)
    1. Allocate new larger memory block (often 2x growth)
    2. Copy all existing elements
    3. Deallocate old memory
  • Can’t guarantee density
  • Less predictable for JIT compiler
  • Requires more checks on access
const arr = []; // Capacity: 4
arr.push(1, 2, 3, 4); // Full
arr.push(5); // Reallocate → Capacity: 8, copy 4 elements
arr.push(6, 7, 8, 9); // Full
arr.push(10); // Reallocate → Capacity: 16, copy 9 elements
// ... continues growing with reallocations

📝 Note:

If you want to learn more about how the engine optimizes for different shapes, and what shapes are in the first place, here are a couple of sources that might help.

Knowing the difference, it now makes more sense: in our previous approach the dynamic arrays gave the engine so much trouble that performance took a big hit.

But still, is that the best we can get? Our O(n) can’t beat the O(n²)?

Optimizing Iteration Calculations

If you look closer, you can see that we’re doing a couple of computations in every iteration: Math.floor() and the modulus %. While these are usually negligible, the modulus is a bit heavier than other arithmetic operations, and doing both of these computations n times accumulates quickly.

So, in light of this information, how about we skip these computations and replace them with a simple addition?

function chunk(arr, size) {
  const numberOfChunks = Math.ceil(arr.length / size);
  const result = new Array(numberOfChunks);

  for (let index = 0; index < numberOfChunks - 1; index++) {
    result[index] = new Array(size);
  }
  result[numberOfChunks - 1] = new Array(arr.length % size || size);

  let chunkIndex = 0;
  let itemIndex = 0;

  for (let index = 0; index < arr.length; index++) {
    result[chunkIndex][itemIndex] = arr[index];
    itemIndex++;
    if (itemIndex === size) {
      itemIndex = 0;
      chunkIndex++;
    }
  }

  return result;
}

benchmark(chunk); // ~16ms

Now if we run the benchmark for the third time we get around 16ms. This is better than the previous approach by around 20%, but still it’s more than 2 times slower than the Array.slice() approach.

Array.slice() Implementation

I thought that if I tried a bigger dataset, the difference would start to show and my implementation would come out faster, but on the contrary: my code got even slower as the dataset grew, and Array.slice() simply outran anything I tried to write.
At this point I stopped trying to optimize my code any further and started investigating why Array.slice() is faster.

To find out why, I went and checked the implementation of Array.slice() in the V8 engine. You don’t need to read the implementation yourself to follow along.
This part of V8 is written in a language called Torque (.tq). Don’t worry if you can’t read its syntax; we’ll go through what’s happening together.

You can find it on GitHub.

V8 Engine Implementation Dissection

In the linked implementation we can see that it’s split into 4 main pieces of logic.

HandleSimpleArgumentsSlice

  1. Creates a new array.
  2. Validates the arguments structure and slice range.
  3. Bulk-copies the slice from source to result.

HandleFastAliasedSloppyArgumentsSlice

  1. Creates a new array.
  2. Processes elements in two phases:
    • Phase 1: For mapped parameters, reads values from context or unmapped storage (element-by-element)
    • Phase 2: Bulk-copies any remaining unmapped elements

This is slightly different from HandleSimpleArgumentsSlice because it must check a parameter map for aliased variables, requiring two-phase processing instead of direct bulk-copying.

HandleFastSlice

This is a dispatcher that routes to the appropriate handler based on the object type: strict-mode arguments objects go to HandleSimpleArgumentsSlice, sloppy-mode aliased arguments objects go to HandleFastAliasedSloppyArgumentsSlice, and regular fast arrays are extracted directly via ExtractFastJSArray.

Slow Path (generic)

  1. Creates a new array using ArraySpeciesCreate() (respects subclassing)
  2. Iterates through each index from start to end:
    • Checks if the property exists (including prototype chain)
    • If it exists: gets the value and adds it to the result
    • If it doesn’t exist: skips it (preserves holes)
  3. Sets the result array’s length and returns it

Examples

Now, that’s a lot of abstract talk, and it doesn’t really make much sense when you just look at it from afar. So how about we look at some JavaScript snippets and see which snippet would trigger which piece of logic in the V8 engine implementation.

Strict Mode Arguments

"use strict";
function strictFunc(a, b, c) {
  console.log(Array.prototype.slice.call(arguments, 1, 3));
}
strictFunc(10, 20, 30, 40);
// Output: [20, 30]

What happens:

  • The HandleSimpleArgumentsSlice logic kicks in.
  • arguments in strict mode is just a simple array-like object.
  • Elements are stored in a regular FixedArray.
  • The implementation does a bounds check: start=1, count=2, end=3.
  • Allocates a new JSArray with space for 2 elements.
  • Uses CopyElements() to bulk-copy elements[1] and elements[2] from arguments.
  • Returns the new array [20, 30].

Why this path:

  • Strict mode = no aliasing between parameters and arguments
  • Simple direct memory copy, no special checks needed

Sloppy Mode with Aliasing

function sloppyFunc(a, b, c) {
  a = 999; // This modifies arguments[0] too!
  console.log(Array.prototype.slice.call(arguments, 0, 2));
}
sloppyFunc(10, 20, 30);
// Output: [999, 20]

What happens:

  • arguments in sloppy mode has special “magic” where parameters are aliased.
  • When you change a, it changes arguments[0] (and vice versa).
  • The HandleFastAliasedSloppyArgumentsSlice() logic kicks in.
  • The implementation uses a SloppyArgumentsElements structure with:
    • A mapped_entries array that points to context slots for aliased params.
    • An unmappedElements array for remaining elements.
  • Processing the slice:
    • For index 0: Check mapped_entries[0] → points to context slot for a → reads 999.
    • For index 1: Check mapped_entries[1] → points to context slot for b → reads 20.
    • If there were more elements beyond parameters, it would bulk-copy from unmappedElements.
  • Returns [999, 20].

Why this path:

  • Sloppy mode allows parameter aliasing (deprecated behavior but still supported)
  • Must check the parameter map for each position to get the current value
  • More complex than strict mode due to the indirection

Regular Fast Array

const arr = [1, 2, 3, 4, 5];
const sliced = arr.slice(1, 4);
console.log(sliced);
// Output: [2, 3, 4]

What happens:

  • arr is a FastJSArrayForCopy (regular JavaScript array with packed elements)
  • The HandleFastSlice logic kicks in.
  • The implementation:
    • Checks that start=1 and count=3 are Smis (small integers)
    • Re-validates that start + count doesn’t exceed array length (defense against valueOf side effects)
    • Calls ExtractFastJSArray() which directly accesses the internal FixedArray backing store
    • Bulk-copies 3 elements starting from index 1
  • Returns [2, 3, 4]

Why this path:

  • No special checks needed, just bounds validation, since it’s a regular JS array.
  • Fastest path, direct memory access to the backing store.

Full-Array Cloning

const original = [1, 2, 3, 4, 5];
const cloned = original.slice(); // or .slice(0)
console.log(cloned);
// Output: [1, 2, 3, 4, 5]

What happens:

  • Detects that start is undefined (or 0) and end is undefined.
  • This check happens early, before the main HandleFastSlice dispatcher logic kicks in.
  • Takes special clone path: CloneFastJSArray().
  • This can create a Copy-On-Write (COW) array for efficiency.
  • Potentially just increases a reference count instead of copying.
  • Only copies memory when one array is modified.

Why this path:

  • Engine optimizes this scenario because it’s common to clone a full array.
  • Can avoid copying until necessary (COW optimization).

Array with Holes

const fast = [1, , , 4, 5]; // holes at indices 1 and 2
const slicedFast = fast.slice(0, 4);
console.log(slicedFast);
// Output: [1, empty × 2, 4]

const slow = [1, , , 4, 5]; // holes at indices 1 and 2
slow.__proto__ = { 2: "surprise" }; // Degrade the array structure
const slicedSlow = slow.slice(0, 4);
console.log(slicedSlow);
// Output: [1, empty, 'surprise', 4]

What happens:

This can take either the fast path or slow path depending on the array’s internal elements kind.

  • Fast path (most common):
    • V8 recognizes fast as HOLEY_ELEMENTS (a fast holey array type)
    • Goes through HandleFastSlice → ExtractFastJSArray
    • ExtractFastJSArray directly copies the backing store, preserving holes
    • Holes are represented as special TheHole values in memory
    • The copy operation includes these hole markers
    • Result: [1, hole, hole, 4] which displays as [1, empty × 2, 4]
  • Slow path (if structure is too complex):
    • If the array’s structure isn’t recognized as a fast holey type (e.g., slow has properties on the prototype, or the elements kind transitioned to something complex)
    • Falls back to the generic property-based approach:
      • For index 0: HasProperty(slow, 0) → true, GetProperty → 1
      • For index 1: HasProperty(slow, 1) → false, skip (hole preserved by not creating the property)
      • For index 2: HasProperty(slow, 2) → true (found on prototype!), GetProperty → ‘surprise’
      • For index 3: HasProperty(slow, 3) → true, GetProperty → 4
    • Result: Properties are created for all indices that exist either on the object or its prototype chain

Why this path:

  • Arrays with holes are common in JavaScript (e.g., from Array(10) or delete arr[2])
  • V8 optimizes for this with specialized element kinds: HOLEY_SMI_ELEMENTS, HOLEY_ELEMENTS, HOLEY_DOUBLE_ELEMENTS
  • Fast holey arrays store holes as a special TheHole sentinel value, allowing bulk copy operations
  • The fast path avoids expensive property existence checks for each index
  • Only when the array structure is degraded (due to prototype modifications, non-standard properties, etc.) does it bail to the slow path

Generic Object

const obj = {
  0: "a",
  1: "b",
  2: "c",
  length: 3,
};
const sliced = Array.prototype.slice.call(obj, 1, 3);
console.log(sliced);
// Output: ['b', 'c']

What happens:

  • Not a real array or arguments object, so all fast paths Bailout.
  • Falls through to the generic slow path.
  • For each index from k=1 to final=3:
    • Calls HasProperty(obj, 1) → checks if property "1" exists → true.
    • Calls GetProperty(obj, 1) → gets the value "b".
    • Calls FastCreateDataProperty(resultArray, 0, "b") → stores in result.
    • Repeats for index 2.
  • Returns ['b', 'c'].

Why this path:

  • Object doesn’t have fast array internals.
  • Must use generic property access mechanisms.
  • Works with any object, including proxies, objects with getters, etc.

Bailout Conditions

Here are a few grouped snippets showcasing when the logic would Bailout.

// Example 1: Result too large for new space
const huge = new Array(1000000);
huge.fill(1);
function f(...args) {
  // count >= kMaxNewSpaceFixedArrayElements → Bailout
  return Array.prototype.slice.call(args, 0);
}
f(...huge);

// Example 2: Modified length during valueOf
const tricky = [1, 2, 3, 4, 5];
tricky.slice(
  {
    valueOf() {
      tricky.length = 2; // Mutate during conversion!
      return 0;
    },
  },
  4,
);
// Fast path re-checks length, sees mismatch → Bailout

// Example 3: Arguments with non-standard elements
function weird() {
  arguments.__proto__ = [100, 200]; // Messing with prototype
  // Can't trust structure → Bailout
  return Array.prototype.slice.call(arguments, 0, 2);
}
weird(1, 2);

Speed Comparison

Now, even within the Array.slice() implementation we still see differences in speed: although all of these paths are O(n) (except for the full-array clone edge case), their actual speeds differ a lot.

Rank | Example | Time Complexity | Relative Speed
🥇 1 | Array clone .slice() | O(1) to O(n) | Fastest
🥈 2 | Regular fast array | O(n) | Very Fast
🥉 3 | Strict arguments | O(n) | Fast
4 | Fast holey array | O(n) | Fast
5 | Sloppy aliased arguments | O(n) | Moderate
6 | Generic object | O(n) | Slow
7 | Bailout conditions | O(n) | Slow

📝 Note:

The “Relative Speed” values are approximate and depend on:

  • Array/object size.
  • V8 version and optimizations.
  • CPU cache effects.
  • Actual property lookup costs for objects.

Why Is Array.slice() Faster?

After dissecting the V8 implementation, we can now understand why Array.slice() consistently outperforms hand-written alternatives. The answer lies in multiple optimization layers that work together.

📝 Note:

While we’re focusing on V8’s implementation (used in Chrome, Node.js, and Edge), other JavaScript engines like SpiderMonkey (Firefox), JavaScriptCore (Safari), and Hermes (React Native) have similar optimizations with their own approaches. The fundamental principles—direct memory access, type specialization, and bulk operations—apply across all modern engines. We’re using V8 as our reference because it’s what we’ve examined throughout this article, but the performance characteristics and the reasons why built-ins are faster remain consistent across engines.

Direct Memory Access

Unlike JavaScript code that must go through property access APIs, V8’s built-in implementation directly accesses the internal backing store:

// Our JavaScript code (even optimized):
const result = [];
for (let i = start; i < end; i++) {
  result.push(array[i]); // Property access overhead
}

// What V8 does internally:
// Direct memcpy from backing store:
// result.elements = Copy(array.elements, start, count)

This eliminates property lookup overhead, prototype chain traversal, and bounds checking on every access.

Type Specialization

V8 maintains different code paths for different object types (FastJSArrayForCopy, JSStrictArgumentsObject, etc.). When it knows the exact type, it can:

  • Skip type checks.
  • Use optimized memory layouts.
  • Apply type-specific optimizations.

Our JavaScript code can never achieve this level of specialization because it operates at a higher abstraction level.
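We can still observe the effect of this specialization from plain JavaScript. Here’s a rough sketch (timings are illustrative; exact numbers depend on your engine and hardware) comparing a real array, which V8 can bulk-copy on a fast path, with an array-like object, which forces the generic slow path we saw earlier:

// A real array: V8 knows its elements kind and can bulk-copy it.
const realArray = Array.from({ length: 100_000 }, (_, i) => i);

// An array-like object: no fast array internals, so slice() must fall back
// to generic HasProperty/GetProperty lookups for every index.
const arrayLike = { length: 100_000 };
for (let i = 0; i < 100_000; i++) arrayLike[i] = i;

console.time("real array");
realArray.slice(0, 50_000);
console.timeEnd("real array");

console.time("array-like object");
Array.prototype.slice.call(arrayLike, 0, 50_000);
console.timeEnd("array-like object");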

Bulk Operations vs. Element-by-Element

The fast paths use CopyElements() which is essentially a low-level memory copy operation (similar to C’s memcpy):

  • Fast path: Single bulk copy instruction → ~1-5 CPU cycles per element
  • JavaScript loop: Multiple operations per element → ~50-500 CPU cycles per element

Even JIT-compiled JavaScript loops can’t match raw memory copying performance.
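You can get a rough feel for the gap with a quick sketch like the one below (not a rigorous benchmark; absolute numbers vary a lot by machine and engine), copying the same range once with slice() and once element by element:

const source = new Array(5_000_000).fill(1);

console.time("slice (bulk copy)");
const viaSlice = source.slice(0, 4_000_000);
console.timeEnd("slice (bulk copy)");

console.time("manual loop (element-by-element)");
const viaLoop = [];
for (let i = 0; i < 4_000_000; i++) {
  viaLoop.push(source[i]); // one indexed read + one push per element
}
console.timeEnd("manual loop (element-by-element)");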

Pre-allocated Result Arrays

V8 knows the exact size needed upfront and allocates the result array once:

// V8 allocates exactly what's needed:
const result: JSArray = AllocateJSArray(
  ElementsKind::HOLEY_ELEMENTS,
  arrayMap,
  count,
  count
);

// Our JavaScript code might cause multiple allocations as the array grows
const result = [];
for (...) {
  result.push(item); // May trigger reallocation and copying
}

Copy-On-Write (COW) Optimization

For full array cloning (.slice() with no arguments), V8 can create a COW array:

const original = [1, 2, 3, 4, 5];
const clone = original.slice(); // No actual copy yet!

The clone initially just points to the same backing store with increased reference count. Memory is only copied when one array is modified. Our JavaScript code can never implement this because it doesn’t have access to V8’s internal reference counting.

Inline Caching and Hot Path Optimization

Since Array.slice() is a built-in, V8’s JIT compiler can:

  • Inline the entire operation when called frequently
  • Generate specialized machine code for specific call sites
  • Eliminate function call overhead entirely

User-written functions face more conservative optimization assumptions.

Skip the JavaScript Engine Overhead

Every line of JavaScript code you write must be:

  1. Parsed.
  2. Compiled to bytecode.
  3. Interpreted or JIT-compiled to machine code.
  4. Subject to deoptimization if assumptions break.

Built-in operations bypass most of this pipeline and execute directly as native machine code.

Hardware-Level Optimizations

Modern CPUs have specialized instructions for memory operations. V8’s CopyElements() can leverage:

  • SIMD instructions (copy multiple elements in one instruction).
  • CPU cache prefetching.
  • Aligned memory access patterns.

JavaScript loops, even when optimized, rarely achieve this level of hardware utilization.

Is It Really O(n²)?

Even after all of the previous discussion, something still feels off. We missed something really important. I mean, Array.slice() can legitimately be faster, but this is waaaay faster than expected.

In the first section we laser-focused on the array only, and we treated k (the size of a chunk) and m (the number of chunks) as unrelated, separate variables, so while simplifying we just called it O(n²).

If we look at the big picture, k and m are related, and the relation between them is division: m depends on k, and how big or small k is directly determines m.

If we focus on this line:

for (let index = 0; index < arr.length; index += size) {

You can see that we increment the index on each iteration by the size of the chunk (k), which means this loop will run n / k times, where n is the total length of the array.
But wait, doesn’t this loop’s iteration count equal the number of chunks?
That means the parent loop’s iteration count is m, which equals n / k, while the inner slice still copies k elements per iteration.
And since the total notation is O(k × m), substituting the new values gives O(k × (n / k)).

Oh! O(k × (n / k)) is actually equal to O(n). This means this piece of logic grows linearly with the size of the input array.
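If you’d rather convince yourself empirically than algebraically, you can count the element copies the slicing loop performs (a throwaway sketch; n and size mirror the benchmark used in this post):

const n = 10_000_000;
const size = 360;

let copiedElements = 0;
for (let index = 0; index < n; index += size) {
  // Array.slice() copies at most `size` elements per chunk
  copiedElements += Math.min(size, n - index);
}

console.log(copiedElements); // 10000000 — exactly n, nowhere near n²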

This makes more sense now. Array.slice() isn’t winning only because it’s optimized, natively running code that does bulk copies; it’s also performing essentially the same number of operations as the other approaches. And since all the approaches do the same amount of work, the one with the fastest operations, Array.slice(), obviously comes out on top.

Over-simplifying the big O notation hurts your visibility a lot whenever you blindly compare the performance of different pieces of logic without any benchmarking.

Conclusion

As we saw in both our implementations and in the pieces of logic inside V8’s slice implementation, almost all of them are O(n), yet they produce very different benchmarks.

📝 Note:

If you’re curious about how I benchmarked my chunk method, here’s the implementation of the benchmark() function. It passes an array of 10 million items as the first argument with a chunk size of 360, does 300 passes of the same logic, stores the timing of each pass, and returns the average.

function benchmark(cb) {
  const list = new Array(10_000_000).fill(0);
  const timings = [];
  console.log("Calculating average...");

  // Do 300 passes of the callback function
  for (let iteration = 0; iteration < 300; iteration++) {
    performance.mark("benchmark-start");
    cb(list, 360);
    // Store the timing of each iteration
    timings.push(
      performance.measure("benchmark-end", "benchmark-start").duration,
    );
  }

  // return the average of the 300 passes timings
  return (
    timings.reduce((partialSum, a) => partialSum + a, 0) / timings.length
  );
}

Why Is The Big O Notation Not Enough?

That’s because the big O notation is an over-simplification of real-life scenarios. If you write a piece of logic that’s O(n), it isn’t necessarily faster than the O(n²) approach you had previously, because the number of operations isn’t the only metric here; the speed of each of those n operations is a big factor too.

But also, the over-simplification can hurt your visibility a ton if you don’t know what you’re doing mathematically, like we highlighted in the previous section.

Example

Here’s a rough example with imaginary numbers to showcase this issue.

  • If your O(n²) logic uses fast operations that take ~1 nanosecond per operation:
    • With an input of 1 million items, the O(n²) approach performs 1,000,000,000,000 operations (Yup, A LOT).
    • How long would it take? 1,000,000,000,000 × 1 nanosecond. That’s 1000 seconds.
  • If your O(n) logic uses slow operations that take ~1 millisecond per operation:
    • With an input of 1 million items, the O(n) approach performs 1,000,000 operations (way fewer than the previous).
    • How long would it take? 1,000,000 × 1 millisecond. That’s 1000 seconds.

As we can see, both take the same time overall even though their time complexities are vastly different.
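Spelled out in code (using the same made-up per-operation costs as above):

const items = 1_000_000;

const quadraticOps = items ** 2;           // 1e12 operations
const fastOpSeconds = 1e-9;                // ~1 nanosecond each
console.log(quadraticOps * fastOpSeconds); // 1000 (seconds)

const linearOps = items;                   // 1e6 operations
const slowOpSeconds = 1e-3;                // ~1 millisecond each
console.log(linearOps * slowOpSeconds);    // 1000 (seconds)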

Graph Comparison

Whenever you check a graph of the “Big O Notation” online, you will usually see a line chart that compares the different over-simplified notations, graphed against input size and the number of operations.

Line chart comparing O(n) and O(n²) algorithms, showing O(n) performing better as input increases.

This is accurate and a correct representation, but it doesn’t tell the whole story.
The number of operations is one part of it, but a big part is how fast each operation is.

To account for that, you can get a more accurate representation if you start plotting against time instead of the number of operations. When you do, you should see something similar to the following (exaggerated) graph.

Line chart comparing O(n′), O(n′′), and O(n²) algorithms in relation to time and input size.

As you can see, it tells a totally different story. Yes, O(n′′) is linear, and yes, it handles growing input better since it doesn’t scale as fast as O(n²), but within the sample size we graphed, O(n²) is actually faster, because its n operations are much faster than the n′′ operations. So even though squaring n means doing more operations, each operation is faster, and in the end you get a faster result early on.

If you zoom out on that graph, you’ll eventually see O(n′′) doing better than O(n²), but the intersection point at which O(n′′) starts winning sits at an input size that’s unrealistic for the O(n²) case, which doesn’t really matter in your real-life scenarios.

📝 Note:

This graph is not built with real data; it’s a very exaggerated graph to showcase how plotting against time tells a different story.

Key Takeaways

  1. Don’t over-simplify the “Big O Notation” without knowing what you’re doing mathematically. Over-simplification translates to inaccuracy.
  2. Don’t rely solely on the “Big O Notation” of your logic to give you an accurate representation of how performant your code is.
  3. You should (almost always) use built-in functions. They’re most likely faster than your implementation because of all of the reasons mentioned above.
  4. You should benchmark your code whenever you can to get a more accurate representation of how performant your code actually is in run-time.
  5. You should always use strict mode in JavaScript because sloppy (aka non-strict) mode comes with a lot of uncertainties, so it’s less optimized and your code ends up slower.
  6. If you’re working with big arrays and you know ahead of time how big the array will be, instantiate it with that size instead of creating an empty array and pushing items to it in a loop (be careful: this makes the array holey, so benchmark your code to see if it actually gives you performance gains over the other approach; see the sketch below).
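As a minimal sketch of how you might benchmark that last takeaway yourself (whether pre-sizing wins depends on the engine and on how the array is used afterwards):

const n = 5_000_000;

console.time("pre-sized array");
const preSized = new Array(n); // holey until every slot is filled
for (let i = 0; i < n; i++) preSized[i] = i;
console.timeEnd("pre-sized array");

console.time("push into empty array");
const pushed = []; // grows (and reallocates) as we push
for (let i = 0; i < n; i++) pushed.push(i);
console.timeEnd("push into empty array");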
A potato with text beneath it saying 'Sorry for the long post here's a potato'
