10 Practical Ways To Reduce Memory Usage in Production Next.js Apps


Next.js is a sophisticated framework with a large server-side runtime, a React server component pipeline, a flexible routing layer, and a significant amount of build-time and run-time code. All of this power has a cost: memory usage. A single production Next.js process commonly consumes 150 MB to 300 MB of resident memory with no traffic. Under load, or with large caches, it can easily spike well above that.

Teams that deploy several Next.js applications to the same DigitalOcean droplet often discover this the hard way. Running four small apps on a single 2 GB instance can lead to memory pressure, system-level swapping, stalled requests, and unexpected restarts. The framework is not lightweight by default, especially with server components and standalone builds.

Fortunately, there are several proven techniques to reduce baseline and peak memory usage. Below are ten practical approaches, each backed by examples, explanations, and actionable code snippets.


1. Remove Request-Scoped Data From Module Scope

Node.js retains module-scoped and global references for the lifetime of the process. If request-specific data is placed at module scope, the process's memory footprint grows with every request.

Anti-pattern

const store = []  // never cleared

export async function POST(request) {
  const body = await request.json()
  store.push(body) // grows forever
  return Response.json({ ok: true })
}

Improved pattern

Keep request data inside the handler function:

export async function POST(request) {
  const body = await request.json()
  const processed = await handleData(body)
  return Response.json({ result: processed })
}
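If some cross-request state is genuinely needed (say, a recent-events buffer for a debug endpoint), keep it explicitly bounded instead of letting it grow. A minimal sketch; `recordEvent` and the 100-entry cap are illustrative choices, not part of any Next.js API:

```javascript
// If some cross-request state is truly needed, cap it explicitly.
const MAX_EVENTS = 100
const recentEvents = []

function recordEvent(event) {
  recentEvents.push(event)
  // Drop the oldest entry once the cap is reached.
  if (recentEvents.length > MAX_EVENTS) recentEvents.shift()
  return recentEvents.length
}
```

The fixed cap turns an open-ended leak into a small, predictable allocation.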

2. Replace Ad Hoc Caches With Bounded LRU Caches

Unbounded caches behave like slow memory leaks.

import { LRUCache } from 'lru-cache'

const cache = new LRUCache({
  max: 500,       // at most 500 entries
  ttl: 1000 * 60, // entries expire after one minute
})

export async function GET() {
  const key = 'expensive'
  const cached = cache.get(key)

  if (cached) return Response.json({ cached: true, data: cached })

  const data = await fetchExpensiveResult()
  cache.set(key, data)

  return Response.json({ cached: false, data })
}
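When adding a dependency is not an option, the same idea fits in a few lines using a Map's insertion order. A rough sketch, not production-hardened and without TTL support:

```javascript
// Minimal LRU built on Map's insertion order: the first key is always
// the least recently used entry.
class TinyLRU {
  constructor(max = 500) {
    this.max = max
    this.map = new Map()
  }

  get(key) {
    if (!this.map.has(key)) return undefined
    const value = this.map.get(key)
    // Re-insert to mark this key as most recently used.
    this.map.delete(key)
    this.map.set(key, value)
    return value
  }

  set(key, value) {
    this.map.delete(key)
    this.map.set(key, value)
    if (this.map.size > this.max) {
      // Evict the least recently used entry.
      this.map.delete(this.map.keys().next().value)
    }
  }
}
```

The point is the bound, not the implementation: any cache without a `max` is an unbounded one.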

3. Stream and Paginate Instead of Allocating Large Objects

Large dataset allocations spike memory. Streaming avoids building large in-memory objects.

Streaming pattern

export async function GET() {
  const stream = new ReadableStream({
    async start(controller) {
      const enc = new TextEncoder()
      controller.enqueue(enc.encode('['))
      let first = true

      for await (const row of db.getCursor()) {
        if (!first) controller.enqueue(enc.encode(','))
        controller.enqueue(enc.encode(JSON.stringify(row)))
        first = false
      }

      controller.enqueue(enc.encode(']'))
      controller.close()
    },
  })

  return new Response(stream, {
    headers: { 'Content-Type': 'application/json' },
  })
}
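Pagination achieves the same goal when the client can consume pages: only one page of rows is resident at a time. A sketch, where `db.rows.findMany` stands in for whatever query layer the app actually uses:

```javascript
const PAGE_SIZE = 100

// Translate a ?page= query value into a skip/take window, clamped so a
// malformed query string can never request a huge slice.
function pageToRange(page, pageSize = PAGE_SIZE) {
  const n = Math.trunc(Number(page))
  const p = Number.isFinite(n) && n > 0 ? n : 1
  return { skip: (p - 1) * pageSize, take: pageSize }
}

export async function GET(request) {
  const { searchParams } = new URL(request.url)
  const { skip, take } = pageToRange(searchParams.get('page'))
  const rows = await db.rows.findMany({ skip, take }) // one page in memory
  return Response.json({ rows, pageSize: take })
}
```

Either way, peak memory is proportional to one chunk of the data rather than the whole result set.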

4. Reduce Process Count and Avoid Clustering

Every Next.js process includes its own Node.js heap, React server component runtime, and worker pool. Running multiple instances multiplies memory usage linearly. On small droplets, two or three standalone Next.js processes may already exceed safe memory limits.

PM2 example (avoid cluster mode)

module.exports = {
  apps: [
    {
      name: 'next',
      script: '.next/standalone/server.js',
      instances: 1,           // do not spawn multiple workers
      exec_mode: 'fork',
      max_memory_restart: '350M',
    },
  ],
}

systemd-tailored example (preferred for standalone)

Systemd does not spawn multiple processes by default, which is ideal for memory control. The key is to avoid multiple services unless absolutely necessary, and to explicitly cap memory usage.

# /etc/systemd/system/next-app.service

[Unit]
Description=Next.js standalone app
After=network.target

[Service]
Type=simple
WorkingDirectory=/var/www/next-app

ExecStart=/usr/local/bin/node .next/standalone/server.js

# Constrain memory usage at the cgroup level
MemoryMax=700M
MemoryHigh=650M

# Optional heap constraint
Environment=NODE_OPTIONS=--max-old-space-size=512

Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

Why this helps

  • Only one Next.js server process runs.
  • MemoryMax prevents runaway memory usage.
  • MemoryHigh encourages early reclaiming before the hard ceiling.
  • No clustering, no internal worker forks, no duplication of heap.

On a production droplet, systemd with explicit memory controls offers the most predictable and stable memory footprint for standalone Next.js deployments.


5. Constrain Node.js Heap With NODE_OPTIONS

By default, Node.js sizes its heap based on the total memory of the machine, so a single process can grow far beyond what a small droplet can spare. Setting an explicit limit keeps allocation predictable.

Environment=NODE_OPTIONS=--max-old-space-size=512

This forces the process to operate within predictable memory boundaries.
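To confirm the flag took effect, compare V8's reported heap ceiling against the configured value (the reported limit includes some V8 overhead, so expect it to sit slightly above 512 MB):

```javascript
import v8 from 'node:v8'

// heap_size_limit is the ceiling V8 will enforce for the heap; it
// reflects --max-old-space-size when that flag is set.
const limitMB = Math.round(v8.getHeapStatistics().heap_size_limit / 1024 / 1024)
console.log(`V8 heap limit: ${limitMB} MB`)
```

Run this once at startup and log the result; a limit far above 512 MB means the flag never reached the process.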


6. Lazy Load Heavy Dependencies

Avoid importing heavy libraries globally if they are not needed on every request.

Anti-pattern

import largeLib from 'heavy-library'

export async function GET() {
  return largeLib.process()
}

Improved pattern

export async function GET() {
  // import() resolves to the module namespace, so unwrap the default export
  const { default: largeLib } = await import('heavy-library')
  return largeLib.process()
}

Lazy imports reduce baseline memory usage.
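When several routes share the same heavy dependency, a small helper centralizes the pattern and makes the deferred load explicit. Node caches the module after the first import, so repeated calls stay cheap; the `.default` access below assumes the library uses a default export, and `'heavy-library'` remains a placeholder name:

```javascript
// Cache the import promise so each specifier is requested at most once,
// and only when a request first needs it.
const moduleCache = new Map()

function lazyImport(specifier) {
  if (!moduleCache.has(specifier)) moduleCache.set(specifier, import(specifier))
  return moduleCache.get(specifier)
}

export async function GET() {
  const largeLib = (await lazyImport('heavy-library')).default
  return largeLib.process()
}
```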


7. Use Server Component Caching Carefully

React's cache() helper and Next.js fetch caching can retain large objects for longer than expected, especially when keyed by high-cardinality arguments such as user ids.

Anti-pattern

const getUserData = cache(async (id) => {
  return db.userData.findMany({ where: { id } })
})

This accumulates many large arrays.

Safer usage

const getCountries = cache(async () => {
  return db.countries.findMany()
})

Disable caching for large responses:

await fetch(apiUrl, { cache: 'no-store' })

8. Move Heavy CPU or Memory Tasks Out of the App

Next.js is not meant to perform CPU-intensive work inside request handlers.

Anti-pattern

export async function POST(req) {
  const body = await req.json()
  const pdf = await generatePdf(body) // heavy
  return new Response(pdf)
}

Offloading pattern

export async function POST(req) {
  const payload = await req.json()
  await queue.add('pdf-job', { payload }) // queue: e.g. BullMQ, pg-boss
  return Response.json({ queued: true })
}

Background workers keep memory usage stable.
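When a separate worker service or queue is not yet available, Node's built-in worker_threads module can at least keep heavy work off the request-serving thread, so its allocations are released when the worker exits. A self-contained sketch, with a trivial loop standing in for real PDF generation:

```javascript
import { Worker } from 'node:worker_threads'

// The inline source below stands in for a real CPU-heavy task; it runs
// as CommonJS inside the worker thread.
const workerSource = `
  const { parentPort } = require('node:worker_threads')
  parentPort.on('message', (payload) => {
    let acc = 0
    for (let i = 0; i < payload.n; i++) acc += i
    parentPort.postMessage(acc)
  })
`

// Run one job on a fresh worker thread; its heap is freed on terminate.
function runInWorker(payload) {
  return new Promise((resolve, reject) => {
    const worker = new Worker(workerSource, { eval: true })
    worker.once('message', (result) => {
      resolve(result)
      worker.terminate()
    })
    worker.once('error', reject)
    worker.postMessage(payload)
  })
}
```

Spawning a worker per job is simple but not free; under sustained load, a small worker pool (for example, the piscina library) is the usual next step.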


9. Remove Dev-Only Overhead and Reduce Logging

Verbose logging and debug tooling increase memory usage in production.

Anti-pattern

console.log(JSON.stringify(hugeObject))

Better practice

console.log({ id: hugeObject.id, type: hugeObject.type })

Disable source maps and dev utilities in production builds.
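A small helper makes this discipline mechanical: pass debug payloads as a callback so large objects are never even constructed unless debug logging is enabled. `debugLog` is an illustrative name, not a standard API:

```javascript
// Pass debug payloads as a callback so the (possibly large) object is
// only built when debug logging is actually enabled.
function debugLog(enabled, message, buildMeta) {
  if (!enabled) return false
  console.log(message, buildMeta())
  return true
}

const isDev = process.env.NODE_ENV !== 'production'
debugLog(isDev, 'request payload', () => ({ note: 'built only when enabled' }))
```

In production the callback never runs, so neither the object nor its serialized form is ever allocated.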


10. Profile the App and Fix Hot Spots

Tracking memory usage at runtime reveals real issues.

Example diagnostic endpoint

export async function GET() {
  const mem = process.memoryUsage()
  return Response.json({
    rssMB: Math.round(mem.rss / 1024 / 1024),
    heapUsedMB: Math.round(mem.heapUsed / 1024 / 1024),
    externalMB: Math.round(mem.external / 1024 / 1024),
  })
}

Look for memory that grows over time without resetting.
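A lightweight in-process sampler built on the same process.memoryUsage() call makes that growth visible over time. A sketch; the one-minute interval and 60-sample window are illustrative:

```javascript
function sampleMemoryMB() {
  const { rss, heapUsed } = process.memoryUsage()
  return {
    at: Date.now(),
    rssMB: Math.round(rss / 1024 / 1024),
    heapUsedMB: Math.round(heapUsed / 1024 / 1024),
  }
}

const samples = []

function recordSample(maxSamples = 60) {
  samples.push(sampleMemoryMB())
  if (samples.length > maxSamples) samples.shift() // keep the window bounded
  return samples[samples.length - 1]
}

// In the app: setInterval(recordSample, 60_000).unref()
// A steadily rising rssMB with a flat heapUsedMB often points at
// external or native memory rather than JS objects.
```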


Summary

Next.js is a powerful framework, but it is not lightweight. When deployed in standalone mode, each instance has a substantial memory footprint, and running multiple Next.js processes on a single droplet consumes more memory than expected. The techniques outlined above can meaningfully reduce memory usage and make Next.js more efficient on small Linux hosts.

By limiting process count, bounding caches, streaming data, constraining the Node heap, lazily loading dependencies, offloading heavy work, and profiling real memory usage, development teams can achieve stable performance and predictable resource usage even on small DigitalOcean droplets.

These ten practices allow Next.js applications to operate more efficiently and help teams avoid out-of-memory conditions, unexpected restarts, and degraded performance during high traffic.