Next.js is a sophisticated framework with a large server-side runtime, a React server component pipeline, a flexible routing layer, and a significant amount of build-time and run-time code. All of this power has a cost: memory usage. A single production Next.js process commonly consumes 150 MB to 300 MB of resident memory with no traffic. Under load, or with large caches, it can easily spike well above that.
Teams who deploy several Next.js applications to the same DigitalOcean droplet often discover this reality the hard way. Running four small apps on a single 2 GB instance can lead to memory pressure, system-level swapping, stalled requests, and unexpected restarts. The framework is not lightweight by default, especially when used with server components and standalone builds.
Fortunately, there are several proven techniques to reduce baseline and peak memory usage. Below are ten practical approaches, each backed by examples, explanations, and actionable code snippets.
Node.js will retain any module-scoped or global references indefinitely. If request-specific data is placed at module scope, the memory footprint grows continuously.
const store = [] // module scope: retained for the life of the process

export async function POST(request) {
  const body = await request.json()
  store.push(body) // grows forever, never garbage-collected
  return Response.json({ ok: true })
}
Keep request data inside the handler function:
export async function POST(request) {
  const body = await request.json()
  const processed = await handleData(body)
  return Response.json({ result: processed })
}
Unbounded caches behave like slow memory leaks.
import { LRUCache } from 'lru-cache' // named export as of lru-cache v7

const cache = new LRUCache({
  max: 500, // hard cap on entry count
  ttl: 1000 * 60, // evict entries after one minute
})

export async function GET() {
  const key = 'expensive'
  const cached = cache.get(key)
  if (cached) return Response.json({ cached: true, data: cached })
  const data = await fetchExpensiveResult()
  cache.set(key, data)
  return Response.json({ cached: false, data })
}
Large dataset allocations spike memory. Streaming avoids building large in-memory objects.
export async function GET() {
  const stream = new ReadableStream({
    async start(controller) {
      const enc = new TextEncoder()
      controller.enqueue(enc.encode('['))
      let first = true
      for await (const row of db.getCursor()) {
        if (!first) controller.enqueue(enc.encode(','))
        controller.enqueue(enc.encode(JSON.stringify(row)))
        first = false
      }
      controller.enqueue(enc.encode(']'))
      controller.close()
    },
  })
  return new Response(stream, {
    headers: { 'Content-Type': 'application/json' },
  })
}
Every Next.js process includes its own Node.js heap, React server component runtime, and worker pool. Running multiple instances multiplies memory usage linearly. On small droplets, two or three standalone Next.js processes may already exceed safe memory limits.
module.exports = {
  apps: [
    {
      name: 'next',
      script: '.next/standalone/server.js',
      instances: 1, // do not spawn multiple workers
      exec_mode: 'fork',
      max_memory_restart: '350M',
    },
  ],
}
Systemd does not spawn multiple processes by default, which is ideal for memory control. The key is to avoid multiple services unless absolutely necessary, and to explicitly cap memory usage.
# /etc/systemd/system/next-app.service
[Unit]
Description=Next.js standalone app
After=network.target

[Service]
Type=simple
WorkingDirectory=/var/www/next-app
ExecStart=/usr/local/bin/node .next/standalone/server.js
# Force a single process and constrain memory usage
MemoryMax=700M
MemoryHigh=650M
# Optional heap constraint
Environment=NODE_OPTIONS=--max-old-space-size=512
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
On a production droplet, systemd with explicit memory controls offers the most predictable and stable memory footprint for standalone Next.js deployments.
By default, V8 sizes its heap based on the memory available on the host, so a single Node process on a small droplet can grow far larger than expected. Setting an explicit limit keeps allocation within a known bound.
Environment=NODE_OPTIONS=--max-old-space-size=512
This forces the process to operate within predictable memory boundaries.
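To confirm the flag took effect, the effective heap ceiling can be read at runtime through Node's built-in v8 module. A quick sketch (run it under the same NODE_OPTIONS; the exact number will sit slightly above the flag because V8 reserves some regions of its own):

```javascript
import v8 from 'node:v8'

// heap_size_limit reflects --max-old-space-size plus V8's own
// reserved regions, so expect a value a bit above the flag.
const limitMB = Math.round(v8.getHeapStatistics().heap_size_limit / 1024 / 1024)
console.log(`V8 heap limit: ~${limitMB} MB`)
```

Logging this once at startup makes it obvious in the service logs when a deploy accidentally drops the NODE_OPTIONS environment line.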
Avoid importing heavy libraries globally if they are not needed on every request.
import largeLib from 'heavy-library'

export async function GET() {
  return largeLib.process()
}
Instead, load it on demand inside the handler:
export async function GET() {
  // import() resolves to a module namespace, so grab the default export
  const { default: largeLib } = await import('heavy-library')
  return largeLib.process()
}
Lazy imports reduce baseline memory usage.
The cache() helper and fetch caching can accidentally store large objects globally.
const getUserData = cache(async (id) => {
  return db.userData.findMany({ where: { id } })
})
Every distinct id adds another cached array, so per-user data accumulates quickly. Reserve caching for small, shared datasets:
const getCountries = cache(async () => {
  return db.countries.findMany()
})
Disable caching for large responses:
await fetch(apiUrl, { cache: 'no-store' })
Next.js is not meant to perform CPU-intensive work inside request handlers.
export async function POST(req) {
  const body = await req.json()
  const pdf = await generatePdf(body) // heavy work ties up the handler
  return new Response(pdf)
}
Instead, enqueue the job and let a background worker pick it up:
export async function POST(req) {
  const { payload } = await req.json()
  await queue.add('pdf-job', { payload })
  return Response.json({ queued: true })
}
Background workers keep memory usage stable.
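If a full job queue is more than the project needs, Node's built-in worker_threads module can at least move the computation off the request path. A minimal sketch, with an inline eval'd script standing in for the real PDF generation:

```javascript
import { Worker } from 'node:worker_threads'

// Run a CPU-heavy task in a worker thread so the main event loop
// (and its heap) stays small. The inline script below is a stand-in
// for real work such as PDF generation.
export function runInWorker(payload) {
  return new Promise((resolve, reject) => {
    const worker = new Worker(
      `
      const { parentPort, workerData } = require('node:worker_threads')
      // pretend this is the expensive computation
      parentPort.postMessage({ bytes: workerData.text.length })
      `,
      { eval: true, workerData: payload }
    )
    worker.once('message', resolve)
    worker.once('error', reject)
  })
}
```

The worker exits as soon as its script finishes, releasing its memory back to the operating system; the handler only ever holds the small result message.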
Verbose logging and debug tooling increase memory usage in production.
console.log(JSON.stringify(hugeObject)) // serializes the entire object into the log buffer
Log only the fields that matter:
console.log({ id: hugeObject.id, type: hugeObject.type })
Disable source maps and dev utilities in production builds.
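One simple pattern is to gate the verbose form behind an environment flag so production output stays compact. A sketch (DEBUG_VERBOSE is an illustrative name, not a Next.js convention):

```javascript
// Serialize the full object only when explicitly asked for;
// production logs carry just the identifying fields.
export function formatLog(obj) {
  if (process.env.DEBUG_VERBOSE === '1') {
    return JSON.stringify(obj) // full dump, debugging only
  }
  return JSON.stringify({ id: obj.id, type: obj.type }) // compact
}

console.log(formatLog({ id: 7, type: 'order', items: [] }))
```

Centralizing the choice in one helper also makes it easy to swap in a structured logger later without touching every call site.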
Tracking memory usage at runtime reveals real issues.
export async function GET() {
  const mem = process.memoryUsage()
  return Response.json({
    rssMB: Math.round(mem.rss / 1024 / 1024),
    heapUsedMB: Math.round(mem.heapUsed / 1024 / 1024),
    externalMB: Math.round(mem.external / 1024 / 1024),
  })
}
Look for memory that grows over time without resetting.
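To catch that drift without polling the endpoint by hand, a small in-process sampler can log the same numbers on an interval (a sketch; the 60-second period is arbitrary):

```javascript
// Log RSS and heap usage periodically so a slow leak shows up in
// journald or PM2 logs as a steadily climbing number.
const toMB = (bytes) => Math.round(bytes / 1024 / 1024)

export function sampleMemory() {
  const { rss, heapUsed } = process.memoryUsage()
  return `[mem] rss=${toMB(rss)}MB heapUsed=${toMB(heapUsed)}MB`
}

// unref() so the timer never keeps a shutting-down process alive
setInterval(() => console.log(sampleMemory()), 60_000).unref()
```

Grepping the service log for `[mem]` then gives a rough time series for free, with no external monitoring agent eating into the droplet's memory budget.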
Next.js is a powerful framework, but it is not lightweight. When deployed in standalone mode, each instance has a substantial memory footprint, and running multiple Next.js processes on a single droplet consumes more memory than expected. The techniques outlined above can meaningfully reduce memory usage and make Next.js more efficient on small Linux hosts.
By limiting process count, bounding caches, streaming data, constraining the Node heap, lazily loading dependencies, offloading heavy work, and profiling real memory usage, development teams can achieve stable performance and predictable resource usage even on small DigitalOcean droplets.
These ten practices allow Next.js applications to operate more efficiently and help teams avoid out-of-memory conditions, unexpected restarts, and degraded performance during high traffic.