A read_file tool with no size limit will attempt to load a 5GB log file into Node.js heap, causing an out-of-memory crash that takes down every tool in the server simultaneously. A database query tool with no LIMIT clause can return millions of rows, exhausting memory and making the response unparseable. CWE-400 (uncontrolled resource consumption) and CWE-770 (allocation of resources without limits) are the mechanisms. These are not theoretical edge cases — agentic AI workflows routinely hit large files and broad queries when exploring unfamiliar codebases or databases.
Medium because resource exhaustion from oversized responses crashes the server process, but it requires the AI or user to trigger an operation against genuinely large data sources.
Add explicit size checks before loading data and document the limits in the tool description.
// src/tools/read-file.ts — bounded file read
import { promises as fs } from 'node:fs'

const MAX_FILE_SIZE = 10 * 1024 * 1024 // 10 MB

server.tool('read_file', 'Read a file (max 10 MB). For larger files, use read_file_range.', ...,
  async ({ path }) => {
    const stat = await fs.stat(path)
    if (stat.size > MAX_FILE_SIZE) {
      return {
        content: [{ type: 'text', text:
          `File is ${(stat.size / 1024 / 1024).toFixed(1)} MB — exceeds 10 MB limit. ` +
          `Use read_file_range to read specific byte ranges.`,
        }],
        isError: true,
      }
    }
    return { content: [{ type: 'text', text: await fs.readFile(path, 'utf-8') }] }
  },
)
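The error message above steers the model toward a `read_file_range` companion tool. Its core can be a small helper that clamps the requested window before touching the file. This is a minimal sketch; the helper name `readFileRange` and the 1 MB per-range cap are illustrative assumptions, not part of the check:

```typescript
import { open } from 'node:fs/promises'

// Illustrative cap: no single range read may pull more than 1 MB into memory.
const MAX_RANGE_BYTES = 1 * 1024 * 1024

// Read up to `length` bytes from `path` starting at byte `offset`,
// clamped so one call can never allocate more than MAX_RANGE_BYTES.
export async function readFileRange(path: string, offset: number, length: number): Promise<string> {
  const handle = await open(path, 'r')
  try {
    const buffer = Buffer.alloc(Math.min(length, MAX_RANGE_BYTES))
    const { bytesRead } = await handle.read(buffer, 0, buffer.length, offset)
    return buffer.subarray(0, bytesRead).toString('utf-8')
  } finally {
    await handle.close()
  }
}
```

With a helper like this behind the companion tool, the bounded `read_file` can refuse oversized files outright while still letting the model page through them deliberately.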
ID: mcp-server.security-capabilities.resource-limits
Severity: medium
What to look for: Enumerate every operation that could consume unbounded resources: file reads (loading a 10GB file into memory), database queries (unbounded SELECT), HTTP requests (downloading large files), search operations (matching millions of files), command execution (CPU-intensive processes), and recursive operations (unbounded traversal depth). For each, check that an explicit limit exists (maximum file size, query row limit, download size limit, search result cap, command execution timeout, recursion depth) and that the limit is documented in the tool description.
Pass criteria: At least 90% of operations with unbounded potential have explicit limits (max file size, query row limit, result cap, recursion depth). Limits are reasonable defaults and are documented in tool descriptions. Exceeding a limit returns an informative error, not a crash.
Fail criteria: File read tools have no size limit (will OOM on large files), query tools have no row limit (will return millions of rows), or search tools have no result cap.
Skip (N/A) when: All tools are bounded by nature (e.g., they only return computed values, not data sets). All checks skip when no MCP server is detected.
Cross-reference: For timeout handling, see timeout-handling.
Detail on fail: "Tool 'read_file' reads entire file into memory with no size check — a 5GB log file will crash the server with OOM" or "Tool 'search' returns all matches with no limit — searching for common patterns could return millions of results"
Remediation: Add limits to potentially expensive operations:
// src/tools/read-file.ts — bounded resource access
const MAX_FILE_SIZE = 10 * 1024 * 1024 // 10 MB
const MAX_SEARCH_RESULTS = 1000 // cap applied to search tools before returning matches

server.tool('read_file', 'Read a file (max 10 MB)', ..., async ({ path }) => {
  const stat = await fs.stat(path)
  if (stat.size > MAX_FILE_SIZE) {
    return {
      content: [{ type: 'text', text: `File is ${(stat.size / 1024 / 1024).toFixed(1)} MB — exceeds 10 MB limit. Use read_file_range for large files.` }],
      isError: true,
    }
  }
  const content = await fs.readFile(path, 'utf-8')
  return { content: [{ type: 'text', text: content }] }
})
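The same pattern applies to query tools: clamp any caller-supplied row limit server-side before the query runs, instead of trusting whatever the model asked for. A sketch, assuming a helper named `effectiveLimit` and a 1,000-row cap (both illustrative):

```typescript
// Illustrative server-side cap on rows returned by any query tool.
const MAX_ROWS = 1000

// Clamp a caller-supplied limit: missing, non-numeric, or oversized
// requests all fall back to the cap, so no query returns unbounded rows.
export function effectiveLimit(requested?: number): number {
  if (requested === undefined || !Number.isFinite(requested) || requested <= 0) {
    return MAX_ROWS
  }
  return Math.min(Math.floor(requested), MAX_ROWS)
}
```

A query tool would then bind `effectiveLimit(args.limit)` as the LIMIT parameter of the prepared statement rather than interpolating the raw request.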