| Method | Deletes | Returns | Use When |
|---|---|---|---|
| deleteOne() | First matching document | { acknowledged, deletedCount } | Remove one record by filter |
| deleteMany() | All matching documents | { acknowledged, deletedCount } | Batch removal by condition |
| findOneAndDelete() | First matching document | The deleted document (or null) | Need the document data after deleting |
| drop() | Entire collection | true/false | Wipe all docs + indexes + metadata |
| dropDatabase() | Entire database | { ok: 1 } | Full cleanup — use with extreme caution |
// Signature
db.collection.deleteOne(filter, options)
// Delete by _id (most precise — always matches exactly one)
db.users.deleteOne({ _id: ObjectId("507f1f77bcf86cd799439011") })
// → { acknowledged: true, deletedCount: 1 }
// Delete by field value (removes first match — natural order)
db.users.deleteOne({ email: "spam@example.com" })
// → { acknowledged: true, deletedCount: 1 }
// No match — no error, just deletedCount: 0
db.users.deleteOne({ _id: ObjectId("000000000000000000000000") })
// → { acknowledged: true, deletedCount: 0 }
deleteOne() Returns
| Field | Meaning |
|---|---|
| acknowledged | true if the write concern was satisfied |
| deletedCount | 0 (no match) or 1 (deleted) — never more than 1 for deleteOne() |
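Because deletedCount silently stays 0 on a miss, callers often wrap the result in a guard. A plain-JavaScript sketch — the helper name ensureDeleted and its error messages are made up; the argument shape is the one deleteOne() returns:

```javascript
// Hypothetical guard: treat deletedCount 0 as an application-level error
function ensureDeleted(result) {
  if (!result.acknowledged) throw new Error("write not acknowledged")
  if (result.deletedCount === 0) throw new Error("no document matched the filter")
  return result.deletedCount // always 1 for deleteOne()
}

// In real code the argument comes from db.users.deleteOne(...):
ensureDeleted({ acknowledged: true, deletedCount: 1 }) // → 1
```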
Delete by _id when possible — it uses the primary index, is always unique, and guarantees exactly one document is deleted.
// Signature
db.collection.deleteMany(filter, options)
// Delete all expired sessions
db.sessions.deleteMany({ expiresAt: { $lt: new Date() } })
// → { acknowledged: true, deletedCount: 47 }
// Delete all documents in a collection (NOT the same as drop!)
db.logs.deleteMany({})
// → Removes all docs but keeps collection + indexes + metadata intact
// Delete by type mismatch (fix bad data)
db.products.deleteMany({ price: { $type: "string" } })
deleteMany() vs drop()
| Aspect | deleteMany({}) | drop() |
|---|---|---|
| What's removed | All documents only | Documents + indexes + metadata + collection itself |
| Speed | Slower — removes docs one by one, updates indexes | Much faster — single filesystem operation |
| After operation | Empty collection with indexes still there | Collection no longer exists |
| Re-insert | Ready immediately — indexes are already in place | Must recreate collection and all indexes |
| Use when | Keeping schema/indexes; partial deletion common | Full wipe needed; indexes will change anyway |
deleteMany({}) with an empty filter deletes every document in the collection. Always double-check your filter in production. Consider testing with find({}) first to confirm what will be deleted.
findOneAndDelete()
Atomically finds, deletes, and returns the deleted document. Unlike deleteOne(), you get the document's data back — essential for job queues or audit logging.
// Signature
db.collection.findOneAndDelete(filter, options)
// options: { sort, projection, maxTimeMS, collation, hint }
// Basic — returns the deleted document (pre-deletion state)
const deleted = db.users.findOneAndDelete({ email: "bob@example.com" })
if (deleted) {
  print(`Deleted user: ${deleted.name}`) // still have the data!
}
// No match → returns null (not an error)
// Priority job queue — pop highest-priority oldest job atomically
const job = db.jobs.findOneAndDelete(
  { status: "ready" },
  { sort: { priority: -1, createdAt: 1 } } // highest prio, oldest first
)
// Worker gets job object AND it's removed — no duplicate processing
// With projection — only return specific fields from deleted doc
db.sessions.findOneAndDelete(
  { expired: true },
  { projection: { sessionToken: 1, userId: 1, _id: 0 } }
)
// Returns only sessionToken and userId from the deleted session
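To make the sort option concrete, here is a plain-JavaScript model of how findOneAndDelete() picks a document. This is illustration only — findOneAndDeleteSim is a made-up in-memory stand-in; the real operation runs atomically on the server:

```javascript
// Plain-JS model: filter, sort by the given spec, remove and return the first match
function findOneAndDeleteSim(docs, filter, sort) {
  const matches = docs.filter(d => Object.entries(filter).every(([k, v]) => d[k] === v))
  if (sort) {
    matches.sort((a, b) => {
      for (const [field, dir] of Object.entries(sort)) {
        if (a[field] < b[field]) return -dir
        if (a[field] > b[field]) return dir
      }
      return 0
    })
  }
  if (matches.length === 0) return null      // no match → null, not an error
  const picked = matches[0]
  docs.splice(docs.indexOf(picked), 1)       // "delete" it from the array
  return picked                              // the pre-deletion document
}

const jobs = [
  { _id: "a", status: "ready", priority: 1, createdAt: 100 },
  { _id: "b", status: "ready", priority: 5, createdAt: 300 },
  { _id: "c", status: "ready", priority: 5, createdAt: 200 },
]
const job = findOneAndDeleteSim(jobs, { status: "ready" }, { priority: -1, createdAt: 1 })
// highest priority (5), then oldest (createdAt 200) → picks "c"; jobs now has 2 entries
```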
findOneAndDelete() vs deleteOne()
| Aspect | deleteOne() | findOneAndDelete() |
|---|---|---|
| Returns | { acknowledged, deletedCount } | The deleted document (or null) |
| You get data back | No | Yes — before deletion |
| Performance | Faster (lighter) | Slightly slower (must return doc) |
| Use when | Don't need deleted content | Need to act on deleted content |
| Sort option | No (before MongoDB 8.0) | Yes — controls which doc is deleted |
Always check if (result) before accessing properties. findOneAndDelete() returns null on no match — accessing result.name on null throws a TypeError ("Cannot read properties of null").
drop() and dropDatabase()
// drop() — removes entire collection (docs + indexes + metadata)
db.tempLogs.drop() // → true (success) or false (collection didn't exist)
// dropDatabase() — removes the ENTIRE current database
use myDatabase
db.dropDatabase() // → { ok: 1 } — all collections in myDatabase are gone
Never run dropDatabase() on admin or local — this corrupts replica set configuration and loses all user credentials. Reserved databases: admin, local, config.
Is drop() Atomic?
Yes — drop() is atomic. MongoDB takes an exclusive write lock on the collection, so no other reads or writes can interleave. The entire operation (documents + indexes + metadata) is a single entry in the oplog — no other client ever sees a "half-dropped" collection.
| drop() side effect | Details |
|---|---|
| Indexes deleted | All indexes are removed permanently — unlike deleteMany() which keeps them |
| Change Streams invalidated | Open Change Streams receive an invalidate event and are closed |
| Aborts index builds | MongoDB 4.4+ automatically aborts any in-progress index builds |
| Not transaction-safe | You cannot run drop() inside a multi-document transaction |
TTL Index — Automatic Document Expiry
Instead of manually running deleteMany() for cleanup, use a TTL (Time To Live) index to auto-expire documents.
// Create TTL index: expire sessions 1 hour after createdAt
db.sessions.createIndex(
  { createdAt: 1 },
  { expireAfterSeconds: 3600 } // 3600s = 1 hour
)
// MongoDB deletes expired documents automatically every ~60 seconds
// The field must be a BSON Date type — string dates won't work!
// TTL index use cases:
// - Session stores (expire after inactivity)
// - Password reset tokens (expire after 15 mins)
// - Cache documents (auto-evict stale data)
// - Log rotation (keep only last 30 days)
// Check existing TTL indexes
db.sessions.getIndexes()
// Look for: { "expireAfterSeconds": 3600 } in the index definition
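The TTL monitor's decision rule can be sketched in plain JavaScript. ttlPass is a made-up stand-in for one server-side sweep; the assumption shown is that a document expires once its indexed Date field is older than expireAfterSeconds:

```javascript
// Plain-JS model of one TTL sweep: keep docs whose Date field is newer than the
// cutoff; non-Date values (e.g. string dates) are never considered expired.
function ttlPass(docs, field, expireAfterSeconds, now) {
  const cutoff = now - expireAfterSeconds * 1000
  return docs.filter(d =>
    !(d[field] instanceof Date) ||       // string/missing dates → kept forever
    d[field].getTime() > cutoff
  )
}

const now = Date.parse("2024-01-01T12:00:00Z")
const sessions = [
  { _id: 1, createdAt: new Date(now - 2 * 3600 * 1000) },  // 2h old → expired
  { _id: 2, createdAt: new Date(now - 10 * 60 * 1000) },   // 10m old → kept
  { _id: 3, createdAt: "2023-12-31" },                     // string date → kept!
]
const remaining = ttlPass(sessions, "createdAt", 3600, now)
// remaining: _id 2 and _id 3 — the string-dated doc never expires
```

This also demonstrates the pitfall above: a string date is invisible to the TTL mechanism.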
No Referential Integrity — Orphaned References
// Deleting a user does NOT delete their orders
db.users.deleteOne({ _id: ObjectId("user1") })
// Orders collection still has references to deleted user:
// { _id: "ord1", userId: ObjectId("user1"), total: 999 } ← orphaned!
// MongoDB won't warn you. Application must handle cascading deletes:
function deleteUserAndOrders(userId) {
  db.orders.deleteMany({ userId: ObjectId(userId) }) // delete orders first
  db.users.deleteOne({ _id: ObjectId(userId) })      // then delete user
  // OR: use a multi-document transaction for atomicity
}
CASCADE DELETE STRATEGIES
Since MongoDB has no native ON DELETE CASCADE, you must implement cascade logic yourself. Choose a strategy based on your consistency and scale requirements:
| Strategy | Atomicity | Latency | Best For | Consistency |
|---|---|---|---|---|
| Transactions | Full (All or Nothing) | Higher | Small-medium datasets, critical integrity | Strong |
| Mongoose Hooks | None (unless wrapped) | Medium | ODM-driven apps | Strong / Manual |
| Atlas Triggers | None | Low (Async) | Serverless / zero-ops | Eventual |
| Queue / Worker | None | Lowest (Async) | Millions of child records | Eventual |
Strategy 1 — Multi-Document Transaction (Atomic & Synchronous)
The closest equivalent to SQL's CASCADE DELETE. Everything is deleted or nothing is.
// Atomically delete user + all their orders
const session = db.getMongo().startSession()
const sdb = session.getDatabase("mydb")   // run all operations through the session
session.startTransaction()
try {
  sdb.orders.deleteMany({ userId: ObjectId("user1") })
  sdb.users.deleteOne({ _id: ObjectId("user1") })
  session.commitTransaction()
  // Both deletes succeed or both are rolled back
} catch (e) {
  session.abortTransaction()
} finally {
  session.endSession()
}
Strategy 2 — Application Middleware (Mongoose pre-hook)
Define a pre hook on your schema to auto-delete children when a parent is removed.
// Mongoose schema pre-hook — runs before findOneAndDelete()
UserSchema.pre('findOneAndDelete', async function() {
  const userId = this.getFilter()._id
  await Order.deleteMany({ userId })
  // WARNING: only fires through Mongoose — shell deletes bypass this!
})
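The bypass problem can be demonstrated without Mongoose at all. In this plain-JavaScript sketch (all names made up), cascade logic lives in a wrapper function, so any code path that skips the wrapper silently creates orphans:

```javascript
// In-memory stand-ins for two collections
const users = new Map([["u1", { name: "Bob" }], ["u2", { name: "Ann" }]])
let orders = [
  { id: "o1", userId: "u1" },
  { id: "o2", userId: "u2" },
]

// Like db.users.deleteOne() in the shell — no hook runs
function rawDeleteUser(id) {
  users.delete(id)
}

// Like a Mongoose pre('findOneAndDelete') hook — cascade runs first
function deleteUserWithHook(id) {
  orders = orders.filter(o => o.userId !== id)  // the "hook" body
  users.delete(id)
}

rawDeleteUser("u1")       // u1's orders are now orphaned
deleteUserWithHook("u2")  // u2's orders are cleaned up
// orders: [{ id: "o1", userId: "u1" }] — the orphan survives
```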
Direct database operations (e.g. shell db.users.deleteOne()) completely bypass hooks — orphans will be created silently.
Strategy 3 — Background Worker / Queue (High Scale)
For massive datasets, decouple the cascade. Push the deleted parent ID to a queue (Redis, RabbitMQ) and let a worker batch-delete children asynchronously.
// Step 1: delete the parent, push ID to queue
db.users.deleteOne({ _id: ObjectId("user1") })
queue.push({ action: "cascade_delete_orders", userId: "user1" })
// Step 2: background worker processes the queue
// worker.js
while (true) {
  const job = queue.pop()
  if (job.action === "cascade_delete_orders") {
    db.orders.deleteMany({ userId: ObjectId(job.userId) })
  }
}
// Best for: single parent with millions of child records
// Tradeoff: eventual consistency — orphans exist briefly between steps
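A minimal in-memory model of the two steps (plain JavaScript; an array stands in for Redis/RabbitMQ) makes the orphan window visible:

```javascript
// In-memory stand-ins for the database and the message queue
const users = new Map([["u1", {}]])
let orders = [{ id: "o1", userId: "u1" }, { id: "o2", userId: "u1" }]
const queue = []

// Step 1 — synchronous part: delete the parent, enqueue the cascade job
users.delete("u1")
queue.push({ action: "cascade_delete_orders", userId: "u1" })
const orphanWindow = orders.some(o => o.userId === "u1") // true — orphans exist briefly

// Step 2 — the worker drains the queue (asynchronously in real life)
while (queue.length > 0) {
  const job = queue.shift()
  if (job.action === "cascade_delete_orders") {
    orders = orders.filter(o => o.userId !== job.userId)
  }
}
// after the worker runs, no orphans remain
```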
null Filter Matches More Than Expected
// { status: null } matches BOTH explicitly null AND missing status field
db.orders.deleteMany({ status: null })
// Deletes orders where status is null AND orders that have no status field!
// To delete ONLY docs where status is explicitly null:
db.orders.deleteMany({ status: { $eq: null, $exists: true } })
// To delete ONLY docs where status field is missing:
db.orders.deleteMany({ status: { $exists: false } })
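The three matching rules can be modeled as plain-JavaScript predicates (names are made up; this mirrors the filter semantics above for a top-level field):

```javascript
const hasStatus = d => Object.prototype.hasOwnProperty.call(d, "status")

const matchesNull         = d => !hasStatus(d) || d.status === null  // { status: null }
const matchesExplicitNull = d => hasStatus(d) && d.status === null   // { $eq: null, $exists: true }
const matchesMissing      = d => !hasStatus(d)                       // { $exists: false }

const docs = [
  { _id: 1, status: null },        // explicitly null
  { _id: 2 },                      // field missing
  { _id: 3, status: "shipped" },   // present, non-null
]
docs.filter(matchesNull).map(d => d._id)         // [1, 2] — both!
docs.filter(matchesExplicitNull).map(d => d._id) // [1]
docs.filter(matchesMissing).map(d => d._id)      // [2]
```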
Type Sensitivity — Won't Match Wrong Type
// If price stored as String "free" and you filter with number:
db.products.deleteMany({ price: { $lt: 0 } })
// Won't delete string "free" — types don't match across comparison operators
// Find and delete type mismatches explicitly:
db.products.deleteMany({ price: { $type: "string" } })
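The same bracketing behavior, modeled in plain JavaScript (the predicates are made-up stand-ins for the server-side matcher):

```javascript
// A numeric $lt only considers numeric values, so string prices are invisible to it
const ltNumber = (value, bound) => typeof value === "number" && value < bound
const isString = value => typeof value === "string"

const products = [
  { _id: 1, price: -5 },      // number → matched by the numeric $lt
  { _id: 2, price: "free" },  // string → NOT matched by numeric comparison
  { _id: 3, price: 20 },
]
products.filter(p => ltNumber(p.price, 0)).map(p => p._id) // [1]
products.filter(p => isString(p.price)).map(p => p._id)    // [2] — the $type: "string" cleanup
```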
deleteMany() is Not Atomic — No Rollback
// If deleteMany() fails midway, partial deletes are permanent
// Documents deleted before the error are GONE — no automatic rollback
// For all-or-nothing mass deletion, use a transaction:
const session = db.getMongo().startSession()
const sdb = session.getDatabase("mydb")   // run all operations through the session
session.startTransaction()
try {
  sdb.orders.deleteMany({ status: "cancelled" })
  sdb.audit.insertOne({ action: "purge_cancelled", timestamp: new Date() })
  session.commitTransaction()
} catch (e) {
  session.abortTransaction() // rolls back everything
} finally {
  session.endSession()
}
Soft Delete Pattern — Don't Delete, Mark Instead
// Hard delete — permanent, no undo
db.users.deleteOne({ _id: userId })
// Soft delete — keep data, mark as deleted (recoverable)
db.users.updateOne(
  { _id: userId },
  {
    $set: { deletedAt: new Date(), isDeleted: true },
    $unset: { sessionToken: "" } // invalidate session
  }
)
// Active user queries always include: { isDeleted: { $ne: true } }
// Benefits: audit trail, recovery possible, foreign keys remain valid
Prefer soft deletes (isDeleted: true) for user-facing data, and reserve hard deletes for truly temporary or expired data.
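The active-user filter's semantics can be checked in plain JavaScript (isActive is a made-up predicate mirroring { isDeleted: { $ne: true } }):

```javascript
// $ne: true matches docs where the field is absent, false, or anything but true —
// so never-soft-deleted docs (no isDeleted field at all) still count as active
const isActive = u => u.isDeleted !== true

const users = [
  { _id: 1 },                                         // field missing → active
  { _id: 2, isDeleted: false },                       // explicitly active
  { _id: 3, isDeleted: true, deletedAt: new Date() }, // soft-deleted → hidden
]
users.filter(isActive).map(u => u._id) // [1, 2]
```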