Sometimes, we need to physically erase data blocks on disks. In the era of hard disk drives (HDDs), this was relatively straightforward: you could overwrite the entire disk with zeros or random bytes multiple times. To securely delete a specific file, tools like shred would instruct the operating system (OS) to overwrite the same logical block address (LBA) repeatedly. Since LBAs on HDDs typically had a fixed mapping to physical block addresses (PBAs), overwriting the data was an effective way to ensure it was unrecoverable.
However, this approach doesn't work with solid-state drives (SSDs). SSDs use NAND flash memory, which has limited Program/Erase (P/E) cycles. Each erase operation slightly wears out the memory cells, and excessive erasing can render blocks unusable. To mitigate this, SSD manufacturers implement wear-leveling algorithms that distribute write and erase operations across the device. A common approach is log-structured writing, which always writes incoming data to fresh blocks instead of overwriting old ones. SSDs also reserve overprovisioned blocks for wear leveling and garbage collection. As a result, the LBA-to-PBA mapping changes constantly: overwriting a file at the OS level writes to new physical blocks while the old data may linger elsewhere, so overwriting is an unreliable way to securely delete data on SSDs.
Moreover, for those seeking to benchmark SSD performance accurately, it's essential to restore the drive to its factory state. Over time, SSDs accumulate residual data and wear, which can skew benchmark results: you don't want your SSD to run garbage collection on the fly while a benchmark is in progress. By sanitizing the SSD, you can reset it to a "fresh out of box" (FOB) condition, ensuring that performance tests reflect the drive's true capabilities without interference from prior usage patterns.
Fortunately, if you're using an NVMe SSD, there's a way to ensure complete data erasure and restore factory conditions. The NVMe specification includes sanitize commands, which allow you to instruct the drive to perform a low-level block erase at the device level. These commands can effectively remove all user data, including residual data in caches and overprovisioned areas. In the next section, I’ll show you how to use these commands on a Linux system.
Experiment
We use Ubuntu 20.04 LTS as our experimental environment. First, install nvme-cli if it's not already present on your system:
sudo apt install nvme-cli
Next, list all NVMe SSDs connected to the machine:
sudo nvme list
Output:
Node SN Model Namespace Usage Format FW Rev
---------------- ------------ ------------ --------- -------------------------- ---------------- --------
/dev/nvme0n1 [REDACTED] [REDACTED] 1 0.00 B / 960.20 GB 512 B + 0 B [REDACTED]
/dev/nvme1n1 [REDACTED] [REDACTED] 1 67.11 MB / 960.20 GB 512 B + 0 B [REDACTED]
/dev/nvme2n1 [REDACTED] [REDACTED] 1 67.11 MB / 960.20 GB 512 B + 0 B [REDACTED]
/dev/nvme3n1 [REDACTED] [REDACTED] 1 67.11 MB / 960.20 GB 512 B + 0 B [REDACTED]
/dev/nvme4n1 [REDACTED] [REDACTED] 1 67.11 MB / 960.20 GB 512 B + 0 B [REDACTED]
/dev/nvme5n1 [REDACTED] [REDACTED] 1 67.11 MB / 960.20 GB 512 B + 0 B [REDACTED]
/dev/nvme6n1 [REDACTED] [REDACTED] 1 67.11 MB / 960.20 GB 512 B + 0 B [REDACTED]
/dev/nvme7n1 [REDACTED] [REDACTED] 1 67.11 MB / 960.20 GB 512 B + 0 B [REDACTED]
Select the target SSD for sanitization. Before proceeding, check whether it supports the sanitize operation:
sudo nvme id-ctrl -H /dev/nvme1n1 | grep -i "sanitize"
Output:
[31:30] : 0 Additional media modification after sanitize operation completes successfully is not defined
[29:29] : 0 No-Deallocate After Sanitize bit in Sanitize command Supported
[2:2] : 0 Overwrite Sanitize Operation Not Supported
[1:1] : 0x1 Block Erase Sanitize Operation Supported
[0:0] : 0x1 Crypto Erase Sanitize Operation Supported
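These human-readable lines are nvme-cli's decoding of the controller's SANICAP field. If you ever need to interpret the raw value yourself (e.g., when scripting against nvme id-ctrl without the -H flag), the low bits can be decoded as follows. This is a minimal sketch assuming only the bit positions shown in the output above:

```python
# Decode the low bits of the NVMe SANICAP field (Identify Controller).
# Bit layout, matching the -H output above:
#   bit 0: Crypto Erase Sanitize Operation Supported
#   bit 1: Block Erase Sanitize Operation Supported
#   bit 2: Overwrite Sanitize Operation Supported
def decode_sanicap(sanicap: int) -> dict:
    return {
        "crypto_erase": bool(sanicap & 0x1),
        "block_erase": bool(sanicap & 0x2),
        "overwrite": bool(sanicap & 0x4),
    }

# The drive above supports crypto erase and block erase but not overwrite,
# i.e. the low SANICAP bits are 011b.
print(decode_sanicap(0b011))
```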
As seen above, this SSD supports both block erase and crypto erase. We'll use block erase, as it resets all user data areas to the factory-erased state. By contrast, crypto erase simply deletes the encryption key, making the existing data inaccessible but not physically erased.
Before executing the command, let's inspect the sanitize log to check the current status. The -H flag makes the output human-readable:
sudo nvme sanitize-log -H /dev/nvme1n1
Output:
Sanitize Progress (SPROG) : 65535
Sanitize Status (SSTAT) : 0x1
[2:0] Most Recent Sanitize Command Completed Successfully.
[7:3] Number of completed passes if most recent operation was overwrite: 0
[8] Global Data Erased cleared: a NS LB in the NVM subsystem has been written to or a PMR in the NVM subsystem has been enabled
Sanitize Command Dword 10 Information (SCDW10) : 0x2
Estimated Time For Overwrite : 0xffffffff (No time period reported)
Estimated Time For Block Erase : 0xffffffff (No time period reported)
Estimated Time For Crypto Erase : 0xffffffff (No time period reported)
Estimated Time For Overwrite (No-Deallocate) : 0
Estimated Time For Block Erase (No-Deallocate) : 0
Estimated Time For Crypto Erase (No-Deallocate): 0
From this, we can see that no sanitize operation is currently in progress (SPROG = 65535) and the last sanitize completed successfully (SSTAT[2:0] = 001b). This matches the NVMe Base Specification:
Sanitize Progress (SPROG): This field indicates the fraction complete of the:
• sanitize processing state (i.e., the Restricted Processing state or the Unrestricted Processing
state); or
• Post-Verification Deallocation state, if the Post-Verification Deallocation state is entered as
part of the sanitize operation.
The value is the numerator of the fraction complete that has 65,536 (10000h) as its denominator. This
value shall be set to FFFFh if the Sanitize Operation Status (SOS) field is set to a value other than 010b (i.e., Sanitizing) or if the sanitization target is in the Media Verification state...

Now, let's issue the block erase sanitize command (-a 2):

sudo nvme sanitize -a 2 /dev/nvme1n1
This operation is asynchronous, so the command returns immediately. To monitor progress, re-check the sanitize log:
sudo nvme sanitize-log -H /dev/nvme1n1
If we run it multiple times, we will see SPROG increase gradually:

Sanitize Progress (SPROG) : 18952 (28.918457%)
Sanitize Status (SSTAT) : 0x2
[2:0] Sanitize in Progress.
[7:3] Number of completed passes if most recent operation was overwrite: 0
[8] Global Data Erased cleared: a NS LB in the NVM subsystem has been written to or a PMR in the NVM subsystem has been enabled
Sanitize Command Dword 10 Information (SCDW10) : 0x2
Estimated Time For Overwrite : 0xffffffff (No time period reported)
Estimated Time For Block Erase : 0xffffffff (No time period reported)
Estimated Time For Crypto Erase : 0xffffffff (No time period reported)
Estimated Time For Overwrite (No-Deallocate) : 0
Estimated Time For Block Erase (No-Deallocate) : 0
Estimated Time For Crypto Erase (No-Deallocate): 0

Sanitize Progress (SPROG) : 46583 (71.080017%)
Sanitize Status (SSTAT) : 0x2
[2:0] Sanitize in Progress.
...

Sanitize Progress (SPROG) : 65181 (99.458313%)
Sanitize Status (SSTAT) : 0x2
[2:0] Sanitize in Progress.
...

Sanitize Progress (SPROG) : 65535
Sanitize Status (SSTAT) : 0x101
[2:0] Most Recent Sanitize Command Completed Successfully.
...
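The percentages nvme-cli prints follow directly from the spec's definition: SPROG is the numerator of a fraction with 65,536 as its denominator. As a sketch, here is how you could parse and convert it yourself in Python (the regex assumes the log line format shown above):

```python
import re

# SPROG is the numerator of a fraction with denominator 65536,
# so percent complete = SPROG / 65536 * 100 (FFFFh means "not sanitizing").
def sprog_percent(sprog: int) -> float:
    return sprog / 65536 * 100

# Extract the SPROG value from `nvme sanitize-log` text output.
def parse_sprog(log_text: str) -> int:
    m = re.search(r"Sanitize Progress\s*\(SPROG\)\s*:\s*(\d+)", log_text)
    return int(m.group(1))

sample = "Sanitize Progress (SPROG) : 18952 (28.918457%)"
sprog = parse_sprog(sample)
print(f"{sprog_percent(sprog):.6f}%")  # 28.918457%, matching nvme-cli
```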
Surprisingly, a full sanitize via block erase may complete in just a few seconds, even on a 960 GB SSD. This is much faster than writing zeroes to the entire disk with dd, which is limited by PCIe bandwidth. That's because sanitize commands are executed internally by the SSD controller: instead of transferring data over the host interface, the controller issues low-level erase commands to the NAND flash directly. This enables parallel erasure across multiple NAND channels and bypasses file system and OS overhead.

Once complete, you can confirm that the SSD has been wiped by reading its contents:
sudo dd if=/dev/nvme1n1 status=progress | hexdump
Output:
0000000 0000 0000 0000 0000 0000 0000 0000 0000
*
10496885248 bytes (10 GB, 9.8 GiB) copied, 17 s, 617 MB/s^C
21183835+0 records in
21183834+0 records out
10846123008 bytes (11 GB, 10 GiB) copied, 17.9428 s, 604 MB/s
The * in the output indicates repeated zeroes. Reading the entire disk takes a long time, so I terminated the operation early.

After a sanitize operation such as block erase, you might expect raw reads to return 0xFF values, since NAND flash cells are physically reset to all 1s when erased. However, if you use a tool like dd combined with hexdump, you'll likely see all zeroes (0x00) instead. This is because you're not reading the physical state of the NAND directly; rather, you're accessing logical block addresses (LBAs) through the SSD's controller. When a sanitize operation completes, the SSD's firmware invalidates all existing LBA-to-PBA mappings, and a read from an unmapped LBA returns a fixed pattern chosen by the drive (here, zero-filled data), so no residual information is exposed. This ensures both security and consistency, even though the underlying flash cells sit in the erased (all-1s) state. The behavior can be verified against the NVMe Command Set Specification:
Deallocation Read Behavior (DRB): This field indicates the deallocated logical block read behavior. For a logical block that is deallocated, this field indicates the values read from that deallocated logical block and its metadata (excluding protection information)
...
3.3.3.2.1 Deallocated or Unwritten Logical Blocks
A logical block that has never been written to, or which has been deallocated using the Dataset Management command, the Write Zeroes command or the Sanitize command is called a deallocated or unwritten logical block.
Using the Error Recovery feature (refer to section 4.1.3.2), host software may select the behavior of the controller when reading deallocated or unwritten blocks. The controller shall abort Copy, Read, Verify, or Compare commands that include deallocated or unwritten blocks with a status of Deallocated or Unwritten Logical Block if that error has been enabled using the DULBE bit in the Error Recovery feature. If the Deallocated or Unwritten Logical error is not enabled, the values read from a deallocated or unwritten block and its metadata (excluding protection information) shall be:
• all bytes cleared to 0h if the Deallocation Read Behavior (DRB) field in the DLFEAT field is set to 001b;
• all bytes set to FFh if the DRB field is set to 010b; or
• either all bytes cleared to 0h or all bytes set to FFh if the DRB field is cleared to 000b.
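The mapping-invalidation behavior described above can be illustrated with a toy model. This is purely illustrative (real FTL firmware is far more involved): sanitize clears the logical-to-physical table, and unmapped LBAs read back as zeroes regardless of the physical NAND state.

```python
# Toy model of the controller behavior: the FTL keeps an LBA -> data
# mapping; sanitize invalidates the mapping, and reads of unmapped LBAs
# return zero-filled blocks (the DLFEAT[2:0] = 001b behavior), even
# though physical NAND pages hold all 1s (0xFF) after an erase.
BLOCK = 512
ERASED = b"\xff" * BLOCK  # physical state of an erased NAND page (never seen by the host)

class ToyFTL:
    def __init__(self):
        self.mapping = {}          # LBA -> block data

    def write(self, lba: int, data: bytes):
        self.mapping[lba] = data

    def sanitize(self):
        self.mapping.clear()       # invalidate all LBA-to-PBA mappings

    def read(self, lba: int) -> bytes:
        # Unmapped LBAs read as zeroes, regardless of NAND cell state.
        return self.mapping.get(lba, b"\x00" * BLOCK)

ftl = ToyFTL()
ftl.write(0, b"A" * BLOCK)
ftl.sanitize()
print(ftl.read(0) == b"\x00" * BLOCK)  # True: the host sees zeroes, not 0xFF
```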
Let’s confirm this behavior through the NVMe identify namespace command:
sudo nvme id-ns -H /dev/nvme1n1
Output:
...
nsfeat : 0x1a
[4:4] : 0x1 NPWG, NPWA, NPDG, NPDA, and NOWS are Supported
[2:2] : 0 Deallocated or Unwritten Logical Block error Not Supported
[1:1] : 0x1 Namespace uses NAWUN, NAWUPF, and NACWU
[0:0] : 0 Thin Provisioning Not Supported
...
dlfeat : 9
[4:4] : 0 Guard Field of Deallocated Logical Blocks is set to 0xFFFF
[3:3] : 0x1 Deallocate Bit in the Write Zeroes Command is Supported
[2:0] : 0x1 Bytes Read From a Deallocated Logical Block and its Metadata are 0x00
According to NVMe Command Set Spec section 3.3.3.2.1, since the Deallocated or Unwritten Logical Block error (DULBE, NSFEAT[2]) is not supported and DLFEAT[2:0] = 001b, reads from deallocated or unwritten LBAs return all 0x00 bytes. This confirms the observed behavior.
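As a final illustration, the dlfeat value of 9 (01001b) reported above can be decoded by hand. This sketch covers only the bits discussed here:

```python
# Decode the DLFEAT byte reported by `nvme id-ns` (here: dlfeat = 9).
# Bits [2:0] are the Deallocation Read Behavior (DRB):
#   000b -> unreported (either 0x00 or 0xFF), 001b -> 0x00, 010b -> 0xFF
# Bit 3 indicates the Deallocate bit in Write Zeroes is supported.
def decode_dlfeat(dlfeat: int):
    drb = dlfeat & 0b111
    read_value = {0b000: "unreported", 0b001: "0x00", 0b010: "0xFF"}.get(drb)
    write_zeroes_dealloc = bool(dlfeat & 0b1000)
    return read_value, write_zeroes_dealloc

print(decode_dlfeat(9))  # ('0x00', True): deallocated blocks read as 0x00
```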
Final Thoughts
At this point, we've successfully performed a sanitize operation on an NVMe SSD. The nvme-cli utility is a powerful and essential tool for system administrators and developers working with NVMe drives. It provides fine-grained control over device-level features and is well worth learning for anyone managing SSDs at scale.
References
SSDs: secure erase or sanitize? (https://www.microcontrollertips.com/ssds-secure-erase-sanitize-faq/)
NVMe Sanitize (https://tinyapps.org/docs/nvme-sanitize.html)
NVMe Base Spec (https://nvmexpress.org/wp-content/uploads/NVM-Express-Base-Specification-Revision-2.2-2025.03.11-Ratified.pdf)
NVMe Command Set Spec (https://nvmexpress.org/wp-content/uploads/NVM-Express-NVM-Command-Set-Specification-Revision-1.1-2024.08.05-Ratified.pdf)