Buy a drive from a reputable brand with no known obvious firmware bugs (Sandstorm, anyone?)
a) why the fuck would they go to that effort for a filthy commoner like yourself, and b) what are the chances that 0.01% of recoverable data contains anything useful!?!
Because of wear management and the way flash storage works, overwriting disks is even more useless for wiping data than it was on hard drives. Even on spinning rust, plenty of data survived in relocated sectors; on SSDs, blocks get relocated and copied as a matter of course, without any actual damage to the disk.
You can overwrite a file on your SSD with random numbers supposedly filling the entire file’s space, but under the hood the SSD could be like “erasing this block would wear down the disk too much, let’s just copy the block somewhere else and map the data offset to this new set of cells”. Modern SSDs also carry extra, over-provisioned capacity so that wear leveling can be done without eating into your usable space, and cells the SSD deems too unstable will be copied and unmapped. Their data will still be there, but it won’t be accessible to the computer, even if you overwrite the entire drive.
If you want to erase data, physically destroy the disk. If you want to avoid ever having to erase data, encrypt it (it’s on by default on recent Windows and macOS systems, and an install-time option on most Linux distros) so you only need to destroy the encryption key to make the data unreadable.
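To make the “destroy the key” part concrete: on Linux with LUKS, wiping the keyslots is enough to render the ciphertext useless. A minimal sketch, assuming the drive is LUKS-encrypted and shows up as /dev/sdb (hypothetical name, double-check with lsblk first):

# Confirm it really is a LUKS device and see which keyslots are in use
cryptsetup luksDump /dev/sdb
# Wipe every keyslot (cryptsetup asks for confirmation); without a header backup, the data is gone for good
cryptsetup luksErase /dev/sdb
# Belt and braces: overwrite the LUKS header area itself (LUKS2 headers are ~16 MiB)
dd if=/dev/urandom of=/dev/sdb bs=1M count=32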
If you want to keep/sell the drive…
Is that the best strategy? Or is anything outside of 2 and 3 redundant?
You can’t fill the drive. The controller decides when to use its spare, over-provisioned blocks; that happens at the hardware level, and only the Secure Erase command will clear them.
Right, I read some more of the comments and realized that’s what some of the “unreported space” is used for. Makes sense, thanks!
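For reference, the Secure Erase command mentioned above can be issued from a running Linux system as well as from vendor tools. A rough sketch for a SATA drive, assuming it appears as /dev/sdb (hypothetical name) and isn’t in the “frozen” security state:

# The drive must not be frozen; suspending and resuming the machine sometimes unfreezes it
hdparm -I /dev/sdb | grep -i frozen
# Set a temporary security password, then ask the drive to erase itself
hdparm --user-master u --security-set-pass p /dev/sdb
hdparm --user-master u --security-erase p /dev/sdb

For NVMe drives the equivalent goes through nvme-cli (there’s a sketch of that further down).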
You fill up the usable space. Or rather, the visible space. No one will disassemble the device and read from the raw storage.
Then why do that when you can do a secure erase in seconds?
“Best” depends on your needs.
I’m not sure if filling up the entire drive is necessary. Nothing wrong with doing a
dd if=/dev/urandom of=/dev/nvme1n1
to randomise the drive itself, but I don’t think most people are affected by the kind of information you can derive from which sectors are/aren’t written to.
Writing zeroes to every bit is useless because of the automatic remapping; if you use decent encryption, it mostly just serves to wear the device down. There are only so many write+erase cycles each cell can go through before it breaks, so I try to avoid doing large writes on purpose. Try a secure erase from your UEFI GUI instead; good encryption removes the need for a full format anyway.
Personally, I let my drives fill up over time. I trust LUKS enough to handle the encryption, and I don’t think anyone who’s going to be buying this SSD off me is going to send it off to a forensic data lab to analyse what the average size of the files I worked on was. So, my personal approach:
That makes sense. Thank you!
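If you do go the full-overwrite route described a couple of comments up, a slightly more careful invocation is worth the extra characters (the device name is hypothetical; verify yours with lsblk before writing anything):

# Make absolutely sure you’re pointing at the right disk
lsblk -o NAME,SIZE,MODEL,MOUNTPOINT
# Bigger block size plus a progress readout; expect this to take a while on large drives
dd if=/dev/urandom of=/dev/nvme1n1 bs=1M status=progress conv=fsync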
Nobody is gonna bother doing advanced forensics on second-hand storage, digging into megabytes of reallocated sectors on the off chance of finding something financially exploitable. That’s a level of paranoia no data supports.
My example applies to storage devices which don’t default to encryption (most non-OS external storage). It’s analogous to changing your existing encrypted disk’s password to a random-ass, unrecoverable throwaway.
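In LUKS terms, that throwaway-password trick could look roughly like this (device name and key length are placeholders, not something prescribed in the thread):

# Generate a passphrase nobody will ever see and swap it in;
# luksChangeKey reads the new key from the file given as its second argument
head -c 64 /dev/urandom > /tmp/throwaway.key
cryptsetup luksChangeKey /dev/sdb /tmp/throwaway.key   # prompts for the current passphrase
# Destroy the only copy of the new key
shred -u /tmp/throwaway.key

Functionally it ends up much like wiping the keyslots outright with cryptsetup luksErase, which is the simpler option if the goal is just to make the data unrecoverable.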
When we’re talking SSDs, we’re not talking a few megabytes of relocated sectors. We’re talking anywhere between 4.5 and 560 GB of spare capacity, almost guaranteed to be used, especially if you start filling up the drive.
If you’re assuming nobody is going to dig through the SSD, save yourself some time and issue a secure erase/crypto erase command and let the firmware figure it out. It’s faster and more reliable. If you have TRIM enabled (should be on by default on most operating systems), you may even be able to get away with simply clearing the recycling bin.
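For completeness, the firmware-level options mentioned here map to a couple of concrete commands on Linux; the device names and mountpoint below are hypothetical, so check nvme list / lsblk first:

# NVMe: have the drive’s firmware erase itself (pick one)
nvme format /dev/nvme0n1 --ses=1   # user-data erase
nvme format /dev/nvme0n1 --ses=2   # cryptographic erase, if the drive supports it
# TRIM route: mark everything unused in a mounted filesystem...
fstrim -v /mnt/data
# ...or discard an entire, unmounted block device
blkdiscard /dev/sdb

Keep in mind TRIM only tells the controller the blocks are no longer in use; treat it as a convenience, not a guaranteed erase.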