
Using dd to securely erase a hard drive

linux unix securely erasing hard drive dd

8 replies to this topic

#1 68k

68k

    Neowinian Senior

  • Tech Issues Solved: 3
  • Joined: 20-January 10
  • Location: Australia

Posted 05 May 2013 - 11:58

I am getting ready to sell/give away some PCs. To securely erase their hard drives, I am using dd.

Below is what I have set to run (entered into a terminal):

dd if=/dev/zero bs=1024 of=/dev/sda && dd if=/dev/urandom bs=1024 of=/dev/sda && dd if=/dev/zero bs=1024 of=/dev/sda

I was expecting this to take about a day to complete. One machine (with a 640GB hard drive) has been going for a few hours now. Another machine (with a 200GB drive) seems to have completed (after about three to four hours), and displayed the following in the terminal:

dd: writing `/dev/sda': No space left on device
195360985+0 records in
195360984+0 records out
200049647616 bytes (200 GB) copied, 13789 s, 14.5 MB/s


I'm not sure if all three dd writes I specified took place, or if it got to the end of the hard disk after the first pass and quit. Could someone confirm?

I should have written a script instead of using "&&".

dd also doesn't display a progress bar. Should I be specifying the size of the hard disk as well? If so, how is that done? It might also fix the problem above.
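(A quick note on why the chain may have stopped after one pass: GNU dd exits with a non-zero status when it hits "No space left on device", and && only runs the next command on success. This can be demonstrated harmlessly on Linux with /dev/full, a pseudo-device that always reports a full disk, rather than touching a real drive:)

```shell
# /dev/full stands in for a disk that has run out of space: every write
# to it fails with ENOSPC, just like dd reaching the end of /dev/sda.
if dd if=/dev/zero of=/dev/full bs=1024 count=1 2>/dev/null; then
    echo "dd succeeded, an && chain would continue"
else
    echo "dd failed, an && chain stops here"
fi
```

So after the first full pass over the disk, the commands after the first && would never run.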


#2 n_K

n_K

    Neowinian Senior

  • Tech Issues Solved: 3
  • Joined: 19-March 06
  • Location: here.
  • OS: FreeDOS
  • Phone: Nokia 3315

Posted 05 May 2013 - 12:06

dd if=/dev/zero bs=1024 of=/dev/sda count=3
Do that.

Never mind, actually that'll only copy 3 KB of data.

Also, run ps ax | grep dd to find dd's PID, then kill -USR1 (the_PID_of_dd_here) will make it print its progress.
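(To illustrate, here's a harmless sketch that targets /dev/null instead of a real disk. It also shows status=progress, a built-in progress readout GNU dd gained later, in coreutils 8.24; the SIGUSR1 trick is the way to do it on older versions.)

```shell
# Progress demo against /dev/null, not a real drive.

# Option 1 (GNU coreutils 8.24+): dd reports its own progress.
dd if=/dev/zero of=/dev/null bs=1M count=100 status=progress

# Option 2 (older GNU dd): from another terminal, find the PID and
# send SIGUSR1; dd prints its byte count to stderr and keeps going.
# Note: BSD dd uses SIGINFO for this instead.
#   pgrep -x dd            # find the PID
#   kill -USR1 <PID>       # <PID> is the number pgrep printed
```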

#3 +BudMan

BudMan

    Neowinian Senior

  • Tech Issues Solved: 84
  • Joined: 04-July 02
  • Location: Schaumburg, IL
  • OS: Win7, Vista, 2k3, 2k8, XP, Linux, FreeBSD, OSX, etc. etc.

Posted 05 May 2013 - 14:20

Doing anything with random is just a waste of time. Writing zeros is more than secure enough.

simple
dd if=/dev/zero of=/dev/sda

is all that is needed. If you want to speed it up a bit more, use bs=1M
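(If you want to convince yourself the single zero pass does what it says, you can rehearse it against a small scratch file first. The temp file and the 16 MiB size here are just placeholders standing in for /dev/sda:)

```shell
# Rehearse a zero-fill pass on a 16 MiB scratch file instead of a disk.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=16 2>/dev/null
# Verify: delete every zero byte; nothing should be left over.
nonzero=$(tr -d '\0' < "$img" | wc -c)
echo "non-zero bytes remaining: $nonzero"
rm -f "$img"
```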

#4 yxz

yxz

    Neowinian Senior

  • Joined: 11-January 09

Posted 05 May 2013 - 14:33

doing anything with random is just a waste of time.. writing zeros is more than secure enough

simple
dd if=/dev/zero of=/dev/sda

is all that is needed, if you want to speed it up a bit more, use bs=1M

(Y)

#5 n_K

n_K

    Neowinian Senior

  • Tech Issues Solved: 3
  • Joined: 19-March 06
  • Location: here.
  • OS: FreeDOS
  • Phone: Nokia 3315

Posted 05 May 2013 - 15:26

Depends on the M/m thing: GNU dd on Linux wants an uppercase M for mebibytes, while the BSD dd on a Mac takes a lowercase m. I can never remember why it's such an inconsistent system.

Alternatively, there's always NAB (Nuke and Boot)

#6 +ChuckFinley

ChuckFinley

    Neowinian Senior

  • Joined: 14-May 03

Posted 06 May 2013 - 18:25

Depends on the M/m thing, mac requires either lower or upper case and linux requires the inverse case, can never remember which way round it is or why it's such a stupid system.

Alternatively, there's always NAB (Nuke and Boot)


NAB (Nuke and Boot), aka DBAN (Darik's Boot and Nuke), was brilliant, but isn't it abandonware? There was a bug in the latest version where it almost never worked on newer drives.

#7 OP 68k

68k

    Neowinian Senior

  • Tech Issues Solved: 3
  • Joined: 20-January 10
  • Location: Australia

Posted 07 May 2013 - 11:57

Thanks for all your replies. :)

I ended up writing a script (using && wasn't right: dd exits with a non-zero status when it hits the end of the device, so the later passes would never have run).
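For reference, a minimal version of such a script might look like the sketch below. It rehearses against a 4 MiB scratch file; on the real machines, TARGET would be the disk itself (e.g. /dev/sda) and the count=4 limits would be dropped so each pass runs to the end of the device.

```shell
#!/bin/sh
# Three-pass wipe, sketched against a scratch file. The passes are
# separate commands (not chained with &&), because dd exits non-zero
# when it reaches the end of a real device.
TARGET=$(mktemp)                                 # stand-in for /dev/sda
dd if=/dev/zero    of="$TARGET" bs=1M count=4 2>/dev/null
dd if=/dev/urandom of="$TARGET" bs=1M count=4 2>/dev/null
dd if=/dev/zero    of="$TARGET" bs=1M count=4 2>/dev/null
echo "wipe finished"
rm -f "$TARGET"
```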

So far, it's been 48 hours, and the PC with the 640GB drive is still on the urandom phase! Looks like it won't complete for another day.

I thought I would do three passes for extra security. The PCs had some confidential data on them. I should ask someone at a hard disk recovery lab about this.

I will try specifying a bigger block size next time - I'm not sure if there are any downsides to this. The hard disks are being written to at about 15 to 20 MB/s at present.

#8 +GreenMartian

GreenMartian

    Neowinian Senior

  • Joined: 28-August 04
  • Location: adelaide, au

Posted 07 May 2013 - 12:15

A single zerofill pass should probably be enough.
http://en.wikipedia....erwrites_needed
Which has a reference to:
http://www.wired.com...assange-laptop/
where it's mentioned that the army data forensic contractors could not recover anything from Manning's drive before the single zero-fill pass.

#9 +BudMan

BudMan

    Neowinian Senior

  • Tech Issues Solved: 84
  • Joined: 04-July 02
  • Location: Schaumburg, IL
  • OS: Win7, Vista, 2k3, 2k8, XP, Linux, FreeBSD, OSX, etc. etc.

Posted 07 May 2013 - 12:57

"The PCs had some confidential data on them"

So -- were they the launch codes for the US nuclear arsenal? Even if they were, a single zero write would remove the ability for anyone to recover them.

Since when is 20 MB/s the speed of a modern hard drive? This should have taken you a few hours to accomplish, not days and days!

here
http://www.vidarhole..._drive_data.pdf

Overwriting Hard Drive Data: The Great Wiping Controversy

Abstract. Often we hear controversial opinions in digital forensics on the required or desired number of passes to utilize for properly overwriting, sometimes referred to as wiping or erasing, a modern hard drive. The controversy has caused much misconception, with persons commonly quoting that data can be recovered if it has only been overwritten once or twice. Moreover, referencing that it actually takes up to ten, and even as many as 35 (referred to as the Gutmann scheme because of the 1996 Secure Deletion of Data from Magnetic and Solid-State Memory published paper by Peter Gutmann) passes to securely overwrite the previous data. One of the chief controversies is that if a head positioning system is not exact enough, new data written to a drive may not be written back to the precise location of the original data. We demonstrate that the controversy surrounding this topic is unfounded.

4 Conclusion
The purpose of this paper was a categorical settlement to the controversy surrounding the misconceptions involving the belief that data can be recovered following a wipe procedure. This study has demonstrated that correctly wiped data cannot reasonably be retrieved even if it is of a small size or found only over small parts of the hard drive. Not even with the use of a MFM or other known methods. The belief that a tool can be developed to retrieve gigabytes or terabytes of information from a wiped drive is in error.

Although there is a good chance of recovery for any individual bit from a drive, the chances of recovery of any amount of data from a drive using an electron microscope are negligible. Even speculating on the possible recovery of an old drive, there is no likelihood that any data would be recoverable from the drive. The forensic recovery of data using electron microscopy is infeasible. This was true both on old drives and has become more difficult over time. Further, there is a need for the data to have been written and then wiped on a raw unused drive for there to be any hope of any level of recovery even at the bit level, which does not reflect real situations. It is unlikely that a recovered drive will have not been used for a period of time and the interaction of defragmentation, file copies and general use that overwrites data areas negates any chance of data recovery. The fallacy that data can be forensically recovered using an electron microscope or related means needs to be put to rest.


