9 posts in this topic

Posted

I am getting ready to sell/give away some PCs. To securely erase their hard drives, I am using dd.

Below is what I have set to run (entered into a terminal):

dd if=/dev/zero bs=1024 of=/dev/sda && dd if=/dev/urandom bs=1024 of=/dev/sda && dd if=/dev/zero bs=1024 of=/dev/sda

I was expecting this to take about a day to complete. One machine (with a 640GB hard drive) has been going for a few hours now. Another machine (with a 200GB drive) seems to have completed (after about three to four hours) and displayed the following in the terminal:

dd: writing `/dev/sda': No space left on device
195360985+0 records in
195360984+0 records out
200049647616 bytes (200 GB) copied, 13789 s, 14.5 MB/s

I'm not sure if all three dd writes I specified took place, or if it got to the end of the hard disk after the first pass and quit. Could someone confirm?

I should have written a script instead of using "&&".
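If I understand && right, that would explain it: dd exits with a non-zero status when it hits "No space left on device", and a && b only runs b when a succeeds, so presumably only the first zero pass actually ran. A quick sanity check of the && behaviour (with false standing in for the failing dd):

```shell
# `a && b` only runs b when a exits 0. dd exits non-zero when it
# fills the device, so the urandom and second zero passes would
# never start. `false` stands in for the dd that ran out of space.
result="first pass only"
false && result="later passes ran too"
echo "$result"   # prints "first pass only"
```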

dd also doesn't display a progress bar. Should I also be specifying the size of the hard disk? If so, how is this done? This may also fix the problem above.


Posted

dd if=/dev/zero bs=1024 of=/dev/sda count=3

Do that.

Never mind, actually that'll only write 3 KB of data (count sets the number of 1024-byte blocks, not the number of passes).

Also, run ps ax | grep dd to find dd's PID, then kill -USR1 (the_PID_of_DD_here) will make dd print its progress.
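Roughly like this, assuming GNU coreutils dd (which reports its statistics on SIGUSR1; BSD/macOS dd wants SIGINFO instead), with a harmless /dev/zero-to-/dev/null copy standing in for the real wipe:

```shell
# Start a long-running dd in the background, then signal it to
# report progress without stopping it. /dev/null is the target,
# so nothing is actually overwritten here.
dd if=/dev/zero of=/dev/null bs=1M count=1000000 2>/tmp/dd-progress.log &
DD_PID=$!
sleep 1
kill -USR1 "$DD_PID"    # GNU dd prints "records in/out" stats to stderr
sleep 1
kill "$DD_PID" 2>/dev/null
grep "records in" /tmp/dd-progress.log
```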


Posted

Doing anything with random is just a waste of time; writing zeros is more than secure enough.

Simple:

dd if=/dev/zero of=/dev/sda

is all that is needed. If you want to speed it up a bit more, use bs=1m.



Posted

Doing anything with random is just a waste of time; writing zeros is more than secure enough.

Simple:

dd if=/dev/zero of=/dev/sda

is all that is needed. If you want to speed it up a bit more, use bs=1M.

(Y)


Posted

It depends on the M/m thing: Mac requires either lower or upper case and Linux requires the inverse; I can never remember which way round it is or why it's such a stupid system.

Alternatively, there's always DBAN (Darik's Boot and Nuke).
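For what it's worth, I think on Linux it's the uppercase one: GNU dd accepts bs=1M and rejects bs=1m, while BSD dd on macOS is the other way round. At least this runs fine on a GNU box:

```shell
# GNU dd understands the uppercase M suffix (1 MiB blocks);
# /dev/null as the target keeps this harmless to run.
dd if=/dev/zero of=/dev/null bs=1M count=1 && echo "bs=1M accepted"
```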


Posted

It depends on the M/m thing: Mac requires either lower or upper case and Linux requires the inverse; I can never remember which way round it is or why it's such a stupid system.

Alternatively, there's always DBAN (Darik's Boot and Nuke).

DBAN (Darik's Boot and Nuke) was brilliant, but isn't it abandonware? There was a bug in the latest version where it almost never worked on newer drives, unlike the previous release.


Posted

Thanks for all your replies. :)

I ended up writing a script (I don't think using && is correct).
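For anyone finding this later, a rough sketch of the sort of thing I ended up with (simplified; the target is a parameter rather than a hard-coded /dev/sda, so treat it as illustrative, not my exact script):

```shell
#!/bin/sh
# Three-pass wipe: zeros, random, zeros. Unlike a && chain, the loop
# carries on to the next pass even though each dd exits non-zero
# when it reaches the end of the device.
TARGET="$1"
for SRC in /dev/zero /dev/urandom /dev/zero; do
    echo "Pass with $SRC..."
    dd if="$SRC" of="$TARGET" bs=1M
done
```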

So far, it's been 48 hours, and the PC with the 640GB drive is still on the urandom phase! Looks like it won't complete for another day.

I thought I would do three passes for extra security. The PCs had some confidential data on them. I should ask someone at a hard disk recovery lab about this.

I will try specifying a bigger block size next time - I'm not sure if there are any downsides to this. The hard disks are being written to at about 15 to 20 MB/s at present.
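Doing the arithmetic on those numbers (640 GB at the low end of the 15 to 20 MB/s I'm seeing), the timing actually adds up:

```shell
# ~640 GB at ~15 MB/s: roughly how long does each pass take?
SIZE_BYTES=640000000000
RATE_BYTES_PER_SEC=15000000
SECS=$((SIZE_BYTES / RATE_BYTES_PER_SEC))
echo "$((SECS / 3600)) hours per pass, $((3 * SECS / 3600)) hours for three passes"
# prints: 11 hours per pass, 35 hours for three passes
```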


Posted

A single zerofill pass should probably be enough.

http://en.wikipedia....erwrites_needed

Which has a reference to:

http://www.wired.com...assange-laptop/

where it's mentioned that the army data forensic contractors could not recover anything from Manning's drive before the single zero-fill pass.
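If it helps, a quick way to spot-check that a zero-fill actually took (this only samples the first 4 MiB, and a zero-filled file stands in here for the real device, which would be /dev/sda):

```shell
# tr -d '\0' strips zero bytes, so if nothing survives the pipe,
# the sampled region is all zeros. A small zero-filled file stands
# in for the wiped drive.
dd if=/dev/zero of=/tmp/fake-disk.img bs=1M count=4 2>/dev/null
if [ -z "$(head -c 4194304 /tmp/fake-disk.img | tr -d '\0')" ]; then
    echo "sampled region is all zeros"
fi
# prints "sampled region is all zeros"
```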


Posted

"The PCs had some confidential data on them"

So -- were they the launch codes for the US nuclear arsenal? Even if they were, a single zero write would remove the ability for anyone to recover them.

Since when is 20 MB/s the speed of a modern HDD? This should have taken you a few hours to accomplish, not days and days!

here

http://www.vidarhole..._drive_data.pdf

Overwriting Hard Drive Data: The Great Wiping Controversy

Abstract. Often we hear controversial opinions in digital forensics on the required or desired number of passes to utilize for properly overwriting, sometimes referred to as wiping or erasing, a modern hard drive. The controversy has caused much misconception, with persons commonly quoting that data can be recovered if it has only been overwritten once or twice. Moreover, referencing that it actually takes up to ten, and even as many as 35 (referred to as the Gutmann scheme because of the 1996 Secure Deletion of Data from Magnetic and Solid-State Memory published paper by Peter Gutmann) passes to securely overwrite the previous data. One of the chief controversies is that if a head positioning system is not exact enough, new data written to a drive may not be written back to the precise location of the original data. We demonstrate that the controversy surrounding this topic is unfounded.

4 Conclusion

The purpose of this paper was a categorical settlement to the controversy surrounding the misconceptions involving the belief that data can be recovered following a wipe procedure. This study has demonstrated that correctly wiped data cannot reasonably be retrieved even if it is of a small size or found only over small parts of the hard drive. Not even with the use of a MFM or other known methods. The belief that a tool can be developed to retrieve gigabytes or terabytes of information from a wiped drive is in error.

Although there is a good chance of recovery for any individual bit from a drive, the chances of recovery of any amount of data from a drive using an electron microscope are negligible. Even speculating on the possible recovery of an old drive, there is no likelihood that any data would be recoverable from the drive. The forensic recovery of data using electron microscopy is infeasible. This was true both on old drives and has become more difficult over time. Further, there is a need for the data to have been written and then wiped on a raw unused drive for there to be any hope of any level of recovery even at the bit level, which does not reflect real situations. It is unlikely that a recovered drive will have not been used for a period of time and the interaction of defragmentation, file copies and general use that overwrites data areas negates any chance of data recovery. The fallacy that data can be forensically recovered using an electron microscope or related means needs to be put to rest.


