
Depends on:

- how often you sync. Is that 200 GB a day?

- the drive itself. Is it an older HDD or an SSD? They have different endurance ratings.

But in general, HDDs and SSDs have a figure in their specs that says how much write traffic they can take before they may start to become unreliable. For SSDs this is the endurance rating, usually quoted as total terabytes written (TBW) or drive writes per day (DWPD); HDDs have a similar workload rating, and both also quote an MTBF for overall reliability. This number is usually so high that you never reach it in the drive's lifetime. If you write a lot more per day than foreseen, then yes, you will most likely shorten the drive's life, but it would still be very long. The number differs per drive, and between HDDs and SSDs, which is why I was asking what model it is. We could then check the exact figure and tell you the impact.
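To make that concrete, here is a back-of-the-envelope sketch of the arithmetic. The TBW value and daily write rate below are made-up illustrative numbers, not the specs of any particular drive; you would substitute the figures from your model's datasheet.

```python
# Rough endurance estimate: how long until a drive's rated TBW
# (total terabytes written) is reached at a given daily write rate.
# All numbers here are hypothetical examples, not real drive specs.

def years_until_tbw(tbw_terabytes: float, gb_written_per_day: float) -> float:
    """Years of writing at the given rate until the TBW rating is reached."""
    total_gb = tbw_terabytes * 1000          # datasheets use 1 TB = 1000 GB
    return total_gb / gb_written_per_day / 365

# Hypothetical 1 TB SSD rated for 600 TBW, syncing 200 GB every day:
print(round(years_until_tbw(600, 200), 1))   # ~8.2 years
```

Even at a constant 200 GB a day, a drive with a mid-range endurance rating lasts years before the rating is exhausted, which is the point being made above.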

  On 22/06/2017 at 06:40, Odom said:

Thanks for the info! I'm actually using an HDD, and it doesn't do many writes in a day. But I'm a bit worried about the constant monitoring of the file system, which also takes some CPU. Can that hurt the drive?

What monitoring of the file system are you talking about? Most sync applications that back up in delta mode (only new or changed files since the last run) check your files only at each run, though this depends on the application.

I use a backup tool at home that runs Monday, Wednesday and Friday. On Monday it runs for the first time and does a full backup: it goes through my entire drive, none of the files exist at the destination yet, so all of them are copied. On Wednesday it goes through all my files again (reads only) to determine whether each previously backed-up file already exists at the destination and, if so, whether it was changed since the last run. Only changed files are backed up. So the first run on Monday is reads and writes, while Wednesday, where not much changed, is mostly reads.
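The Monday/Wednesday logic above can be sketched in a few lines. This is a minimal stand-in, not any real tool's implementation: it compares size and modification time against a record from the previous run, where real backup software is usually more careful (checksums, attributes, and so on).

```python
# Sketch of a delta check: a file is copied only if it is new or has
# changed since the state recorded on the previous run.
import os
import tempfile

def needs_backup(path: str, last_run: dict) -> bool:
    """True if the file is new or changed since the recorded state."""
    st = os.stat(path)
    prev = last_run.get(path)
    if prev is None:                # never seen before -> copy (Monday)
        return True
    size, mtime = prev
    return st.st_size != size or st.st_mtime != mtime   # changed since?

def record_state(path: str, last_run: dict) -> None:
    """Remember size and mtime so the next run can do a read-only check."""
    st = os.stat(path)
    last_run[path] = (st.st_size, st.st_mtime)

# Demo: first run copies, second run (file unchanged) only reads.
state = {}
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
path = f.name
print(needs_backup(path, state))    # True  (full backup: file is new)
record_state(path, state)
print(needs_backup(path, state))    # False (unchanged: read-only check)
os.remove(path)
```

The second call never writes anything, which is why a delta run on a mostly unchanged drive is almost entirely reads.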

 

This obviously depends on your strategy. If you actually copy the same data in its entirety every single time, then of course you would also have a lot of writes every time. But your files are not constantly monitored.

 

If you use a sync tool that copies data as soon as it changes, the tool is not monitoring your HDD; it is watching for write events in the folders you specified. If there are none, there is nothing to monitor. When writes do occur, Windows notifies the app, and once they are completed the tool backs up the data.
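The flow just described can be modeled with a small sketch. Real implementations rely on OS facilities (ReadDirectoryChangesW on Windows, inotify on Linux, often via a library such as watchdog); the class below is a hypothetical stand-in for that plumbing, just to show that the tool sits idle until the OS reports a completed write.

```python
# Event-driven sync sketch: the tool registers a handler and does nothing
# until notified of a completed write. The OS notification layer is faked
# here; only the reaction logic is shown.

class SyncTool:
    def __init__(self, watched_folder: str):
        self.watched_folder = watched_folder
        self.backed_up = []                  # files copied so far

    def on_write_completed(self, path: str) -> None:
        """Called by the (simulated) OS layer once a write finishes."""
        if path.startswith(self.watched_folder):
            self.backed_up.append(path)      # back up only what changed

tool = SyncTool("/data/projects")
# No events -> the tool is idle; it is not scanning the drive.
# A write completes inside the watched folder -> the tool is notified:
tool.on_write_completed("/data/projects/report.txt")
print(tool.backed_up)   # ['/data/projects/report.txt']
```

Note that the drive is never polled: between events the handler simply does not run, which is why this kind of monitoring costs next to nothing when your files are idle.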

If you sync a location with 200 GB of data and that data changes constantly throughout the day, then yes, it will also be copied all the time.

 

As for CPU usage, just open your Task Manager and look at the resources the application is using; that is easy to check. Keep Task Manager open, make some changes to those files, and you'll see how it reacts.
