A peculiar problem with a HDD

Recommended Posts

Yogurth    2,188

  A few days ago I bought an Nvidia GTX 1060 to replace my aging 560 Ti. I placed it in the system, booted up, updated the Nvidia drivers, and rebooted to a stuck BIOS. After disconnecting one component after another, I realized that my 3TB WD Red was the culprit, or so I thought. I reinstalled the system using only the boot drive just to be sure nothing was off with the system itself.

  Hooked everything up afterwards and the same symptom reappeared: the stuck BIOS. I removed the WD Red and placed it in a USB HDD cradle (dock), tried rebooting, and was greeted with the same stuck BIOS. I unhooked the drive from USB and booted normally into Windows 10. Hooked up the drive while Windows was running, and it shows up in fully working condition without any kind of problem or error.

 I have 3 more HDDs in my box, one of them the same model of 3TB WD Red as the one I removed, and the motherboard has no issues booting with it.

 

Any ideas what is causing this erratic behavior?

 

My box with Windows 10 Pro x64: i7 3770K, Asus P8Z77-V LX2, 16GB of RAM, Nvidia GTX 1060 3GB, 150GB WD Raptor, 1TB WD Caviar Black, 3TB WD Red, ASUS DVD R/RW, and the removed WD Red.

Matthew S.    643

Drive is starting to fail

Mindovermaster    1,674
5 minutes ago, Matthew S. said:

Drive is starting to fail

Yeah, what he said. Back up everything NOW; don't wait until it's gone.

Yogurth    2,188
27 minutes ago, Mindovermaster said:

Yeah, what he said. Back up everything NOW; don't wait until it's gone.

It is a backup drive, the originals are all good :) However, I failed to mention that I did check the drive with a few HDD utilities and it came out perfect in all of them. S.M.A.R.T. is also perfect.

Mindovermaster    1,674
9 minutes ago, Yogurth said:

It is a backup drive, the originals are all good :) However, I failed to mention that I did check the drive with a few HDD utilities and it came out perfect in all of them. S.M.A.R.T. is also perfect.

What did you use besides SMART? Software doesn't check for mechanical defects. It could be a problem with the SATA header. It could really be anything. Just get it replaced or RMA'd if you still have warranty on it.
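For anyone wanting to go beyond the GUI tools mentioned here: smartmontools (7.0+) can dump a drive's SMART report as JSON via `smartctl -a -j /dev/sdX`. Below is a minimal sketch of scanning such a report for the attributes that most often precede failure; the sample data is hypothetical and heavily trimmed, and `smart_warning_signs` is an illustrative helper, not part of any tool.

```python
import json

# Hypothetical, heavily trimmed excerpt of `smartctl -a -j /dev/sda` output.
# A real report contains many more fields; only the bits we inspect are shown.
sample_report = json.loads("""
{
  "smart_status": {"passed": true},
  "ata_smart_attributes": {
    "table": [
      {"id": 5,   "name": "Reallocated_Sector_Ct",  "raw": {"value": 0}},
      {"id": 197, "name": "Current_Pending_Sector", "raw": {"value": 0}},
      {"id": 198, "name": "Offline_Uncorrectable",  "raw": {"value": 0}}
    ]
  }
}
""")

def smart_warning_signs(report: dict) -> list:
    """Return a list of red flags found in a smartctl JSON report."""
    flags = []
    if not report.get("smart_status", {}).get("passed", False):
        flags.append("overall SMART status: FAILED")
    # Nonzero reallocated/pending/uncorrectable sector counts often precede
    # failure even while the overall status still reads "passed".
    for attr in report.get("ata_smart_attributes", {}).get("table", []):
        if attr["id"] in (5, 197, 198) and attr["raw"]["value"] > 0:
            flags.append(f"{attr['name']} = {attr['raw']['value']}")
    return flags

print(smart_warning_signs(sample_report))  # → []
```

On a live system you would feed it the parsed output of `smartctl -a -j /dev/sdX`. Note the caveat raised in this thread: SMART can read perfectly and the drive can still fail.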

Yogurth    2,188
Posted (edited)
19 minutes ago, Mindovermaster said:

What did you use besides SMART? Software doesn't check for mechanical defects. It could be a problem with the SATA header. It could really be anything. Just get it replaced or RMA'd if you still have warranty on it.

I used HDD Sentinel, HDDScan and CrystalDiskInfo. The drive in question is a WD30EFRX, the most reliable drive on the planet :) with a 0.0 failure rate. This might not mean much, but the timing and nature of the problem confuse me: it coincided with the arrival of the new card, and the drive works when used from a cradle. This makes me think that the GPU perhaps occupies more lanes, preventing the last SATA port from working... if that makes any sense.

Mindovermaster    1,674
2 minutes ago, Yogurth said:

I used HDD Sentinel, HDDScan and CrystalDiskInfo. The drive in question is a WD30EFRX, the most reliable drive on the planet :) with a 0.0 failure rate. This might not mean much, but the timing and nature of the problem confuse me: it coincided with the arrival of the new card, and the drive works when used from a cradle. This makes me think that the GPU perhaps occupies more lanes, preventing the last SATA port from working... if that makes any sense.

Now, that could easily be it. If you can, just get an enclosure; that might be better...

Jim K    12,286
16 minutes ago, Yogurth said:

This makes me think that the GPU perhaps occupies more lanes, preventing the last SATA port from working... if that makes any sense.

That's not it.

 

Have you tried a different data cable?  Have you tried a different SATA port?  Have you tried a known good drive on the "last port" to rule out drive vs. port?

Yogurth    2,188
3 minutes ago, Jim K said:

That's not it.

 

Have you tried a different data cable?  Have you tried a different SATA port?  Have you tried a known good drive on the "last port" to rule out drive vs. port?

Different cable, yes, with the same results; different port, not yet. I'll try tomorrow if I can find the time.

+DevTech    1,405
On 3/12/2019 at 5:53 PM, Yogurth said:

I used HDD Sentinel, HDD Scan and CrystalDiskInfo, The drive in question is WD30EFRX, the most reliable drive on the planet :) with 0,0 failure rate. This might not mean much but the timing  and nature of the problem confuses me, it coincided with the arrival of the new card and it works if used from a cradle. This makes me think that the GPU perhaps occupies more lanes and with that preventing the last SATA port working...if that makes any sense.

"WD30EFRX, the most reliable drive on the planet :) with a 0.0 failure rate"

 

NOT possible.

 

What LSD-soaked source on the internet provided that information?

 

 

Yogurth    2,188
12 hours ago, DevTech said:

"WD30EFRX, the most reliable drive on the planet :) with a 0.0 failure rate"

 

NOT possible.

 

What LSD-soaked source on the internet provided that information?

 

 

https://www.makeuseof.com/tag/most-reliable-hard-drives/

Matthew S.    643

That's the biggest load of FUD I've seen in a while. No drive, and I mean NO DRIVE, can possibly have a 0.0 failure rate; drive failures are random events at best.


neoraptor    56

Any chance the new video card is drawing more power and there isn't enough?


Yogurth    2,188
1 hour ago, Matthew S. said:

That's the biggest load of FUD I've seen in a while. No drive, and I mean NO DRIVE, can possibly have a 0.0 failure rate; drive failures are random events at best.

"As of June 30, 2018 we had 100,254 spinning hard drives in Backblaze’s data centers. Of that number, there were 1,989 boot drives and 98,265 data drives. This review looks at the quarterly and lifetime statistics for the data drive models in operation in our data centers. We’ll also take another look at comparing enterprise and consumer drives, get a first look at our 14 TB Toshiba drives, and introduce you to two new SMART stats. Along the way, we’ll share observations and insights on the data presented and we look forward to you doing the same in the comments."

 

 

This is the main source of the results, and it looks rather legit. That doesn't mean that not a single drive will fail during its lifetime, but these were directly compared to other enterprise drives and still came out on top. That says something.

37 minutes ago, neoraptor said:

Any chance the new video card is drawing more power and there isn't enough?

While I haven't performed the drive and port swaps just yet, my previous card, the 560 Ti, was far less efficient than the 1060, so that most likely isn't the issue.

+DevTech    1,405
39 minutes ago, Yogurth said:

"As of June 30, 2018 we had 100,254 spinning hard drives in Backblaze’s data centers. Of that number, there were 1,989 boot drives and 98,265 data drives. This review looks at the quarterly and lifetime statistics for the data drive models in operation in our data centers. We’ll also take another look at comparing enterprise and consumer drives, get a first look at our 14 TB Toshiba drives, and introduce you to two new SMART stats. Along the way, we’ll share observations and insights on the data presented and we look forward to you doing the same in the comments."

 

 

This is the main source of the results, and it looks rather legit. That doesn't mean that not a single drive will fail during its lifetime, but these were directly compared to other enterprise drives and still came out on top. That says something.

 

 

 

I have a thread where I try to track HD reliability:

 

 

For convenience, the data:

 

https://www.backblaze.com/blog/hard-drive-stats-for-2018/

 

So I need to stick with the LSD theory: that a human could interpret that data and come away with the ridiculous thought that "Hey, that WD Red is the most reliable drive."

 

 

1. WDC 3 TB --> Completely pulled from service in 2018

 

2. WDC 4 TB --> Completely pulled from service in 2018

 

3. WDC 6 TB --> A horrible failure rate of 5.5% in 2016, which settled to 2% in 2017 and again in 2018; obviously planned for phase-out.

 

 

Only a complete idiot (or Fox News) would interpret that data and conclude that the WD is the most reliable drive!

 

For some rough analysis of which drives might be worth buying based on the real data, see my above-mentioned thread.

 

 

 

 

 

 

SnailSlug    9

So the Backblaze lifetime failure rate for the WD30EFRX was 4.96% over 1.3 million drive-days of use between April 2013 and June 2018. There are no more WD30EFRX drives in their data centers; all of them were replaced by the next quarter. In fact, they no longer have any 3TB drives at all.

 

It was the highest failure rate among drives active in that quarter: roughly ten times higher than the HMS5C4040ALE640 and BLE640, which had failure rates of half a percent over 9.8 million and 12.2 million drive-days of use.

 

The vast majority of drives fail either within one year of operation or after several years of operation. It is extremely likely that the drive is failing. You can go ahead and ignore us if you want, but it's extremely likely that you'll lose all your data.
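The annualized figure behind numbers like these is straightforward to reproduce: Backblaze normalizes failures over accumulated drive-days. A quick sketch, noting that the failure count below is a hypothetical round number chosen to land near the 4.96% quoted above, not Backblaze's exact count:

```python
def annualized_failure_rate(failures: int, drive_days: float) -> float:
    """Annualized failure rate (AFR) as a percentage.

    AFR = (failures / drive_days) * 365 * 100, i.e. the failure count
    normalized to one drive running continuously for one year.
    """
    return failures / drive_days * 365 * 100

# Hypothetical: ~177 failures over the 1.3 million drive-days quoted above
# comes out close to the 4.96% lifetime AFR reported for the WD30EFRX.
print(round(annualized_failure_rate(177, 1_300_000), 2))  # → 4.97
```

The same formula explains why a fleet of ~100,000 drives can report sub-1% rates for some models: a half-percent AFR over 12.2 million drive-days still represents well over a hundred dead drives.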

+DevTech    1,405
2 minutes ago, SnailSlug said:

So the Backblaze lifetime failure rate for the WD30EFRX was 4.96% over 1.3 million drive-days of use between April 2013 and June 2018. There are no more WD30EFRX drives in their data centers; all of them were replaced by the next quarter. In fact, they no longer have any 3TB drives at all.

 

It was the highest failure rate among drives active in that quarter: roughly ten times higher than the HMS5C4040ALE640 and BLE640, which had failure rates of half a percent over 9.8 million and 12.2 million drive-days of use.

 

The vast majority of drives fail either within one year of operation or after several years of operation. It is extremely likely that the drive is failing. You can go ahead and ignore us if you want, but it's extremely likely that you'll lose all your data.

Yeah that's correct. There are ZERO decent WDC drives in the Backblaze data. Too bad, but that's the REAL DATA.

 

https://www.backblaze.com/blog/hard-drive-stats-for-2018/

 

More recent link.

 

Either way, taking the ONLY known source of REAL HARD DRIVE RELIABILITY DATA and then falsely interpreting it in a backwards manner to promote a particular product is not just WRONG, it is downright perverted...

 

Jim K    12,286

So now we're just going to ramble on about how Reds fail? How does this help the OP pinpoint why his computer freezes during POST when the HDD is plugged in... but the drive works fine in an external cradle after the system POSTs?

 

Assuming that all 6 of the SATA ports are being used... has the OP tried the troubled HDD on a known-good port (while leaving the "bad" port free)? Has the OP tried a known-good HDD on the port the troubled HDD was plugged into? It's as simple as swapping cables to rule out a board vs. HDD issue.

 

 

4 hours ago, neoraptor said:

Any chance the new video card is drawing more power and there isn't enough?

A GTX 1060 requires less power than his older 560 Ti... about 40-50 fewer watts.

SnailSlug    9

Well, you see, when you post a hard drive problem and then retort with a hard drive failure statistic, you tend to get an analysis of your own retort.

 

It's entirely possible for a drive to have only the boot-relevant portions corrupted. In such a case, the corruption won't matter until those portions are accessed, which makes a huge difference if you're not booting with the drive plugged in. POSTing with a drive attached is significantly different from using a drive in a computer that has already POSTed.

 

Also, roll a d20 once a day for a year. If it comes up 1, throw your drive in the trash. Does that sound terribly relevant to you? Because that's the kind of drive the OP is asking about.
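Taking the dice analogy above at face value, the compounding works out brutally. A sketch under the simplifying assumption of independent daily trials (real drive failures are not independent coin flips, but that's the point of the analogy):

```python
def prob_fail_within(days: int, daily_p: float) -> float:
    """Chance of at least one 'failure roll' over `days` independent trials."""
    return 1 - (1 - daily_p) ** days

# Rolling a 1 on a d20 (p = 1/20) once a day for a year: the chance of
# surviving the whole year is (19/20)**365, which is under 1e-8.
p = prob_fail_within(365, 1 / 20)
print(f"{p:.9f}")
```

Even a far gentler per-day probability compounds quickly, which is why "it still reads fine today" is weak reassurance for a drive that hangs the BIOS.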

Tidosho    631
9 hours ago, SnailSlug said:

The vast majority of drives fail either within one year of operation or after several years of operation. It is extremely likely that the drive is failing. You can go ahead and ignore us if you want, but it's extremely likely that you'll lose all your data.

Gotta love the bathtub curve, eh? It still doesn't generate enough paranoia in the general computing public to get them doing backups, though :)

 

Your issue could be a fluke, just one of those weird IT and electronics moments, but I'd still get ready to replace the drive, stats or no stats. The stats don't mean YOUR drive won't fail. Run a full surface regeneration with HD Sentinel to be certain. If it passes you've got reassurance, but still, don't be led into a false sense of security by some datacentre's statistics. Their drives are a tiny fraction of the drives out there.

 

And yeah, HD Sentinel... I wouldn't run a server without it sitting in my tray monitoring my drives. The email warnings of impending failure are a godsend. Oh, and buy a licence for it; it's well worth the money... :)


+DevTech    1,405
7 hours ago, Jim K said:

So now we're just going to ramble on about how Reds fail? How does this help the OP pinpoint why his computer freezes during POST when the HDD is plugged in... but the drive works fine in an external cradle after the system POSTs?

 

Assuming that all 6 of the SATA ports are being used... has the OP tried the troubled HDD on a known-good port (while leaving the "bad" port free)? Has the OP tried a known-good HDD on the port the troubled HDD was plugged into? It's as simple as swapping cables to rule out a board vs. HDD issue.

 

 

A GTX 1060 requires less power than his older 560 Ti... about 40-50 fewer watts.

I personally think it is a VITAL FUNCTION of what we hope to accomplish in these forums NOT to promote BAD or FALSE information, and to correct it whenever we come across it.

 

I like WD Reds and have never had an issue with one, but there is ZERO support in the Backblaze data for what was a FALSE CLAIM.

 

The usefulness and applicability of the Backblaze data is discussed in my HD Reliability thread.

 

---------------------

 

As for the particular issue in this thread, I see NO evidence so far that the WD Red is the cause of the problem, but testing it is always a good idea.

 

Power drain also seems highly unlikely.

 

Motherboards can reallocate PCIe lanes to other devices, so that is more worthy of consideration, I think.

 

 

 

 

