I just learned the hard way not to trust new file system technologies, and I wanted to share my experience with you all in the hope that you can learn from my mistakes.
I received two new drives in the mail yesterday, and after 24 hours of stress testing in a separate system, I went to put them in my server case. My server consists of the following:
LSI 9260-8i RAID controller
Intel RES2SV240NC SAS expander
4x Supermicro 5-in-3 drive cages
4x Samsung HD204UI 2TB drives (RAID 10)
5x Seagate 3TB drives (RAID 5)
The Samsung drives were formatted as ReFS and were used solely for my virtual machines (email, NZB downloads, etc). When I went to insert the two new drives, something happened and the drive lights lit up on all occupied bays. I rebooted the server, and when it came back up it reported that the cache was lost but the controller had recovered. Everything came back fine, except Hyper-V would not load any VMs. I checked in Windows, and drive E (my VM/ReFS drive) did not show a used/free space bar. I clicked it and got a message saying that the drive repair was unsuccessful.
So what does this mean? Something happened to the supposedly "Resilient" file system, it couldn't repair the issue, and it basically wiped my drives (well, not wiped, but it thinks the volume is empty and won't let me access it). See image:
The only fix is to reformat and restore from backup, which in my case is about a month old.
Moral of the story? Don't trust ReFS yet, and keep better backups. My two other volumes, both of which are NTFS, were fine. I have reformatted the RAID 10 volume as NTFS instead of ReFS, and I will be investing in a battery backup unit for the LSI controller, just in case this happens again.