Doesn't that response contradict what you were saying earlier in the thread about 10^14 URE rates (on WD Reds) making a RAID 5 rebuild failure almost certain? I was under the same impression about rebuild failures until, out of curiosity, I searched and found the link I posted. Both what I linked and what you linked seem to reach the same conclusion: you can't derive the real-world probability from that math (my link because those failure rates don't show up in practice, your link because the underlying assumptions are wrong).
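For anyone following along, here's the naive back-of-envelope math we're both talking about: assume every bit has an independent chance of an unrecoverable read error at the spec-sheet rate, then multiply across a whole rebuild. The drive size and array width below are just example numbers I picked; the whole point of both links is that real drives don't actually behave like this.

```python
def naive_rebuild_failure_prob(drive_tb, surviving_drives, ure_rate=1e-14):
    """P(at least one URE) while reading every bit of the surviving
    drives during a RAID 5 rebuild, assuming independent per-bit
    errors at the spec-sheet rate (the assumption both links dispute)."""
    bits_read = surviving_drives * drive_tb * 1e12 * 8
    return 1 - (1 - ure_rate) ** bits_read

# Example: 4-drive array of 10 TB consumer disks -> 3 survivors to read
p = naive_rebuild_failure_prob(drive_tb=10, surviving_drives=3)
print(f"{p:.0%}")  # ~91% under this naive model
```

That ~91% is where the "rebuilds will almost certainly fail" claim comes from, and it's exactly the figure the articles argue you shouldn't take at face value.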
It doesn't disagree with it, but I think it adds useful information to the mix. I'd consider RAID 5 on large consumer-class HDDs very suspect, but it won't necessarily end in a dead array; a lot depends on the RAID controller. Enterprise-grade gear like the LSI controller in my server handles a rebuild failure much more gracefully than a cheap HighPoint card or software RAID, in my experience. Since the LSI can restart a rebuild, you can recover from a failed read far more easily.
Put simply, I'd be suspicious of a large RAID 5 array built on consumer-quality HDDs and a cheap RAID controller.
I make heavy use of RAID 5, but I currently keep it on SAS drives for their better overall reliability. Either way, backups are paramount; without them you still have substantial risk on the table, especially if you're not using an enterprise-grade controller, since a controller failure can kill the array and a lot of lower-end hardware can't import arrays created by other controllers. I moved a RAID 5 array from an LSI 8308ELP to a Dell PERC H710 (LSI-based, but roughly 4 years newer), and the H710 imported the array without any loss and rebuilt the parity to its newer standards intact. That kind of portability helps in a hardware controller failure situation, but only if you have a quality hardware controller...