I believe this discussion is more related to a home or small office - not an enterprise setup. That's the track I took in the other thread, where he was asking about a 4-bay NAS; neither model he was looking at would be used in an enterprise.
So I look at this from the point of view of the type of files I have in my home and what I serve up off my storage. These are media files, video and music mostly, none of which need parity: if they are lost I can replace them from the original media sitting on my shelf, or if need be get them again through other channels.
Now, what is critical is a small subset of these files - my home videos, for example. Those I have backed up in multiple locations on different storage: cloud, a disk in a different system, optical media on my shelf, another copy at my son's home, etc. So why should I create parity for, say, my rip of Scarface or my Grateful Dead CDs? For peace of mind, these "critical" files are also duplicated onto another disk in the pool automatically, so you get the same sort of protection you get with RAID 1 while only using a subset of your storage pool for the irreplaceable home movies.
Money spent on that parity seems wasted to me; if the drive where those files are stored died, I could just re-rip them (or restore from my backup). There is no critical need for these files to stay online through a disk failure - which is the whole purpose of RAID. That has little use in a home setup, where only a subset of the files in storage is critical and needs to remain online through a hardware failure such as a dead disk.
In his example of 1TB of storage - why would RAID 1 not be the better option? With 2x3TB or even 2x2TB he covers his storage needs at lower cost while still having room for growth that should cover him for quite a while.
RAID 5 is better suited when you need a specific amount of storage but can't achieve it within your cost constraints with a mirrored setup. Say he needed 6TB of storage - well, there are no 6TB disks as of yet. So he could do 4x3TB in RAID 10 or 5, or 3x3TB in RAID 5, or 4x2TB in RAID 5, etc. But again, how much of that storage actually requires parity? If all of it, then sure, RAID 10 or 5 might make sense.
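Just to sanity-check the arithmetic behind those options, here is a quick sketch of the standard usable-capacity formulas (the function is mine, purely illustrative - sizes in TB, equal-size disks assumed):

```python
def raid_usable(n, size_tb, level):
    """Usable capacity in TB for n equal disks at a given RAID level."""
    if level == 1:
        return size_tb            # everything is mirrored
    if level == 5:
        return (n - 1) * size_tb  # one disk's worth goes to parity
    if level == 10:
        return n // 2 * size_tb   # striped mirrors: half the raw space
    raise ValueError(f"unsupported RAID level: {level}")

print(raid_usable(4, 3, 10))  # 4x3TB RAID 10 -> 6 TB
print(raid_usable(3, 3, 5))   # 3x3TB RAID 5  -> 6 TB
print(raid_usable(4, 2, 5))   # 4x2TB RAID 5  -> 6 TB
```

All three layouts hit the 6TB target - the difference is how many drives you have to buy up front and how much of the raw space you give up.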
Or what if he has only 1TB of critical data and 5TB of stuff that's merely nice to have digital access to, like movies and music? I could accomplish that with 2x4TB in a pool where my 1TB is duplicated on each disk. That leaves me 6TB of storage - 5 for my other stuff and 1TB for growth - at a much lower cost and with better flexibility, since I only need 2 disks. When I need more space, I can add another disk to the pool, and its connection and size don't matter: it could be, say, a 2TB eSATA or even USB disk. If I wanted, I could then duplicate my 1TB of critical data across all 3 disks in the pool and still have an extra 1TB to play with. Lots of different scenarios are viable as my storage pool grows.
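The pool arithmetic above works out like this - a simplified sketch assuming duplication just stores N full copies, one per disk (the function name is mine, not anything from a pooling product):

```python
def pool_free_after_duplication(disk_sizes_tb, critical_tb, copies):
    """Space left in a simple drive pool after storing critical_tb
    of duplicated data with `copies` replicas (at most one per disk)."""
    assert copies <= len(disk_sizes_tb), "one replica per disk max"
    raw = sum(disk_sizes_tb)
    return raw - critical_tb * copies

print(pool_free_after_duplication([4, 4], 1, 2))     # 2x4TB, 2 copies -> 6 TB left
print(pool_free_after_duplication([4, 4, 2], 1, 3))  # add a 2TB disk, 3 copies -> 7 TB left
```

So even after triplicating the critical 1TB onto the new disk, there's a full extra 1TB of usable space compared to the two-disk layout.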
Not having to put a minimum of 3 disks into use all at the same time allows me to grow my storage using whatever size and connection type gives me the best bang for the buck. As we all know, disks only get bigger, faster, and cheaper next month. This approach lets me stagger disk purchases to take advantage of lower prices when I actually need the storage, rather than having to calculate how much to put online now to cover what I'll need 2 years from now.
It also allows for retirement of your OLD disks before they fail, as you naturally grow your storage by replacing older/slower/smaller disks with faster/bigger ones without requiring more slots.
If need be, I can move the disks in my pool to a new box without worrying about the RAID controller in it, or the lack of one. Say I need to take a bunch of media to a remote location - I can just take a disk out of the pool and access the files directly with anything that can read the filesystem I used, in my case plain old NTFS.
The software I am using monitors the disks and can notify me of possible issues, be it physical problems reading sectors or SMART data pointing to a likely failure. It can even move files off a suspect disk if space is available elsewhere in the pool.
Let's take a look at your 4x3TB. From the math I have seen, there is something like a 56% chance that in reading 10TB of data you will encounter an unreadable bit and your rebuild will fail. So when one disk fails, it's a coin toss whether you'll be able to rebuild the array from the parity you spent good money creating. Also, you more than likely built that array from disks purchased all at the same time, most likely from the same batch - and once one disk in a batch fails, the probability of another disk in that same batch failing goes up.
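For anyone wanting to check that figure, here's the math behind it, assuming the 1-in-10^14-bits unrecoverable read error (URE) rate commonly quoted on consumer drive datasheets and treating bit errors as independent:

```python
import math

def rebuild_failure_probability(tb_read, ure_rate=1e-14):
    """Probability of hitting at least one unrecoverable read error
    while reading tb_read (decimal) terabytes, i.e.
    1 - (1 - ure_rate)**bits, computed without floating-point underflow."""
    bits = tb_read * 1e12 * 8  # terabytes -> bits
    return -math.expm1(bits * math.log1p(-ure_rate))

print(f"10 TB:  {rebuild_failure_probability(10):.0%}")   # ~55%
print(f"100 TB: {rebuild_failure_probability(100):.2%}")  # ~99.97%
```

A 10TB rebuild works out to roughly a 55% chance of failure at that error rate - right in line with the "coin toss" above - and the same formula at 100TB gives the near-certain failure mentioned further down. Enterprise drives rated at 1e-15 fare much better, but that's exactly the added cost in question.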
And what sort of disks are you using to build this RAID in the first place? Are they enterprise-quality drives designed to sit in an array and be read and written to constantly? Those disks are normally more costly - does that added cost make sense in a home setup serving media files?
It's great that you've had success with RAID 5 in the past, but that doesn't mean it meets the needs of today, or makes sense given the size and speed of the disks available now and the other ways of merging them so their combined space is accessible in one location.
I'm not talking enterprise, where files need to be online 24/7/365 or money is lost - I'm talking a home or small office, small budgets, etc. Even then, you're seeing the enterprise world move away from typical RAID arrays, because as disks get larger and larger the likelihood of a failure on rebuild grows. From the math I have seen, with 100TB you're something like 99.9% sure to hit an unreadable bit trying to rebuild the array after a disk failure.
edit: BTW for anyone curious I am using https://stablebit.com/
and I can not say enough about their support - it just freaking rocks!! For the small cost of their software, you can not find better support.

I recently ran into some issues using a 3TB 4K-sector disk in my N40L, where I use passthrough (physical RDM) to give the VM direct access to the disk so it can read SMART data, etc. The Windows VM just was not seeing the full size of the disk or the GPT information correctly. ESXi saw it as 3TB no problem and could manipulate partitions on it just fine using partedUtil, but Windows was reporting its size as -128MB, or 0 in Disk Manager. Connected to a Linux VM, I had no problems using gdisk to manipulate and verify the GPT, and parted to create partitions. So I just created the 3TB partition in Linux and then attached the disk to my Windows VM. That worked great - but the Scanner portion of their software was still using the info Windows was giving about the disk, which was not correct.

Within a few days of chatting with them via their support system, the developer created new beta versions of the Scanner that query the disk directly when Windows reports odd information. It works great now: Scanner and the pool both report the correct size of the disk, scanning of all the sectors works, etc. They even offered to remote in and take a look if need be, even though the issue is clearly with Windows and/or the ESXi passthrough. In the end there was no need for that, but we did have the meeting scheduled. I really can not say enough good things about their product and support.