Posted 23 August 2012 - 11:29
Is it worth the price premium, when only "0.22% of DIMMs suffer an uncorrectable error every year"? : source
ECC really is only needed in the most important of applications - is what you need THAT important?
About a third of machines and over 8% of DIMMs in our fleet saw at least one correctable error per year. Our per-DIMM rates of correctable errors translate to an average of 25,000–75,000 FIT (failures in time per billion hours of operation) per Mbit and a median FIT range of 778–25,000 per Mbit (median for DIMMs with errors), while previous studies report 200–5,000 FIT per Mbit. The number of correctable errors per DIMM is highly variable, with some DIMMs experiencing a huge number of errors, compared to others. The annual incidence of uncorrectable errors was 1.3% per machine and 0.22% per DIMM.

The conclusion we draw is that error correcting codes are crucial for reducing the large number of memory errors to a manageable number of uncorrectable errors. In fact, we found that platforms with more powerful error codes (chipkill versus SECDED) were able to reduce uncorrectable error rates by a factor of 4–10 over the less powerful codes. Nonetheless, the remaining incidence of 0.22% per DIMM per year makes a crash-tolerant application layer indispensable for large-scale server farms.
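To put those numbers in perspective, here's a back-of-the-envelope sketch using only the figures quoted above. The DIMM size (1 GiB) and fleet size (10,000 DIMMs) are hypothetical examples, not from the study:

```python
# FIT = failures in time per 10^9 device-hours (as defined in the quote).
BILLION_HOURS = 1e9
HOURS_PER_YEAR = 365 * 24  # 8760

def fit_to_errors_per_year(fit_per_mbit, mbits):
    """Convert a per-Mbit FIT rate into expected errors per DIMM per year."""
    return fit_per_mbit * mbits * HOURS_PER_YEAR / BILLION_HOURS

# Hypothetical 1 GiB DIMM = 8192 Mbit, at the low end of the quoted
# median range (778 FIT/Mbit) for DIMMs that see errors at all.
per_dimm_correctable = fit_to_errors_per_year(778, 8192)
# ~56 correctable errors per year -- all silently fixed by ECC.

# Uncorrectable errors: the quoted 0.22% per-DIMM annual incidence,
# scaled to a hypothetical 10,000-DIMM server farm.
fleet_uncorrectable = 10_000 * 0.0022
# ~22 expected crash-level events per year across the fleet.
```

This is the asymmetry the thread is arguing about: at fleet scale you expect a steady trickle of uncorrectable errors every year, while a single home machine with a couple of DIMMs expects one roughly every few hundred DIMM-years.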
Posted 05 December 2012 - 22:16
For a real business server? Not using ECC is silly.
For your home server? Using ECC is probably a waste of money.
Posted 05 December 2012 - 22:20