CRC errors (was: Tim's own version of the Catweasel/Compaticard/whatever)

From: Mike Cheponis <mac_at_Wireless.Com>
Date: Thu Jul 6 18:25:44 2000

No, Dwight, there is a difference between CRC and ECC. CRC will detect
errors, but ECC can correct errors, too.

For single-bit correction, the number of extra bits required is M+1, where
M is log2(the number of data bits); so for a 16-bit memory word, 21 bits
will correct single-bit errors. (One more overall parity bit, for 22 bits
total, also lets you detect double-bit errors rather than miscorrect them.)
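As a sketch of that arithmetic (my own illustration; the function names are hypothetical, not anything from a real controller): a classic Hamming code puts 5 check bits at the power-of-two positions of a 21-bit word, and the syndrome of a corrupted word directly names the bad bit position.

```python
# Single-error-correcting Hamming code for a 16-bit data word:
# 16 data bits + 5 check bits (log2(16) + 1) = 21 bits total.

def hamming_encode(data_bits):
    """data_bits: list of 16 ints (0/1). Returns a 21-bit codeword."""
    n = 21
    code = [0] * (n + 1)              # index 0 unused; positions 1..21
    parity_pos = {1, 2, 4, 8, 16}     # check bits live at powers of two
    it = iter(data_bits)
    for pos in range(1, n + 1):
        if pos not in parity_pos:
            code[pos] = next(it)
    for p in parity_pos:              # each check bit covers positions
        code[p] = sum(code[pos] for pos in range(1, n + 1)   # whose index
                      if pos & p and pos != p) % 2           # has bit p set
    return code[1:]

def hamming_correct(codeword):
    """Returns (corrected codeword, syndrome). Syndrome 0 means no error;
    a nonzero syndrome is the 1-based position of the flipped bit."""
    code = [0] + list(codeword)
    syndrome = 0
    for p in (1, 2, 4, 8, 16):
        if sum(code[pos] for pos in range(1, 22) if pos & p) % 2:
            syndrome += p
    if syndrome:
        code[syndrome] ^= 1           # flip the single bad bit back
    return code[1:], syndrome
```

Note that with plain distance-3 Hamming like this, a double-bit error produces a nonzero syndrome that points at the wrong bit, which is why the extra overall parity bit is needed to tell the two cases apart.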

There are also interleaving games you can play with your bits, so that a
burst error (like the 12-bits-in-a-row case you describe) can in fact be
corrected.
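A toy sketch of the interleaving trick (my own illustration, not anything specific from the post): write the codewords out column-wise, so a contiguous burst in the stream touches each codeword at most once, turning one long burst into many single-bit errors that a single-error-correcting code can fix.

```python
# Interleave bits from several codewords column-by-column, so a burst
# of up to len(words) consecutive stream bits hits each word at most once.

def interleave(words):
    """words: list of equal-length bit lists. Emit column-by-column."""
    return [w[i] for i in range(len(words[0])) for w in words]

def deinterleave(stream, n_words):
    """Inverse of interleave: regroup the stream back into n_words words."""
    word_len = len(stream) // n_words
    return [[stream[i * n_words + w] for i in range(word_len)]
            for w in range(n_words)]
```

With a depth of 12 words, a 12-bit burst in the interleaved stream lands exactly one error in each word, which single-bit ECC on each word can then repair.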

Coding Theory is a deep subject, with many advanced types of codes for
particular error probability density functions. And I'm certainly no
expert at it!

-Mike Cheponis

On Thu, 6 Jul 2000, Dwight Elvey wrote:

> mann_at_pa.dec.com (Tim Mann) wrote:
> >
> > Another neat trick might be to notice when there is a CRC error and/or
> > a clock violation, and in that case backtrack to a recent past decision
> > where the second most likely alternative was close to the most likely,
> > try it the other way, and see if the result looks better. Obviously one
> > can't overdo that or you'll just generate random data with a CRC that
> > matches by chance, but since the CRC is 16 bits, I'd think it should be
> > OK to try a few different likely guesses to get it to match.
>
> Hi
> CRCs are quite good at fixing a single small burst. As I recall,
> CRC32 can fix a single error burst up to 12 bits long. The
> error-correcting method is based on the cycle length of the original
> polynomial relative to the length of the data block. What this
> means is that if you have a burst longer than 12 bits, it is
> more likely that the errors will appear to be outside the data
> block than within it. In that case you have what is called an
> uncorrectable error: no amount of fiddling will give you a
> correction. If there is an error in the CRC data itself, the same
> ratio applies. As an example, with CRC32 and a 512-byte data block,
> the probability that an uncorrectable error (a burst longer than
> 12 bits) will nevertheless look correctable is 512/(2^32 - 1), or
> about 1.2x10^-7. All errors confined to a single 12-bit window are
> 100% correctable.
> Does this make any sense?
> Dwight
>
>
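The trial-correction idea Tim describes above (flip a low-confidence bit, re-check the 16-bit CRC, accept only if it matches) can be sketched roughly like this. This is my own illustration, not code from any actual decoder, and I'm assuming the common CRC-16/CCITT polynomial (x^16 + x^12 + x^5 + 1), which may not be the one a given floppy controller uses:

```python
def crc16_ccitt(data, crc=0xFFFF):
    """Bitwise CRC-16/CCITT (polynomial 0x1021, initial value 0xFFFF)."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def try_corrections(data, stored_crc, suspect_bits):
    """Flip one suspect (low-confidence) bit at a time; return the first
    variant whose CRC matches the stored CRC, or None if none does."""
    if crc16_ccitt(data) == stored_crc:
        return bytes(data)               # already consistent
    for pos in suspect_bits:
        trial = bytearray(data)
        trial[pos // 8] ^= 0x80 >> (pos % 8)
        if crc16_ccitt(trial) == stored_crc:
            return bytes(trial)
    return None
```

As both Tim and Dwight note, this only stays safe if the list of suspect bits is kept short: with a 16-bit CRC, each random guess has roughly a 1-in-65536 chance of matching by accident.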
Received on Thu Jul 06 2000 - 18:25:44 BST

This archive was generated by hypermail 2.3.0 : Fri Oct 10 2014 - 23:32:56 BST