Local Algorithms & Error-Correction
Madhu Sudan
MIT CSAIL
The increasing imbalance between the size of data and the computational
power available to users has led to the notion of "sublinear-time algorithms":
algorithms that run in time less than the size of the data they
process. This notion seems especially relevant to
error detection and correction, where it suggests that one could
get the reliability benefits of packaging data in large blocks without
paying proportionately in the time needed for error detection and decoding.
Codes that allow for such effects are known as locally testable codes
and locally decodable codes.
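As a toy illustration of local decodability (my own, not part of the talk), the classical Hadamard code encodes an n-bit message a as the evaluations of the linear map x -> <a, x> mod 2 over all of {0,1}^n; any single message bit can then be recovered with just two queries to a corrupted codeword, by pairing a random position with its shift. The sketch below assumes a small error rate and amplifies by majority vote; all names are illustrative.

```python
import random

def hadamard_encode(a, n):
    # Codeword: evaluations of x -> <a, x> mod 2 at every x in {0,1}^n.
    return [bin(a & x).count("1") % 2 for x in range(1 << n)]

def local_decode_bit(word, n, i, trials=41):
    # Recover bit i of the message with 2 queries per trial:
    # for random x, word[x] ^ word[x ^ e_i] equals <a, e_i> = a_i
    # whenever neither queried position is corrupted. Majority vote
    # over independent trials drives down the failure probability.
    votes = 0
    for _ in range(trials):
        x = random.randrange(1 << n)
        votes += word[x] ^ word[x ^ (1 << i)]
    return int(votes > trials // 2)
```

Note the tradeoff typical of local decoding: each trial touches only two of the 2^n codeword positions, but the Hadamard code's rate (n / 2^n) is very poor, which is exactly the tension the talk's open questions are about.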
An ideal locally testable code would let one store vast amounts of
data at constant rate, with the probability of losing the data decaying
exponentially in its length, while estimating the number of errors would take
roughly as much time as reading a single bit of the data! Are such codes
possible? So far we do not know, but surprisingly good codes are known.
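For local testability, the classical starting point is the Blum-Luby-Rubinfeld linearity test, which probes a purported Hadamard codeword in only three positions per trial and rejects words that are far from every codeword with noticeable probability. The sketch below is my own illustration of that test, not code from the talk.

```python
import random

def blr_linearity_test(word, n, trials=100):
    # Blum-Luby-Rubinfeld test: a word of length 2^n is a Hadamard
    # codeword iff it is linear, i.e. word[x] ^ word[y] == word[x ^ y]
    # for all x, y. Checking random pairs reads only 3 positions per
    # trial, yet words far from linear fail some trial with high probability.
    for _ in range(trials):
        x = random.randrange(1 << n)
        y = random.randrange(1 << n)
        if word[x] ^ word[y] != word[x ^ y]:
            return False
    return True
```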
In this talk I will survey some of the literature and describe the open questions.