Robust low-degree testing
Madhu Sudan (Microsoft Research)

Given a function f: F^m -> F, for a finite field F and an integer d, a low-degree tester is a procedure that randomly samples the value of f on a few (potentially correlated) points and decides to accept or reject f based on this local view of f. Ideally the tester should accept polynomials of degree at most d with probability 1, and reject functions that are far (in normalized Hamming distance) from every degree-d polynomial with probability growing with that distance, and it should do so while keeping the local view small. A robust tester is even more ambitious: it asks that the local views themselves be far from acceptable views (again in normalized Hamming distance) whenever the function being tested is far from every acceptable function. Theoretical computer science has long been interested in the study of low-degree testing: good tests and analyses have found applications in probabilistically checkable proofs, locally testable codes, algebraic pseudorandomness, and most recently in the construction of some extremal small-set expanders.

In the talk I will briefly introduce the problem, mention some of these connections, and explain some basic lower bounds on the size of the local views that a tester can hope to work with. I will summarize results showing how testers come close to these lower limits while being robust. Time permitting, I will describe a recent approach to the robust analysis of low-degree tests via a very general abstract view that uses only the facts that low-degree polynomials are invariant under affine transformations of the domain (F^m) and that they form linear error-correcting codes of good distance.

Based on a long sequence of works, including some recent work with Alan Guo (MIT) and Elad Haramaty (Northeastern).
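
For concreteness, the following is a minimal Python sketch of a line tester in the spirit of the classic tests mentioned above: pick a random line in F^m, read f on that line (the local view), and accept iff the restriction agrees with some univariate polynomial of degree at most d. The field size p, the parameters m and d, and all function names here are illustrative assumptions, not details from the talk.

    # Minimal sketch of a "random line" low-degree test over a prime field F_p.
    # Illustrative only: p, m, d and all names are assumptions, not from the talk.
    import random

    p = 13   # prime field size (assumed)
    m = 3    # number of variables (assumed)
    d = 2    # degree bound being tested (assumed)

    def restrict_to_line(f, a, b):
        """Values of t -> f(a + t*b), i.e., f read along the line through a in direction b."""
        return [f(tuple((a[i] + t * b[i]) % p for i in range(m))) for t in range(p)]

    def is_low_degree_univariate(values, d):
        """Check that the p values agree with the degree-<=d polynomial interpolated
        through the first d+1 points (Lagrange interpolation mod p)."""
        xs = list(range(d + 1))
        def interpolated(t):
            total = 0
            for i, xi in enumerate(xs):
                num, den = 1, 1
                for j, xj in enumerate(xs):
                    if i != j:
                        num = (num * (t - xj)) % p
                        den = (den * (xi - xj)) % p
                total = (total + values[xi] * num * pow(den, -1, p)) % p
            return total
        return all(values[t] == interpolated(t) for t in range(p))

    def line_test(f):
        """One run of the tester: the local view is f on a uniformly random line;
        accept iff that view has degree at most d."""
        a = tuple(random.randrange(p) for _ in range(m))
        b = tuple(random.randrange(p) for _ in range(m))
        return is_low_degree_univariate(restrict_to_line(f, a, b), d)

    # A degree-2 polynomial restricts to a degree-<=2 univariate polynomial on
    # every line, so it is always accepted.
    f = lambda x: (x[0] * x[1] + 3 * x[2]) % p
    print(all(line_test(f) for _ in range(20)))  # expected: True

In this sketch the local view has size |F| (all points of one line); the lower bounds and near-optimal robust testers discussed in the talk concern how small such views can be made.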