The information theory community has traditionally studied two different models for communication. The Shannon-theoretic model treats the channel's impact as random, so codes must correct most error patterns of a given weight; this is an average-case analysis. The coding-theoretic (or Hamming-theoretic) model treats the channel as adversarial, so codes must correct all error patterns of a given weight; this is a worst-case analysis. Between the two lie several different channel models which can be usefully described in the language of arbitrarily varying channels (AVCs). In an AVC, the communication channel has two inputs at each time step: one for the encoder and one for a state that is controlled by an adversary who wishes to foil the communication. The difference between average- and worst-case can be captured by changing the information available to the adversary. In this talk I will describe this model and recent results on how the adversary can and cannot benefit from partial knowledge of the transmitted codeword. In particular, I will discuss how knowledge of delayed or future encoder inputs affects the channel capacity in sometimes surprising ways.
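A minimal sketch (my own illustration, not from the talk) of why the adversary's knowledge matters: a binary modulo-additive AVC with output y = x XOR s, where the adversary chooses the state sequence s under a Hamming-weight budget. With the same budget, an adversary who sees the transmitted codeword can force a decoding error, while a "blind" adversary flipping random positions usually cannot. The codebook and parameters below are hypothetical.

```python
# Hypothetical AVC sketch: y = x XOR s, adversary's state s has bounded weight.
import random

def xor(x, s):
    """Channel law: bitwise XOR of codeword x with adversarial state s."""
    return [a ^ b for a, b in zip(x, s)]

def hamming(a, b):
    return sum(ai != bi for ai, bi in zip(a, b))

def md_decode(y, code):
    """Minimum-distance decoding over a small codebook."""
    return min(code, key=lambda c: hamming(y, c))

n = 10
c0 = [0] * n
c1 = [1] * 5 + [0] * 5           # Hamming distance 5 from c0
code = [c0, c1]
budget = 3                        # adversary may flip at most 3 bits

# Omniscient adversary: sees x = c0 and pushes y past the midpoint toward c1.
s_adv = [1, 1, 1] + [0] * 7
y_adv = xor(c0, s_adv)
assert md_decode(y_adv, code) == c1   # decoding error is forced

# Blind adversary: flips 3 random positions without seeing x.
rng = random.Random(0)
trials, ok = 200, 0
for _ in range(trials):
    s = [0] * n
    for i in rng.sample(range(n), budget):
        s[i] = 1
    if md_decode(xor(c0, s), code) == c0:
        ok += 1
# The blind attack fails most of the time: a wrong decode needs all 3 flips
# to land in the 5 positions where c0 and c1 differ.
print(ok / trials)
```

This toy example only contrasts the two extremes (omniscient vs. oblivious); the talk concerns the intermediate regimes, where the adversary has partial, delayed, or lookahead knowledge of the encoder's input.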
Anand D. Sarwate is an Assistant Professor in the ECE Department at Rutgers. He currently works on information theory, machine learning, high-dimensional statistics, and signal processing with applications in distributed systems, privacy and security, and biomedical research.