The paper explores connections between asymptotic complexity and generalised entropy. The asymptotic complexity of a language (a set of finite or infinite strings) formalises the difficulty of predicting the next element of a sequence: it is the loss per element incurred by a strategy that is asymptotically optimal for that language. Generalised entropy extends Shannon entropy to arbitrary loss functions; it is the optimal expected loss given a distribution on the possible outcomes. It turns out that the set of tuples of asymptotic complexities of a language w.r.t. different loss functions can be described by means of the generalised entropies corresponding to those loss functions.
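The notion of generalised entropy above (optimal expected loss under a distribution on outcomes) can be illustrated with a minimal numerical sketch. This is not the paper's formalism; the function names are hypothetical, outcomes are assumed binary, and the infimum over predictions is approximated by a grid search.

```python
import math

def generalised_entropy(p, loss, grid_size=10001):
    """Approximate the generalised entropy of a Bernoulli(p) distribution:
    the infimum over predictions gamma in [0, 1] of the expected loss
    p * loss(1, gamma) + (1 - p) * loss(0, gamma).
    (Grid-search approximation; a sketch only.)"""
    best = float("inf")
    for i in range(grid_size):
        gamma = i / (grid_size - 1)
        expected = p * loss(1, gamma) + (1 - p) * loss(0, gamma)
        best = min(best, expected)
    return best

def square_loss(omega, gamma):
    # Square loss: (outcome - prediction)^2
    return (omega - gamma) ** 2

def log_loss(omega, gamma):
    # Logarithmic loss; clamp gamma away from 0 and 1 to avoid log(0)
    eps = 1e-12
    gamma = min(max(gamma, eps), 1 - eps)
    return -math.log(gamma) if omega == 1 else -math.log(1 - gamma)
```

For square loss the optimal prediction is gamma = p and the generalised entropy is p(1 - p); for logarithmic loss it recovers Shannon entropy (in nats), e.g. log 2 at p = 1/2.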
A previous version of this paper was published in: Proceedings of the 20th Annual Conference on Learning Theory, COLT 2007, Lecture Notes in Computer Science, vol. 4539, Springer, 2007.