Title: Representing maximum classes and sample compression
Speaker: Hyam Rubinstein
Abstract: This is joint work with Ben Rubinstein. In statistical learning theory, the aim is to predict a concept from an appropriate class by choosing the best fit to a given sample. A classical result is that a concept class is learnable (in the sense of Valiant) if and only if it has finite "VC dimension", a measure of its complexity. An interesting question is whether learnability is also equivalent to having a finite-size compression scheme. Ben and I have recently shown that every maximum concept class can be represented by a simple hyperplane arrangement, resolving a conjecture of Kuzmin and Warmuth. This gives a nice geometric way of showing that maximum classes admit compression schemes. The background will be described in a talk suitable for a general audience, covering ideas from combinatorics, geometry and topology.