Real images often lie on a union of disjoint manifolds rather than on one globally connected manifold, and this can hinder the training of common Generative Adversarial Networks (GANs). We first show that single-generator GANs cannot correctly model a distribution supported on a disconnected manifold, and investigate the consequences of this limitation. Next, we show how using a collection of generators can address this problem. Finally, we explain the serious issues caused by placing a fixed prior over the collection of generators, and propose a novel approach for learning the prior and inferring the necessary number of generators without any supervision.
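
As a rough illustration of the multi-generator setup described above, the PyTorch sketch below draws each fake sample from one of several generators chosen according to a learnable categorical prior. The class name, network sizes, and direct `Categorical` sampling are illustrative assumptions, not the paper's implementation; in practice, training the prior's parameters requires a gradient estimator (e.g., a score-function or Gumbel-softmax relaxation), which is omitted here for brevity.

```python
import torch
import torch.nn as nn

class GeneratorCollection(nn.Module):
    """A set of generators with a learnable categorical prior over them.
    A minimal sketch; all names and sizes are hypothetical."""

    def __init__(self, num_generators=10, latent_dim=64, data_dim=784):
        super().__init__()
        self.generators = nn.ModuleList(
            nn.Sequential(
                nn.Linear(latent_dim, 256),
                nn.ReLU(),
                nn.Linear(256, data_dim),
                nn.Tanh(),
            )
            for _ in range(num_generators)
        )
        # Unnormalized log-probabilities of the prior over generators,
        # learned jointly with the networks instead of being held fixed.
        self.prior_logits = nn.Parameter(torch.zeros(num_generators))
        self.latent_dim = latent_dim

    def forward(self, batch_size):
        # Choose a generator for each sample according to the current prior.
        prior = torch.distributions.Categorical(logits=self.prior_logits)
        idx = prior.sample((batch_size,))
        z = torch.randn(batch_size, self.latent_dim)
        fake = torch.stack(
            [self.generators[int(i)](z[j]) for j, i in enumerate(idx)]
        )
        return fake, idx

# Usage: generators whose prior mass shrinks toward zero are effectively
# pruned, so the number of "active" generators can be inferred during
# training rather than fixed in advance.
model = GeneratorCollection()
samples, which = model(batch_size=16)
print(samples.shape, which.shape)  # torch.Size([16, 784]) torch.Size([16])
```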