Generative adversarial networks (GANs) have shown promise in generating images and videos; however, they suffer from mode collapse, which prevents them from generating complex multi-modal data. In this paper, we propose an approach to mitigate mode collapse in GANs: multiple generators are used to capture the various modes, and each generator is encouraged to learn a distinct mode through a novel loss function. The generators are trained sequentially to effectively cover the modes. The effectiveness of the proposed approach is demonstrated through experiments on a synthetic data set, on image data sets such as MNIST and Fashion-MNIST, and on multi-topic document modelling. © 2020, Springer Nature Switzerland AG.
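The abstract does not specify the loss function or the sequential training procedure, so the following is only a toy illustration of the core idea: generators are fit one at a time, and each new one is steered toward modes the earlier ones have missed. Here each "generator" is reduced to a single mode location on a 1-D Gaussian mixture, and the adversarial loss is replaced by a residual-weighted mean-shift fit; all function names and parameters are hypothetical, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-modal target: a 1-D mixture of three Gaussians.
data = np.concatenate([
    rng.normal(-4.0, 0.3, 500),
    rng.normal(0.0, 0.3, 500),
    rng.normal(4.0, 0.3, 500),
])

def fit_next_generator(data, prev_modes, sigma=0.5, iters=50):
    """Stand-in for training one generator: locate the data mode least
    covered by previously trained generators (prev_modes).

    Down-weighting already-covered samples plays the role of a loss that
    pushes each new generator toward a different mode; this is an
    illustrative assumption, not the paper's actual objective."""
    covered = np.zeros_like(data)
    for m in prev_modes:
        covered = np.maximum(covered, np.exp(-0.5 * ((data - m) / sigma) ** 2))
    resid = 1.0 - covered            # high where no generator fits yet
    mu = data[np.argmax(resid)]      # start at the least-covered sample
    for _ in range(iters):           # weighted mean-shift refinement
        w = resid * np.exp(-0.5 * ((data - mu) / sigma) ** 2)
        mu = np.sum(w * data) / (np.sum(w) + 1e-12)
    return mu

# Sequential training: each generator claims a mode the others missed.
modes = []
for _ in range(3):
    modes.append(fit_next_generator(data, modes))
print(sorted(round(m, 2) for m in modes))  # one value near each of -4, 0, 4
```

In a full GAN instantiation, the residual weighting above would be replaced by an adversarial loss term that penalizes a generator for producing samples already covered by its predecessors.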