Attentive Semantic Video Generation Using Captions
T. Marwah, G. Mittal, V. N. Balasubramanian
Published in: Proceedings of the IEEE International Conference on Computer Vision (ICCV)
2017
Volume: 2017-October
Pages: 1435 - 1443
Abstract
This paper proposes a network architecture to perform variable-length semantic video generation using captions. We adopt a new perspective towards video generation in which the captions are combined with the long-term and short-term dependencies between video frames, so that the video is generated in an incremental manner. Our experiments demonstrate the network's ability to distinguish between objects, actions and interactions in a video and to combine them to generate videos for unseen captions. The network also exhibits the capability to perform spatio-temporal style transfer when asked to generate videos for a sequence of captions. We further show that the network's ability to learn a latent representation allows it to generate videos in an unsupervised manner and to perform other tasks such as action recognition. © 2017 IEEE.
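The abstract describes incremental, caption-conditioned frame generation that mixes long-term state with short-term (frame-to-frame) dependencies and attention over the caption. The following is a minimal illustrative sketch of that idea, not the authors' architecture: the module names, layer sizes, soft-attention scheme, and simple fully-connected encoder/decoder are all assumptions made for brevity.

```python
# Minimal sketch (assumed design, not the paper's code): a recurrent generator that,
# at every step, attends over caption word embeddings and conditions on the previous
# frame to produce the next frame incrementally.
import torch
import torch.nn as nn


class CaptionConditionedVideoGenerator(nn.Module):
    def __init__(self, caption_dim=300, hidden_dim=256, frame_channels=3, frame_size=32):
        super().__init__()
        self.frame_size = frame_size
        self.frame_channels = frame_channels
        # Long-term dependency: recurrent state carried across all generated frames.
        self.rnn = nn.GRUCell(caption_dim + hidden_dim, hidden_dim)
        # Soft attention over caption word embeddings (an assumption in this sketch).
        self.attn = nn.Linear(hidden_dim + caption_dim, 1)
        # Short-term dependency: encode the previous frame before the next step.
        self.frame_enc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(frame_channels * frame_size * frame_size, hidden_dim),
            nn.ReLU(),
        )
        # Decode the recurrent state into the next frame.
        self.frame_dec = nn.Sequential(
            nn.Linear(hidden_dim, frame_channels * frame_size * frame_size),
            nn.Tanh(),
        )

    def forward(self, caption_words, num_frames):
        # caption_words: (batch, num_words, caption_dim) word embeddings of the caption.
        batch, num_words, _ = caption_words.shape
        h = caption_words.new_zeros(batch, self.rnn.hidden_size)
        prev_frame = caption_words.new_zeros(
            batch, self.frame_channels, self.frame_size, self.frame_size)
        frames = []
        for _ in range(num_frames):
            # Attend over caption words given the current recurrent state.
            scores = self.attn(torch.cat(
                [h.unsqueeze(1).expand(-1, num_words, -1), caption_words], dim=-1))
            weights = torch.softmax(scores, dim=1)           # (batch, num_words, 1)
            context = (weights * caption_words).sum(dim=1)   # (batch, caption_dim)
            # Combine caption context with the encoded previous frame.
            h = self.rnn(torch.cat([context, self.frame_enc(prev_frame)], dim=-1), h)
            prev_frame = self.frame_dec(h).view(
                batch, self.frame_channels, self.frame_size, self.frame_size)
            frames.append(prev_frame)
        return torch.stack(frames, dim=1)  # (batch, num_frames, C, H, W)


# Example: a 16-frame clip for two (random) 8-word caption embeddings.
gen = CaptionConditionedVideoGenerator()
video = gen(torch.randn(2, 8, 300), num_frames=16)
print(video.shape)  # torch.Size([2, 16, 3, 32, 32])
```

Because each new frame is produced from the recurrent state and the previous frame, the loop can be run for an arbitrary number of steps, which is what allows variable-length generation in this sketch.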
About the journal
Journal: Proceedings of the IEEE International Conference on Computer Vision
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISSN: 1550-5499