Image Captioning with Keras
An image captioning system that generates natural-language captions for any image.
The architecture of the model is inspired by “Show and Tell” [1] by Vinyals et al. and is built with the Keras library.
The project also contains code for an attention LSTM layer in the spirit of “Show, Attend and Tell” [2], although it is not integrated into the model.
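For reference, below is a minimal sketch of one common Keras formulation of such a captioning model (the “merge” variant, in which CNN image features and the partial caption are combined just before the output layer). All dimensions, layer sizes, and names here are illustrative assumptions, not necessarily what this repository uses; note also that Vinyals et al. instead feed the image features into the LSTM as its first input.

```python
# A minimal sketch of a "Show and Tell"-style captioning decoder in Keras
# (merge variant). All sizes below are assumptions, for illustration only.
from tensorflow.keras.layers import Input, Dense, Embedding, LSTM, Dropout, add
from tensorflow.keras.models import Model

vocab_size = 8000   # assumed vocabulary size
max_len = 34        # assumed maximum caption length
feature_dim = 4096  # assumed CNN feature size (e.g. VGG16 fc2 activations)

# Image branch: pre-extracted CNN features projected to the LSTM size.
img_input = Input(shape=(feature_dim,))
img_dense = Dense(256, activation='relu')(Dropout(0.5)(img_input))

# Text branch: the partial caption fed through an embedding and an LSTM.
seq_input = Input(shape=(max_len,))
seq_embed = Embedding(vocab_size, 256, mask_zero=True)(seq_input)
seq_lstm = LSTM(256)(Dropout(0.5)(seq_embed))

# Merge both branches and predict the next word of the caption.
merged = add([img_dense, seq_lstm])
output = Dense(vocab_size, activation='softmax')(Dense(256, activation='relu')(merged))

model = Model(inputs=[img_input, seq_input], outputs=output)
model.compile(loss='categorical_crossentropy', optimizer='adam')
```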
The model is trained on the Flickr8k dataset, although it can also be trained on larger datasets such as Flickr30k or MS COCO.
The model has been trained for 20 epochs on the 6,000 training samples of the Flickr8k dataset and achieves BLEU-1 ≈ 0.59 on the 1,000 testing samples.
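BLEU-1 here is unigram precision of the generated captions against the reference captions. A minimal sketch of how such a score can be computed with NLTK (the repository's own evaluation code may differ):

```python
# A minimal sketch of BLEU-1 scoring with NLTK; the example tokens below
# are made up for illustration.
from nltk.translate.bleu_score import corpus_bleu

# Each image has several reference captions and one generated hypothesis,
# all tokenized into word lists.
references = [[['a', 'dog', 'runs', 'through', 'the', 'water'],
               ['a', 'dog', 'is', 'swimming']]]
hypotheses = [['a', 'dog', 'is', 'running', 'through', 'the', 'water']]

# weights=(1, 0, 0, 0) restricts the score to unigram precision (BLEU-1).
print('BLEU-1: %.2f' % corpus_bleu(references, hypotheses, weights=(1, 0, 0, 0)))
```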
The required packages can be installed easily with:

```
pip install -r requirements.txt
```
To generate a caption for a new image with the pretrained model, download the trained weights `model_weight.h5` into the `models` directory, then run:

```
python prepare_data.py
python eval_model.py -i [img-path]
```
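At inference time, models of this kind typically generate the caption one word at a time, feeding each predicted word back in until an end token is produced. Below is a minimal sketch of that greedy decoding loop; `model`, `tokenizer`, `max_len`, and the `startseq`/`endseq` markers are assumptions, not necessarily this repository's actual names.

```python
# A minimal sketch of greedy caption decoding. The start/end tokens and
# helper names are assumptions for illustration.
import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences

def generate_caption(model, tokenizer, photo_features, max_len):
    caption = 'startseq'
    for _ in range(max_len):
        # Encode the caption-so-far and pad it to the model's input length.
        seq = tokenizer.texts_to_sequences([caption])[0]
        seq = pad_sequences([seq], maxlen=max_len)
        # Predict the next word and pick the most probable one (greedy).
        probs = model.predict([photo_features, seq], verbose=0)
        word = tokenizer.index_word.get(int(np.argmax(probs)))
        if word is None or word == 'endseq':
            break
        caption += ' ' + word
    return caption.replace('startseq ', '')
```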
After the requirements have been installed, the process from training to testing is straightforward. The commands to run are:

```
python prepare_data.py
python train_model.py
python eval_model.py
```

After training, evaluation on an example image can be done by running:

```
python eval_model.py -m [model-checkpoint] -i [img-path]
```
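Under the hood, a preparation step like `prepare_data.py` typically runs every image through a pretrained CNN once and caches the resulting feature vectors for training. A minimal sketch of that step, assuming a VGG16 encoder (the actual encoder and file layout in this repository may differ):

```python
# A minimal sketch of one-time CNN feature extraction; the choice of VGG16
# and the 224x224 input size are assumptions for illustration.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.image import load_img, img_to_array

# Drop the classification head; keep the 4096-d fc2 activations as features.
base = VGG16()
encoder = Model(inputs=base.input, outputs=base.layers[-2].output)

def extract_features(img_path):
    img = img_to_array(load_img(img_path, target_size=(224, 224)))
    img = preprocess_input(np.expand_dims(img, axis=0))
    return encoder.predict(img, verbose=0)  # shape: (1, 4096)
```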
| Image | Caption |
| --- | --- |
| ![]() | **Generated Caption:** A white and black dog is running through the water |
| ![]() | **Generated Caption:** man is skiing on snowy hill |
| ![]() | **Generated Caption:** man in red shirt is walking down the street |
[1] Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan. “Show and Tell: A Neural Image Caption Generator.” arXiv:1411.4555.
[2] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, Yoshua Bengio. “Show, Attend and Tell: Neural Image Caption Generation with Visual Attention.” arXiv:1502.03044.
MIT License. See LICENSE file for details.