Variational Animal Motion Embedding
This version of VAME is deprecated and no longer maintained, and is made available here as legacy code. VAME is now being maintained at its new home at https://github.com/EthoML/VAME. There, you will find updated documentation and additional packages. Users can also access a downloadable desktop app for VAME at https://github.com/EthoML/vame-desktop.
VAME is a framework to cluster behavioral signals obtained from pose-estimation tools. It is a PyTorch-based deep learning framework that leverages recurrent neural networks (RNNs) to model sequential data. To learn the underlying complex data distribution, we use the RNN in a variational autoencoder setting to extract the latent state of the animal at every step of the input time series.
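For readers unfamiliar with this setup, the sketch below illustrates the general idea of a recurrent variational autoencoder in PyTorch. It is not VAME's actual architecture; the layer choices and sizes are purely illustrative.

```python
# Minimal sketch of an RNN-based variational autoencoder (illustrative only;
# this is not VAME's architecture, and all sizes are arbitrary).
import torch
import torch.nn as nn

class RNNVAE(nn.Module):
    def __init__(self, n_features=12, hidden_dim=64, latent_dim=16):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.from_latent = nn.Linear(latent_dim, hidden_dim)
        self.decoder = nn.GRU(n_features, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_features)

    def forward(self, x):
        # Encode the whole sequence; use the final hidden state as a summary.
        _, h = self.encoder(x)                # h: (1, batch, hidden_dim)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # Decode, conditioning the initial hidden state on the latent sample.
        h0 = self.from_latent(z).unsqueeze(0)
        dec, _ = self.decoder(x, h0)
        return self.out(dec), mu, logvar

x = torch.randn(8, 30, 12)   # (batch, time steps, pose keypoint features)
recon, mu, logvar = RNNVAE()(x)
```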
The workflow of VAME consists of five steps, which we explain in detail here.
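As a rough preview, a full run might look like the sketch below. The function names follow the legacy VAME demo and workflow guide, but treat them as assumptions; consult the workflow guide for the exact calls and arguments in your version.

```python
# Hypothetical end-to-end sketch of the five workflow steps; check the
# workflow guide for the exact function names and arguments in your version.
import vame

# 1. Initialize a project from your videos.
config = vame.init_new_project(project='my_project',
                               videos=['video-1.mp4'],
                               working_directory='.')
# 2. Align the pose data egocentrically.
vame.egocentric_alignment(config)
# 3. Build the training set from the aligned time series.
vame.create_trainset(config)
# 4. Train the recurrent variational autoencoder.
vame.train_model(config)
# 5. Evaluate the trained model and segment behavior into motifs.
vame.evaluate_model(config)
vame.pose_segmentation(config)
```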
To get started we recommend using Anaconda with Python 3.6 or higher.
Here, you can create a virtual environment to store all the dependencies necessary for VAME. You can also use the supplied VAME.yaml file: open a terminal, run `git clone https://github.com/LINCellularNeuroscience/VAME.git`, then `cd VAME`, then `conda env create -f VAME.yaml`.
Run `python setup.py install` to install VAME in your active conda environment.

Before training, make sure that you have a GPU powerful enough to train deep learning networks. In our paper, we used a single Nvidia GTX 1080 Ti GPU to train our network. A hardware guide can be found here. Once you have your hardware ready, try VAME by following the workflow guide.
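As a quick sanity check before training, the snippet below verifies that PyTorch can see a CUDA-capable GPU; this is a generic check, not part of the VAME API.

```python
# Quick check that PyTorch can see a CUDA-capable GPU before training.
import torch

if torch.cuda.is_available():
    print("GPU available:", torch.cuda.get_device_name(0))
else:
    print("No GPU found; training on CPU will be very slow.")
```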
If you want to follow an example first, you can download video-1 here and find the corresponding .csv file in our examples folder.
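If you want to inspect the pose data before running VAME, the snippet below loads such a .csv with pandas, assuming it follows the DeepLabCut export format with its three header rows; the file name is illustrative.

```python
# Inspect a pose-estimation .csv before feeding it to VAME. This assumes the
# DeepLabCut export format (three header rows: scorer, bodyparts, coords);
# the file name below is illustrative.
import pandas as pd

poses = pd.read_csv("video-1.csv", header=[0, 1, 2], index_col=0)
print(poses.shape)   # (frames, bodyparts * 3) for x, y, likelihood columns
print(poses.head())
```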
VAME was developed by Kevin Luxem and Pavol Bauer.
The development of VAME is heavily inspired by DeepLabCut.
As such, the VAME project management codebase has been adapted from the DeepLabCut codebase.
The DeepLabCut 2.0 toolbox is © A. & M.W. Mathis Labs deeplabcut.org, released under LGPL v3.0.
The implementation of the VRAE model is partially adapted from the Timeseries clustering repository developed by Tejas Lodaya.
VAME preprint: Identifying Behavioral Structure from Deep Variational Embeddings of Animal Motion
Kingma & Welling: Auto-Encoding Variational Bayes
Pereira & Silveira: Learning Representations from Healthcare Time Series Data for Unsupervised Anomaly Detection
See the LICENSE file for the full statement.