Powered by PyTorch
Built on top of PyTorch, which allows using all of its components.
SOTA Self-Supervision methods
Reproducible reference implementations of SOTA self-supervision approaches (such as SimCLR, MoCo, PIRL, SwAV, etc.) and their reusable components. Supervised training is also supported.
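To give a flavor of the components these approaches share, the snippet below is a minimal sketch of the SimCLR contrastive (NT-Xent) loss in plain PyTorch. It is an illustration only, not VISSL's implementation; the function name and default temperature are assumptions.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    # z1, z2: (N, D) projections of two augmented views of the same N images.
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # (2N, D), unit-norm rows
    sim = torch.matmul(z, z.t()) / temperature               # (2N, 2N) scaled cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))               # drop self-similarity terms
    # The positive for row i is the other view of the same image: i+N for i<N, i-N otherwise.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)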
Benchmark tasks
A variety of benchmark tasks (linear image classification, full finetuning, semi-supervised learning, low-shot learning, nearest neighbor, object detection) are available to evaluate models.
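As an illustration of the simplest of these benchmarks, linear image classification amounts to freezing a pretrained trunk and training only a linear classifier on top of it. The sketch below shows the idea in plain PyTorch; the trunk, feature dimension, and class count are placeholder assumptions, not VISSL's benchmark code.

import torch
import torch.nn as nn

def build_linear_eval_model(trunk, feat_dim=2048, num_classes=1000):
    # Freeze every trunk parameter; only the linear head will receive gradients.
    for p in trunk.parameters():
        p.requires_grad = False
    trunk.eval()
    head = nn.Linear(feat_dim, num_classes)
    return nn.Sequential(trunk, head)   # assumes the trunk outputs flat (N, feat_dim) features

# Usage (placeholder trunk): pass only the head's parameters to the optimizer.
# model = build_linear_eval_model(my_pretrained_resnet50_trunk)
# optimizer = torch.optim.SGD(model[1].parameters(), lr=0.01, momentum=0.9)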
Scalable
Easy to train models on 1 GPU, multiple GPUs, or multiple nodes. Seamless scaling to large-scale data and model sizes with FP16, LARC, etc.
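For context, FP16 mixed-precision training in PyTorch is handled by torch.cuda.amp, shown in the sketch below; VISSL wires this (and LARC, which comes from the NVIDIA Apex package installed in the next section) through its configuration system rather than a hand-written loop, so this is only an illustration of the underlying mechanism.

import torch

scaler = torch.cuda.amp.GradScaler()

def train_step(model, batch, labels, optimizer, loss_fn):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():       # run the forward pass in mixed precision
        loss = loss_fn(model(batch), labels)
    scaler.scale(loss).backward()         # scale the loss to avoid FP16 gradient underflow
    scaler.step(optimizer)                # unscale gradients and step the optimizer
    scaler.update()
    return loss.detach()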
Get Started
Install VISSL:
via conda:

conda create -n vissl python=3.8
conda activate vissl
conda install -c pytorch pytorch=1.7.1 torchvision cudatoolkit=10.2
conda install -c vissl -c iopath -c conda-forge -c pytorch -c defaults apex vissl
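To confirm the environment is set up correctly, a quick sanity check (assuming the vissl conda environment is activated) is to import the packages:

import vissl                # should import without error inside the vissl env
import torch
print(torch.__version__)    # expected: 1.7.1, matching the install command above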
Download the SimCLR YAML config and the built-in distributed launcher:
cd /tmp/ && mkdir -p /tmp/configs/config
wget -q -O configs/__init__.py https://dl.fbaipublicfiles.com/vissl/tutorials/configs/__init__.py
wget -q -O configs/config/quick_1gpu_resnet50_simclr.yaml https://dl.fbaipublicfiles.com/vissl/tutorials/configs/quick_1gpu_resnet50_simclr.yaml
wget -q https://dl.fbaipublicfiles.com/vissl/tutorials/run_distributed_engines.py
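Optionally, the downloaded config is a plain YAML file and can be inspected before launching, for example with OmegaConf (which ships as a dependency of Hydra, VISSL's configuration system) or any YAML reader:

from omegaconf import OmegaConf

cfg = OmegaConf.load("configs/config/quick_1gpu_resnet50_simclr.yaml")
print(OmegaConf.to_yaml(cfg))   # pretty-print the full training configuration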
Try training a SimCLR model on 1 GPU:
python3 run_distributed_engines.py config=quick_1gpu_resnet50_simclr config.DATA.TRAIN.DATA_SOURCES=[synthetic]
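When the run finishes, the checkpoint is a regular torch-serialized file and can be inspected with plain PyTorch. The path below is a placeholder; the actual output location and dictionary layout depend on the run's configuration.

import torch

# Hypothetical path -- the real output directory depends on the run's config.
ckpt = torch.load("./checkpoints/model_final_checkpoint.torch", map_location="cpu")
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))    # top-level entries, e.g. model and optimizer state
else:
    print(type(ckpt))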