Gradient Dude

Can Vision Transformers Learn without Natural Images? YES!🔥

This is very exciting. It was shown that we can pretrain Vision Transformers purely on a synthetic fractal dataset, without any manual annotations, and achieve downstream-task performance similar to self-supervised pretraining on ImageNet, and comparable to supervised pretraining on other datasets such as Places.

The authors also pretrained regular ResNets on their synthetic fractal data. That works fairly well too, although DeiT transformers perform better.

Overall, this is good news. If we can come up with clever approaches to synthetic data generation, then we can generate arbitrarily large datasets for free!
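To make the "free synthetic data" idea concrete: the fractal dataset is built from iterated function systems (IFS), where each image is rendered by repeatedly applying randomly sampled 2D affine maps to a point (the "chaos game"). Below is a minimal sketch of that idea, not the paper's actual generation code; function names, coefficient ranges, and the rasterization are my own simplifications:

```python
import random

def sample_ifs(n_maps=4, rng=None):
    """Sample a random IFS: a list of 2D affine maps (a, b, c, d, e, f)
    applying (x, y) -> (a*x + b*y + e, c*x + d*y + f)."""
    rng = rng or random.Random(0)
    maps = []
    for _ in range(n_maps):
        # Small linear coefficients keep every map contractive
        # (|a| + |b| <= 0.9), so the orbit stays bounded.
        a, b, c, d = (rng.uniform(-0.45, 0.45) for _ in range(4))
        e, f = rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)
        maps.append((a, b, c, d, e, f))
    return maps

def render_fractal(ifs, n_points=10000, size=64, rng=None):
    """Chaos game: iterate a point under randomly chosen maps,
    then rasterize the visited points into a size x size binary grid."""
    rng = rng or random.Random(1)
    x, y = 0.0, 0.0
    pts = []
    for i in range(n_points):
        a, b, c, d, e, f = rng.choice(ifs)
        x, y = a * x + b * y + e, c * x + d * y + f
        if i > 20:  # skip burn-in iterations before the point reaches the attractor
            pts.append((x, y))
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    xmin, xmax = min(xs), max(xs)
    ymin, ymax = min(ys), max(ys)
    grid = [[0] * size for _ in range(size)]
    for px, py in pts:
        col = min(size - 1, int((px - xmin) / (xmax - xmin + 1e-9) * size))
        row = min(size - 1, int((py - ymin) / (ymax - ymin + 1e-9) * size))
        grid[row][col] = 1
    return grid
```

Each sampled IFS acts as one "class" of an annotation-free dataset: the map parameters define the label, and rendering with different random seeds yields as many training images per class as you like.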

📖 Paper
🌐 Proj page
📦 The fractal dataset is described in this paper.

