Benchmarking Self-Supervised Learning on STL-10: SimCLR vs. BYOL

Authors

Siddharth Maurya

Department of Software Engineering, Delhi Technological University, India

Vijay Kumar

Department of Software Engineering, Delhi Technological University, India

Article Information

DOI: 10.51584/IJRIAS.2025.10120050

Subject Category: Artificial Intelligence

Volume/Issue: 10/12 | Page No: 649-659

Publication Timeline

Submitted: 2025-12-25

Accepted: 2025-12-30

Published: 2026-01-15

Abstract

Self-supervised learning (SSL) has emerged as an effective paradigm for learning visual representations without reliance on labeled data. This study presents a controlled benchmark of two widely adopted SSL methods, SimCLR and BYOL, evaluated on the STL-10 dataset. Both methods are implemented using an identical ResNet-18 backbone and trained under matched computational and optimization settings. Representation quality is assessed through linear probing and k-NN classification. Under these constraints, SimCLR demonstrates stronger performance than BYOL, achieving a linear probe accuracy of 71.21% compared to 69.90% for BYOL. These results emphasize practical considerations in SSL benchmarking and highlight performance trade-offs that arise under resource-limited training regimes.

Keywords

BYOL, SimCLR, SSL


References

1. Chen, T., Kornblith, S., Norouzi, M., & Hinton, G. (2020). A Simple Framework for Contrastive Learning of Visual Representations. International Conference on Machine Learning (ICML).

2. Grill, J.-B., Strub, F., Altché, F., et al. (2020). Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning. NeurIPS.

3. Chen, X., & He, K. (2021). Exploring Simple Siamese Representation Learning (SimSiam). CVPR, pp. 3-7.

4. Caron, M., Misra, I., Mairal, J., et al. (2021). Emerging Properties in Self-Supervised Vision Transformers (DINO). ICCV, pp. 1-6.

5. Qiang, W., Wang, J., Zheng, C., et al. (2025). On the Universality of Self-Supervised Learning. arXiv:2405.01053v5, May 2025.

6. Dey, D., Edher, H., Rao, L.M., & Saini, D.K. (2026). Self-supervised Learning in Image Classification. In Senjyu, T., So-In, C., & Joshi, A. (Eds.), Smart Trends in Computing and Communications (SmartCom 2025), Lecture Notes in Networks and Systems, vol. 1464. Springer, Singapore. https://doi.org/10.1007/978-981-96-7520-3_23

7. Khan, A., AlBarri, S., & Manzoor, M. A. (2022). Contrastive Self-Supervised Learning: A Survey on Different Architectures. 2022 2nd International Conference on Artificial Intelligence (ICAI), Islamabad, Pakistan, pp. 1-6.

8. Purushwalkam, S., & Gupta, A. (2020). Demystifying Contrastive Self-Supervised Learning. ECCV Workshops, pp. 1-4.

9. He, K., Fan, H., Wu, Y., Xie, S., & Girshick, R. (2020). Momentum Contrast for Unsupervised Visual Representation Learning. CVPR, pp. 1-8.

10. van den Oord, A., Li, Y., & Vinyals, O. (2018). Representation Learning with Contrastive Predictive Coding. arXiv preprint arXiv:1807.03748.

11. Addepalli, S., Bhogale, K., Dey, P., & Babu, R.V. (2022). Towards Efficient and Effective Self-supervised Learning of Visual Representations. In Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., & Hassner, T. (Eds.), Computer Vision – ECCV 2022, Lecture Notes in Computer Science, vol. 13691. Springer, Cham. https://doi.org/10.1007/978-3-031-19821-2_30

12. Marks, M., Knott, M., Kondapaneni, N., Cole, E., Defraeye, T., Perez-Cruz, F., & Perona, P. (2024). A Closer Look at Benchmarking Self-Supervised Pre-training with Image Classification. July 2024.
