A lightweight computer vision project trained on the Kaggle Rock–Paper–Scissors dataset. Detects gestures in real time with YOLOv8 — optimized for accuracy, speed, and simplicity.
Predicts rock, paper, and scissors gestures from images or live video using YOLOv8 and OpenCV.
A YOLOv8n model fine-tuned on Kaggle’s Rock–Paper–Scissors dataset, with data augmentation for higher precision.
Lightweight and modular — can be integrated into gameplay logic or visual dashboards easily.
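The gameplay integration mentioned above could look like a small rules helper that consumes the model's predicted class names. This is a minimal sketch, not code from the repository — the label strings and the `judge` function are assumptions:

```python
# Hypothetical gameplay helper: maps two predicted class names
# (e.g. the top-1 labels from two YOLOv8 detections) to a round outcome.
# Label names ("rock", "paper", "scissors") are assumed to match the
# dataset's class names.

BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def judge(player: str, opponent: str) -> str:
    """Return 'player', 'opponent', or 'draw' for two predicted labels."""
    if player == opponent:
        return "draw"
    if BEATS[player] == opponent:
        return "player"
    return "opponent"

# Example: feed each detection's class name into judge().
print(judge("rock", "scissors"))  # prints "player"
```

Keeping the rules in a plain dict like this makes the helper easy to drop into a live OpenCV loop or a dashboard callback without touching the model code.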