Rock Paper Scissors Detection with YOLOv8

A lightweight computer vision project trained on the Kaggle Rock–Paper–Scissors dataset. It detects hand gestures in real time with YOLOv8, and is optimized for accuracy, speed, and simplicity.

Project Highlights

Real-Time Detection

Predicts rock, paper, and scissors gestures from images or live video using YOLOv8 and OpenCV.
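
The live-video path described above can be sketched with `ultralytics` and OpenCV. This is a minimal illustration, not the project's exact script: the weights path `runs/detect/train/weights/best.pt` and the class ordering in `CLASS_NAMES` are assumptions, and the heavy imports are kept inside the loop function so the label helper works without those packages installed.

```python
# Class names in the order the model is assumed to have been trained on.
CLASS_NAMES = ["paper", "rock", "scissors"]

def label_for(class_id: int) -> str:
    """Map a predicted class index to a gesture name."""
    return CLASS_NAMES[class_id]

def run_webcam(weights: str = "runs/detect/train/weights/best.pt") -> None:
    """Stream webcam frames through the model and show annotated output."""
    # Imports kept local so the module loads without cv2/ultralytics present.
    import cv2
    from ultralytics import YOLO

    model = YOLO(weights)      # fine-tuned YOLOv8n weights (assumed path)
    cap = cv2.VideoCapture(0)  # default camera
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = model(frame, verbose=False)
        annotated = results[0].plot()  # draw boxes and labels on the frame
        cv2.imshow("rock-paper-scissors", annotated)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # quit on 'q'
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    run_webcam()
```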

Optimized Training

Fine-tuned YOLOv8n model on Kaggle’s Rock–Paper–Scissors dataset with data augmentation for higher precision.
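
A fine-tuning run along these lines might look as follows. The augmentation values below are illustrative, not the project's actual settings, and `rps/data.yaml` is a hypothetical path to a YOLO-format dataset config; the argument names (`hsv_h`, `degrees`, `fliplr`, etc.) are standard `ultralytics` training options.

```python
# Augmentation overrides for fine-tuning (illustrative values only).
AUGMENT = {
    "hsv_h": 0.015,   # hue jitter
    "hsv_s": 0.7,     # saturation jitter
    "hsv_v": 0.4,     # brightness jitter
    "degrees": 15.0,  # random rotation range
    "fliplr": 0.5,    # horizontal flip probability
}

def train(data_yaml: str = "rps/data.yaml") -> None:
    """Fine-tune the pretrained nano model on the RPS dataset."""
    # Import kept local so the module loads without ultralytics installed.
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")  # start from the pretrained nano checkpoint
    model.train(data=data_yaml, epochs=50, imgsz=640, **AUGMENT)

if __name__ == "__main__":
    train()
```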

Extendable Architecture

Lightweight and modular — can be integrated into gameplay logic or visual dashboards easily.
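
As one example of the gameplay integration mentioned above, the round-resolution rules are a small pure function that detections can be fed into (names here are illustrative):

```python
# Which gesture each gesture defeats.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def judge(player: str, opponent: str) -> str:
    """Return 'win', 'lose', or 'draw' from the player's point of view."""
    if player == opponent:
        return "draw"
    return "win" if BEATS[player] == opponent else "lose"
```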

Technology Stack

Built with Modern Tools

YOLOv8 · Python · OpenCV · PyTorch · NumPy

Project Roadmap

What’s Next

  1. Integrate real-time gameplay logic
  2. Add a visualization dashboard with Next.js
  3. Enhance model performance for angled or low-light gestures
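
One simple starting point for the low-light item is to brighten frames before inference. This sketch uses gamma correction via a NumPy lookup table; it is a generic preprocessing technique, not something the project already does, and the default `gamma` value is an assumption to tune.

```python
import numpy as np

def gamma_correct(frame: np.ndarray, gamma: float = 0.6) -> np.ndarray:
    """Brighten a uint8 image; gamma < 1 lifts dark regions."""
    # Precompute a 256-entry lookup table, then apply it per pixel.
    lut = ((np.arange(256) / 255.0) ** gamma * 255.0).astype(np.uint8)
    return lut[frame]
```

The corrected frame keeps its shape and dtype, so it can be passed straight into the detection loop in place of the raw camera frame.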