🧠 gowdaman-student-emotion-detection

Model description

This model is a fine-tuned version of trpakov/vit-face-expression, adapted to gauge student engagement and comprehension during online or in-person educational sessions.

📌 Model Overview

  • Model Type: Vision Transformer (ViT)
  • Model Size: 85.8M parameters (F32, Safetensors)
  • Base Model: trpakov/vit-face-expression
  • Task: Binary emotion classification for educational sentiment analysis
  • Input: Student face image (RGB)
  • Output Classes:
    • Understand: The student appears to understand the concept.
    • Not Understand: The student appears to be confused or not following the lesson.

🎯 Use Case

This model is designed for automatic sentiment detection in educational environments, helping instructors evaluate real-time or post-session student engagement by analyzing facial expressions.

It is ideal for:

  • Online course engagement monitoring
  • Intelligent Learning Management Systems (LMS)
  • Post-lecture video analysis for feedback and insights

🧪 Training Details

  • Fine-tuned on a custom dataset of student facial expressions captured during course sessions
  • Labels were manually annotated as Understand or Not Understand based on visual indicators of comprehension

📥 Input Format

  • Image: A single student facial image (preferably frontal and well-lit)
  • Size: Automatically resized to match the input size expected by the ViT model
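Because resizing and normalization are handled automatically, you normally never do this step by hand. Still, as a minimal pure-Python sketch of what the image processor does, assuming common ViT defaults (224×224 input, per-channel mean and std of 0.5); the authoritative values live in the checkpoint's preprocessor config:

```python
# Sketch of ViT-style preprocessing: nearest-neighbor resize to 224x224,
# scale pixel values to [0, 1], then normalize per channel.
# Size, mean, and std are assumptions, not values read from this checkpoint.

def resize_nearest(image, size=224):
    """image: rows x cols of (r, g, b) tuples; nearest-neighbor resize."""
    h, w = len(image), len(image[0])
    return [
        [image[r * h // size][c * w // size] for c in range(size)]
        for r in range(size)
    ]

def normalize(image, mean=0.5, std=0.5):
    """Scale each channel to [0, 1], then shift/scale to roughly [-1, 1]."""
    return [
        [[((ch / 255.0) - mean) / std for ch in px] for px in row]
        for row in image
    ]

# Dummy 480x640 mid-grey "frame" standing in for a student face image
frame = [[(128, 128, 128)] * 640 for _ in range(480)]
tensorish = normalize(resize_nearest(frame))
print(len(tensorish), len(tensorish[0]), len(tensorish[0][0]))  # 224 224 3
```

In practice the checkpoint's own image processor (loaded alongside the model) performs the equivalent of these two functions with the exact values stored in its config.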

📤 Output

{
  "label": "Understand",
  "confidence": 0.87
}
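The label/confidence pair above is produced by a softmax over the model's two output logits. A minimal sketch of that postprocessing step, assuming a hypothetical id-to-label mapping (the real one is stored in the checkpoint's config as `id2label`):

```python
import math

# Hypothetical class-index-to-label mapping; the checkpoint's config
# (id2label) is the source of truth for the actual ordering.
ID2LABEL = {0: "Understand", 1: "Not Understand"}

def postprocess(logits):
    """Turn raw two-class logits into the {'label', 'confidence'} dict
    shown above, via a softmax over the logits."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return {"label": ID2LABEL[best], "confidence": round(probs[best], 2)}

print(postprocess([2.1, 0.2]))  # {'label': 'Understand', 'confidence': 0.87}
```

The `confidence` field is simply the softmax probability of the winning class, so the two class probabilities always sum to 1.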