# gowdaman-student-emotion-detection
## Model description

This model is a fine-tuned version of trpakov/vit-face-expression, adapted specifically for analyzing student engagement and comprehension during online or in-person educational sessions.
## Model Overview

- Model Type: Vision Transformer (ViT)
- Base Model: trpakov/vit-face-expression
- Task: Binary emotion classification for educational sentiment analysis
- Input: Student face image (RGB)
- Output Classes:
  - `Understand`: the student appears to understand the concept.
  - `Not Understand`: the student appears to be confused or not following the lesson.
## Use Case

This model is designed for automatic sentiment detection in educational environments, helping instructors evaluate real-time or post-session student engagement by analyzing facial expressions.
It is ideal for:
- Online course engagement monitoring
- Intelligent Learning Management Systems (LMS)
- Post-lecture video analysis for feedback and insights
## Training Details

- Fine-tuned on a custom dataset of student facial expressions captured during course sessions
- Labels were manually annotated as `Understand` or `Not Understand` based on visual indicators of comprehension
## Input Format

- Image: a single student facial image (preferably frontal and well-lit)
- Size: automatically resized to match the input size expected by the ViT model
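ViT-based models such as the base trpakov/vit-face-expression typically take 224×224 RGB inputs; in practice the model's image processor performs the resize, but the step can be sketched as follows (the function name and nearest-neighbour strategy are illustrative, not the processor's exact algorithm):

```python
import numpy as np

def resize_to_vit_input(image: np.ndarray, size: int = 224) -> np.ndarray:
    """Nearest-neighbour resize of an (H, W, 3) RGB array to (size, size, 3)."""
    h, w, _ = image.shape
    rows = np.arange(size) * h // size  # source row for each output row
    cols = np.arange(size) * w // size  # source column for each output column
    return image[rows][:, cols]

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy webcam frame
resized = resize_to_vit_input(frame)
print(resized.shape)  # (224, 224, 3)
```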
π€ Output
{
"label": "Understand",
"confidence": 0.87
}
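Converting the model's two raw logits into this label/confidence pair is a standard softmax-and-argmax step. A minimal sketch, assuming the class order `["Understand", "Not Understand"]` (the actual order is defined by the model's `id2label` config):

```python
import math

LABELS = ["Understand", "Not Understand"]  # assumed id-to-label order

def postprocess(logits):
    """Softmax the two raw logits and return the {'label', 'confidence'} output."""
    exps = [math.exp(x - max(logits)) for x in logits]
    probs = [e / sum(exps) for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return {"label": LABELS[best], "confidence": round(probs[best], 2)}

print(postprocess([2.1, 0.2]))  # {'label': 'Understand', 'confidence': 0.87}
```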