# 🧠 Valence Regressor (MLP)

A lightweight MLP trained to predict valence from frame-level emotion probabilities (the 7 basic emotions). Optimized for low-expressiveness, imbalanced datasets such as research interviews or controlled lab experiments.
## 📊 Model Overview

- Input: 7 emotion probabilities (Neutral, Happy, Sad, Angry, Surprised, Scared, Disgusted)
- Output: Continuous valence score ∈ [-1, 1]
- Architecture:
  - Input → Linear(7 → 64) → ReLU
  - Linear(64 → 32) → ReLU
  - Linear(32 → 1) → Output
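The architecture above can be sketched in PyTorch as follows. The class name `MLPRegressor` matches the usage example in this card, but the attribute name `net` and other details are assumptions; the released checkpoint may use different parameter names.

```python
import torch
import torch.nn as nn


class MLPRegressor(nn.Module):
    """Sketch of the 7 -> 64 -> 32 -> 1 MLP described above."""

    def __init__(self, input_size: int = 7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_size, 64),
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, 1),  # single continuous valence output
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)
```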
## 🏋️ Training Details
- Dataset: Normalized per-frame predictions from an FER model
- Labels: Valence values derived from emotion vectors (not manually annotated)
- Task: Regression
- Epochs: 20
- Optimizer: Adam
- Loss: MSE (Mean Squared Error)
Note: Sequence dynamics were not used; the model was trained on static frame-level data.
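A minimal training loop matching these settings (Adam, MSE loss, 20 epochs) could look like the sketch below. The data here is random placeholder data, and hyperparameters such as the learning rate are assumptions not stated in this card.

```python
import torch
import torch.nn as nn

# Placeholder data: 7 emotion probabilities per frame, valence target in [-1, 1].
X = torch.softmax(torch.randn(256, 7), dim=1)   # stand-in for FER probabilities
y = torch.rand(256, 1) * 2 - 1                  # stand-in for derived valence labels

model = nn.Sequential(
    nn.Linear(7, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr is an assumption
loss_fn = nn.MSELoss()

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
```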
## 📈 Evaluation

| Set | MAE | RMSE | R² |
|---|---|---|---|
| Validation | ≈ 0.018 | 0.033 | 0.9928 |
| Test | – | 0.00035 | 0.9922 |
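For reference, MAE, RMSE, and R² can be computed from predictions as follows. This is a generic sketch, not the authors' evaluation script.

```python
import torch


def regression_metrics(pred: torch.Tensor, target: torch.Tensor):
    """Return (MAE, RMSE, R^2) for 1-D prediction/target tensors."""
    err = pred - target
    mae = err.abs().mean()
    rmse = (err ** 2).mean().sqrt()
    ss_res = (err ** 2).sum()
    ss_tot = ((target - target.mean()) ** 2).sum()
    r2 = 1 - ss_res / ss_tot
    return mae.item(), rmse.item(), r2.item()
```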
## ⚠️ Notes & Limitations
- Labels were generated from emotion distributions, not from ground-truth manual annotations.
- Best suited for:
  - Low-expressiveness environments (e.g. astronauts, patients, lab participants)
  - Scenarios with few facial changes and class imbalance
- Not ideal for:
  - Highly expressive, spontaneous data (e.g. movies, TikTok videos)
  - Multimodal fusion without proper normalization
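Since the model expects the 7 inputs to form a probability distribution, a simple renormalization step before fusing with other modalities could look like this. This is an illustrative sketch, not part of the released code.

```python
import torch


def normalize_emotion_probs(scores: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Clamp scores to be non-negative and renormalize each row to sum to 1."""
    scores = scores.clamp(min=0.0)
    return scores / (scores.sum(dim=-1, keepdim=True) + eps)
```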
## 🧩 Use Cases
- Affective computing
- Human-computer interaction (HCI)
- Emotion-aware agents
- Cognitive workload estimation
- Social robotics & soft biometrics
## 💡 Example Usage

```python
import torch

from model import MLPRegressor  # define as per training (see architecture above)

model = MLPRegressor(input_size=7)
model.load_state_dict(torch.load("pytorch_model.bin", map_location="cpu"))
model.eval()

# One frame of FER output, shape [batch_size, 7]:
# (Neutral, Happy, Sad, Angry, Surprised, Scared, Disgusted)
emotion_probs = torch.tensor([[0.01, 0.95, 0.02, 0.01, 0.005, 0.003, 0.001]])

with torch.no_grad():
    valence = model(emotion_probs)

print(valence.item())  # e.g. 0.93
```