Deep Evidential Action Recognition
Wentao Bao, Qi Yu, Yu Kong
https://arxiv.org/abs/2107.10161 | https://www.rit.edu/actionlab/dear
The authors propose the Deep Evidential Action Recognition (DEAR) method to recognize actions in an open world. Specifically, they formulate action recognition from the evidential deep learning (EDL) perspective and propose a novel model calibration method to regularize EDL training. In addition, to mitigate the static bias of video representations, they propose a plug-and-play module that debiases the learned representation through contrastive learning. Trained on the UCF-101 dataset, the DEAR model achieves significant and consistent performance gains across multiple action recognition backbones, i.e., I3D, TSM, SlowFast, and TPN, with HMDB-51 or MiT-v2 as the unknown dataset.
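A minimal sketch of the EDL formulation described above, following the usual Dirichlet-evidence setup (Sensoy et al., 2018); the ReLU evidence function, the loss variant, and the toy 101-class head are illustrative assumptions, not the authors' exact implementation:

```python
import torch
import torch.nn.functional as F

def edl_log_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """EDL negative log expected likelihood over Dirichlet parameters."""
    evidence = F.relu(logits)                   # non-negative evidence per class
    alpha = evidence + 1.0                      # Dirichlet concentration parameters
    strength = alpha.sum(dim=1, keepdim=True)   # S = sum_k alpha_k
    y = F.one_hot(labels, num_classes=logits.size(1)).float()
    loss = (y * (torch.log(strength) - torch.log(alpha))).sum(dim=1)
    return loss.mean()

def predict_with_uncertainty(logits: torch.Tensor):
    """Expected class probabilities and vacuity uncertainty u = K / S."""
    alpha = F.relu(logits) + 1.0
    strength = alpha.sum(dim=1, keepdim=True)
    probs = alpha / strength                    # Dirichlet mean as class probabilities
    uncertainty = logits.size(1) / strength.squeeze(1)  # high vacuity => likely unknown action
    return probs, uncertainty

# Toy usage with random clip logits standing in for an I3D/TSM/SlowFast/TPN head.
logits = torch.randn(4, 101)                    # 4 clips, 101 UCF-101 classes
labels = torch.randint(0, 101, (4,))
print(edl_log_loss(logits, labels))
probs, u = predict_with_uncertainty(logits)
print(probs.shape, u)
```

At test time, the vacuity score can be thresholded to reject clips from unknown classes (e.g., HMDB-51 or MiT-v2 samples) while retaining the closed-set prediction for known UCF-101 actions.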
Input Variables: UCF-101 dataset, MMAction2 codebase
Output Variables: Actions defined in the UCF-101 dataset
Metrics to Monitor
Statistical: Somers' D | Accuracy | Precision and Recall | Confusion Matrix | F1 Score | ROC and AUC | Prevalence | Detection Rate | Balanced Accuracy | Cohen's Kappa | Concordance | Gini Coefficient | KS Statistic | Youden's J Index
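As an illustration (not part of the DEAR release), several of these closed-set metrics could be computed with scikit-learn; the arrays below are toy placeholders for ground-truth labels, predicted labels, and per-class scores:

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    balanced_accuracy_score,
    cohen_kappa_score,
    confusion_matrix,
    f1_score,
    precision_recall_fscore_support,
    roc_auc_score,
)

y_true = np.array([0, 2, 1, 0, 2, 1])   # ground-truth action ids
y_pred = np.array([0, 2, 2, 0, 2, 1])   # predicted action ids
y_score = np.array([                    # per-class scores (rows sum to 1)
    [0.8, 0.1, 0.1],
    [0.1, 0.2, 0.7],
    [0.2, 0.3, 0.5],
    [0.7, 0.2, 0.1],
    [0.1, 0.1, 0.8],
    [0.2, 0.6, 0.2],
])

print("Accuracy:", accuracy_score(y_true, y_pred))
print("Balanced accuracy:", balanced_accuracy_score(y_true, y_pred))
print("Macro F1:", f1_score(y_true, y_pred, average="macro"))
print("Cohen's kappa:", cohen_kappa_score(y_true, y_pred))
print("Confusion matrix:\n", confusion_matrix(y_true, y_pred))
prec, rec, _, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print("Precision/Recall (macro):", prec, rec)
print("ROC AUC (one-vs-rest):", roc_auc_score(y_true, y_score, multi_class="ovr"))
```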
Infrastructure: Log Bytes | Logging/User/IAMPolicy | Logging/User/VPN | CPU Utilization | Memory Usage | Error Count | Prediction Count | Prediction Latencies | Private Endpoint Prediction Latencies | Private Endpoint Response Count
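These names resemble Google Cloud / Vertex AI monitoring metrics; assuming that context, a hedged sketch (not part of the DEAR release) of pulling one of them with the Cloud Monitoring client is shown below, where the project id and the exact metric type string are assumptions to verify against your own deployment:

```python
import time
from google.cloud import monitoring_v3

PROJECT_ID = "my-project"  # placeholder project id

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"start_time": {"seconds": now - 3600}, "end_time": {"seconds": now}}
)

# Assumed metric type for online prediction latencies; check the metrics
# actually exposed by your endpoint before relying on this string.
metric_filter = (
    'metric.type = "aiplatform.googleapis.com/prediction/online/prediction_latencies"'
)

series = client.list_time_series(
    request={
        "name": f"projects/{PROJECT_ID}",
        "filter": metric_filter,
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for ts in series:
    for point in ts.points:
        print(point.interval.end_time, point.value)
```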
Visit Model: github.com